But then again, maybe what I think of as being "production ready" is different from what other people think.
In contrast, Parrot 1.0 was described as a "stable API for language developers", and in hindsight I think we can all agree that this wasn't the reality of that release either. I'm not trying to be disparaging here, but I do want to take a critical look at these two releases and try to see what is going on with Parrot.
1.0 was stable in the sense that we had the deprecation policy and could not, in good conscience, pull the rug out from under our compiler devs without adequate warning. That said, we were making pretty significant changes to the API (defined loosely as the ops, PMCs, command-line options, C embedding API, and C extending API) every opportunity we got. I have also said before that the API we had stabilized at that time was hardly golden. There were plenty of warts, and the problem with that deprecation policy is that those warts are around to stay. We can't just fix a problem or clean up a bad interface; we are stuck with it until the next deprecation point. And that's only if we put in a deprecation notice sufficiently early. In some cases HLL developers, most notably Patrick Michaud from the Rakudo group, were urging us to make changes faster than the deprecation policy allowed, because these warts were too taxing to work around for months on end.
In hindsight, I think we can better label 1.0 not as a stable API, but as a critical maturation point for the community. 1.0 was a coming of age. It was a time when the community got its act together, put some policy in place, outlined our priorities, and started making promises to people. Say whatever negative things you want about our deprecation policy (everybody knows I do), but at least we have a policy.
The magic 1.0 (and now "2.0") numbers are a little bit misleading because not a lot of people understand the way we do numbering. People think 1.0 means it is "complete", when any of the Parrot devs will tell you that this is not the case. We number and release according to the calendar, not according to the state of the repo. Similarly, we released 2.0 because the calendar said to, not because we had implemented any specific feature or reached any specific milestone. The tagline "production ready" was only a vague motivation and not the final result.
That said, what was the result of 2.0?
1.0, as we mentioned, was a critical maturation point for the community. 2.0, then, I think provides the stable API that we would have liked to have had 9 months ago. This is not to say that all our warts have washed away, but the API is in much better condition now than it was when 1.0 came out of the gate. Since the 1.0 release we have done a lot of cleanup, refactoring, and improvement of various systems. It's worth mentioning that we haven't added a whole ton of new features during that time. It really has been 9 months of non-stop cleanup, and because of that effort we are in a position that I would be happy to call "stable".
Now where are we headed? What will 3.0 look like in a year?
In the run-up to 2.0 we've done a lot of cleanup. Systems have been improved, refactored, and encapsulated. We've improved naming conventions and code style. All the groundwork has been laid to start adding some of the blockbuster new features that we'll need to really get Parrot accepted into the world.
When I think "production ready", I think of a few key concepts: Robustness, Scalability, Performance. Business simply aren't going to employ software that is buggy, cannot adapt to different sized tasks, and makes inefficient use of expensive hardware. We can call a piece of software "production ready" all day long and pat ourselves on the back over how great Parrot seems to us, but if we haven't satisfied these three principals nobody is going to use it. So, how do we get these properties? What do we need to do between now and 3.0 to truely make Parrot business-ready?
In terms of robustness I think we really need to focus on two things: cleaning up and documenting our external API, and improving the comprehensiveness of our test suite.
For scalability, we absolutely need to rein in our memory usage, and I think we also need to significantly improve our threading implementation.
For performance I could rattle off a laundry list of things we need to improve, but I will limit my list to only a few:
- Improve the garbage collector
- Add [pluggable] optimizations to PCT (see the sketch below)
- Enable PCT to output bytecode directly
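To make that second bullet concrete, here is a minimal sketch of what one pluggable optimization pass, constant folding, might look like over a compiler's syntax tree. This is illustrative Python, not PCT's real API: the Op and Val node classes, the FOLDABLE table, and the fold_constants entry point are all invented for this example; PCT's actual PAST nodes and any future plugin hook would look different.

```python
# Hypothetical sketch of a pluggable constant-folding pass. The Op/Val
# node classes and fold_constants() are invented for illustration and
# are NOT PCT's real PAST API.
import operator

# Operators that are safe to evaluate at compile time.
FOLDABLE = {'add': operator.add, 'sub': operator.sub, 'mul': operator.mul}

class Op:
    """An operator node with a name and child nodes."""
    def __init__(self, name, *children):
        self.name, self.children = name, list(children)

class Val:
    """A literal constant node."""
    def __init__(self, value):
        self.value = value

def fold_constants(node):
    """Bottom-up pass: fold the child subtrees first, then collapse any
    operator whose operands are all literals into a single literal."""
    if isinstance(node, Op):
        node.children = [fold_constants(c) for c in node.children]
        if node.name in FOLDABLE and all(isinstance(c, Val) for c in node.children):
            left, right = node.children
            return Val(FOLDABLE[node.name](left.value, right.value))
    return node

# (2 * 3) + 5 folds down to the single literal 11 before code generation.
tree = fold_constants(Op('add', Op('mul', Val(2), Val(3)), Val(5)))
assert isinstance(tree, Val) and tree.value == 11
```

The appeal of making passes like this pluggable is that each HLL built on PCT could register only the transformations that are safe for its language's semantics, rather than getting one fixed pipeline.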
If we can address the three priorities of Robustness, Scalability, and Performance, I think that 3.0 can truly be the production-ready release that we've been saying 2.0 was going to be. Because of all the wonderful cleanup work we've been doing in the past few months, I think the stage is set to really get to work on these things and make that goal happen.
We may have different perceptions of what robustness, scalability, and performance mean for business, but if reliability is not a part of that, then for businesses at this level it is simply less worth the investment to initiate the learning programme.
In my perception, looking at Rakudo, there are a lot of features to try out already, but something is still missing.
It is not performance; we understand that we can learn without everything working at top speed.
We need reliable interfaces, especially for database and webserver interaction. Where are the proof-of-concept prototypes? I know there are some, but they are damn well hidden away, and they need more attention.
I know the HLL work seems to be bogging things down, and it's long overdue, but if you can just give the general interfaces some attention before the performance issues, then we can start tinkering and learning our way around, knowing the engine is being optimized as we learn to drive faster safely.
Thanks. Hope I was not out of line.
If I was, then I hope it helps.