Blog Closed

This blog has moved to Github. This page will not be updated and is not open for comments. Please go to the new site for updated content.

Friday, June 25, 2010

EmbedVideo Version 1.0 Released

I'm pleased to announce the 1.0 release of the EmbedVideo extension for MediaWiki.

EmbedVideo is a highly capable and configurable extension for embedding flash video content from hosting sites such as YouTube. The extension is being developed on Github and is available for download from there as well. Information is also available on the extension page at

Major new features include:
  • Completely rewritten code base to follow current MediaWiki best practices
  • i18n internationalization support
  • Support for new hosting services
  • Backwards-compatibility support for EmbedVideoPlus
  • Thumbnailing, floating alignment, and attached descriptions
  • Support for custom embedding clauses
  • Support for per-service default width and width/height ratios
Contributions and patches, especially those that add support for new video hosting services, will be much appreciated.

Wednesday, June 23, 2010

Parrot-Linear-Algebra: Change of Course

I've added a lot of functionality to PLA in the last few days, including a large boost to the test suite this evening. Since Kakapo doesn't have a release that targets Parrot 2.3.0, I decided to refocus my efforts on Parrot 2.6.0. However, with all the changes in Parrot since 2.3.0 came out, the upgrade path was much harder than I anticipated. Luckily I have some superstars like darbelo and NotFound on my side; otherwise I would still be fighting with things.

After talking to darbelo I've decided PLA is not going to attempt to track supported releases, at least not for normal development. It's too much of a pain in the ass, and nobody else is doing it. Sure it's best practice, but it's the kind of thing the whole ecosystem really needs to do together. If the HLL compilers are tracking Parrot HEAD because of a desperate need for new features and improved performance, then it doesn't make any sense for libraries not to be keeping pace.

Darbelo made one particularly good point:

If we track HEAD then we'll run on 2.6 when it's released. We branch on the release day and do whatever 'release engineering' we need to do on the branch and tag a release when we're done.

That's it, in a nutshell: If we track trunk closely enough, we will always be ready with a release following Parrot's release. Spend a few days to prepare the necessary accouterments, then cut a tag and continue on with normal development. Our releases can be tied to Parrot's supported releases, but our ongoing development will target trunk as closely as possible. I don't anticipate that we will want to cut a PLA release every time Parrot does, but if we follow a 4-month schedule and follow supported releases that should be just fine. At least, fine for now.

PLA is going to track Parrot trunk and hopefully be prepared to cut a release following Parrot 2.6.0. That gives us about 4 weeks to get all our ducks in a row (and our Kakapos too!).

Tuesday, June 22, 2010

BLAS, LAPACK, and Parrot-Linear-Algebra

Last night I got back to work on Parrot-Linear-Algebra, my matrix package for Parrot. I'm really starting to get happy with the functionality it provides and am looking to start moving the project to the next level of functionality and usability.

I hadn't touched it in a while, for a variety of reasons. Austin Hastings has been very busy with other projects and hasn't cut an "official" release of Kakapo that works with the last supported Parrot release, 2.3. I've applied some hotfixes that make Kakapo build and pass tests on 2.3, but that's not quite good enough. When I make a PLA release I don't want "clone the Kakapo repository from Gitorious" in the instructions, because then the instructions get out of date when Kakapo is updated to be incompatible. What I want is to say "Download Kakapo version X from this URL", which will be invariant.

Last night I added some prototype new features to the NumMatrix2D PMC, the basic numeric matrix type. I'm going to mirror the majority of those changes to the two other general-purpose types; ComplexMatrix2D and PMCMatrix2D. I added methods to do basic elementary row operations (swap two rows, add a multiple of one row to another, and multiply the contents of a row by a non-zero constant), and used those operations to put together an example program to calculate both the Row Echelon and Reduced Row Echelon forms of a matrix. Those methods, combined with block manipulation methods I added previously and the new GEMM method I added last night as well, create a basis for us to create a comprehensive library of linear algebra routines.

But that brings me to my next concern: How to implement the remainder of the fundamental algorithms a linear algebra package is going to want to provide? Obviously I would love to wrap the LAPACK library up and call the routines it provides directly. LAPACK provides a huge number of routines for doing things like singular value decomposition, QR, LU, and other types of decomposition, solving the eigenproblem, solving systems of linear equations, etc. In fact, LAPACK provides almost all of the functionality I would want PLA to have in the next few months.

The problem, however, is that LAPACK is not nearly as accessible from C code as I would like. In fact, there is no "standard" for C bindings to the library, and the several lame attempts that are available tend to be incompatible with each other. The reference version, and the only reliably-available version, of LAPACK is written in FORTRAN. The standard CLAPACK library is just a machine-led translation of the FORTRAN source code, with a few points in the code needing to be manually tweaked after conversion. It has a few problems, including the fact that every single function parameter (even basic ints and floats) must be passed by reference. The ATLAS library, which PLA currently prefers for its BLAS bindings, provides some LAPACK C bindings of its own, but only a very small subset of all LAPACK functions is covered, and the ones it does have hardly support all the operations PLA is going to want to provide.

CLAPACK, being more or less the standard, could be made to work, except that it doesn't come as any sort of user-friendly package. There are no CLAPACK packages for Debian (Ubuntu) or RedHat that I have seen, and that raises the barrier to entry significantly, something that I want to avoid.

I could use the LAPACK library directly, and twist my code to match the FORTRAN calling conventions and data alignments. That's not unthinkable, though it would require some more work on the part of PLA developers than any other solution.

I could skip LAPACK entirely, and instead rely on something like GSL, which has proper C bindings built-in. GSL does provide several important matrix operations and decompositions, though I would need to do more research into the capabilities of that library. What I don't want is to lose focus and have this project grow to try to become a general wrapper for all of GSL. I want PLA to stay focused on linear algebra only. We could maybe create sister projects to encapsulate more of the GSL functionality and use PLA under the hood to implement the basic data structures, of course.

Maybe we will get lucky, and the new NCI system will have support for calling FORTRAN routines from shared libraries. I don't think we can just anticipate this kind of functionality, at least not during the GSoC program.

A "nuclear" option, perhaps, would be to not rely on any external library for these things and instead brew up all the basics myself. I'm not against such work per se, but it would be a huge investment in time, and the case cannot be made that it's the best use of my limited development time. I did put together a Gauss-Jordan elimination routine last night, and it wouldn't be too much effort to put together a generalized QR decomposition algorithm and a singular-value decomposition algorithm, followed by routines to calculate eigenvalues, eigenvectors, and matrix inverses from those things. If PLA had a larger team of active developers who wanted to participate in this kind of work it would be an easier decision to make, but if it's primarily me and a handful of occasional contributors, this really isn't a doable task.

My plan for PLA in the near future is this: After the 2.6 release I want to push for a stable release of Kakapo, and then using that I want to cut a release of PLA. From that point forward, PLA will target the 2.6 version of Parrot at least until 2.9 and maybe later. The first release of PLA is going to provide three basic matrix types: A 2D matrix of floats, a 2D matrix of complex numbers, and a 2D matrix of PMCs. These three matrix types will have a large base of common functionality and each type will have some additional functionality too, as required by that type. Everything provided will be well-tested (including error cases, which I haven't exercised nearly enough so far) and some example programs will be provided in both PIR and NQP.

There are a few projects I am envisioning to start in the future that will rely on PLA, so I really am hoping to create a nice, stable, functional release within the next few months. I'll post more information about any other projects as they arise.

Saturday, June 19, 2010

Parrot's Deprecation Policy

Parrot user kthakore sent a very interesting email to the Parrot developers list this week to criticize the current deprecation policy. After taking some time off from development, he returned to find that his extension no longer built on Parrot HEAD and he couldn't quite figure out why. There was a deprecation notice for the feature that changed, but the notice was so vaguely worded and so short on explanation that it offered very little help to the beleaguered extension developer.

When we're talking about code, there are only a handful of things that we really need to worry about: utility, performance, security, stability and maintainability. Well-written code, for the most part, can satisfy all these requirements. Sometimes trade-offs are made, such as trading a certain amount of maintainability to optimize for performance, but in those cases some well-placed comments can help alleviate or minimize any regressions. Code, while technical and deep, is often pretty easy: we follow rules and policies, make changes, measure results, wash, rinse, repeat.

Not so easy are the softer sides of open source software: the people. People come in varieties: core developers, extension developers, testers, documenters, end-users and well-wishers of varying levels of technical competency. Keeping all these groups of people working together nicely and happily can be quite a difficult challenge, and there is likely no way for all groups to be 100% happy 100% of the time. The tradeoffs here are harder to understand, harder to manage, and a misstep can have huge negative consequences for the project.

I've long been a detractor of Parrot's current deprecation policy: It's too rigid, too narrow, and doesn't really do anything to help the people who need helping. It also doesn't really take into account Parrot's current stage in the development life cycle. Parrot, as I've mentioned on occasion, has plenty of warts and has suffered many growing pains. It is folly to think that we should be making blanket guarantees about what will or will not be present in various releases, or how quickly we can or cannot make changes.

When there's a problem or a bug or a misfeature, people want those things fixed quickly. A good example of this was the PCC refactors a few months ago. Even though those refactors created some backwards-incompatible behavior, our users (whom the deprecation policy was at least nominally designed to protect) were trying to rush them through. Rakudo developers specifically were blocking on the PCC improvements, and having to wait for months and months would have been bad for them. The PCC refactors were high priority and high need.

The deprecation of several core ops and their conversion to dynops recently is an example from the other end of the spectrum (and the source of kthakore's frustration). While we followed the letter of the deprecation policy, these things didn't need to be removed with haste, and the removal created a bit of hassle for users who weren't expecting it. I'm not saying they shouldn't have been removed (I'm always a proponent of removing cruft), but it does expose some shortcomings of our deprecation policy and process.

What we have is a series of conflicting motivations, even for individual groups. Consider:
  • Core Developers: Core developers want to remove bad features, want not to maintain bad features, want to add new good features and want to create the best software for the users. Developers need to work on fun new features, but also need their work to be used and appreciated. Take away either of those pieces, and many volunteer developers will simply walk away. Moving too quickly alienates the users and creates a huge disconnect between what the developers are developing and what the users are actually using. Moving too slowly is boring and developers start to leave for greener pastures.
  • Extension Developers: Want to add new extensions to the Parrot ecosystem for the benefit of users, but have to deal with binary compatibility at the C level which changes much more frequently than the "normal" user-visible interfaces like PIR. Parrot has a lousy extension API right now, so necessary improvements there require extension developers to stay up-to-date with current core development. At the same time, all sorts of other changes break things in new versions, even when necessary features are fixed.
  • Users: For stability, it's good for users not to upgrade. To fix bugs and get new features, it's good to upgrade. Upgrading brings rewards but also hassles: Core features disappear, new bugs are added, and extensions are broken by binary incompatibilities. Upgrading Parrot means needing to upgrade (or, at least, rebuild) extensions too.
It's difficult to tell an extension developer to stick to the supported releases because the extending API is so lousy and incomplete. Having to wait 3 months or more for a necessary change is hard for these small projects, and their developers can quickly lose interest and motivation. Until we reach some basic level of usability, we have to expect that extension developers are going to be tracking Parrot HEAD more or less closely. I think it's a little disingenuous to fully expect that developers will be tracking the repository HEAD while our deprecation policy tells them to track only the stable releases, and you really have to ask who exactly that policy is designed to protect in this case.

We really need to account for different levels of user and different levels of feature. End-users shouldn't be using Parrot directly. Joe Schmoe at home is not and definitely should not be writing his tools in PIR. If he's writing his code in a higher-level language like Rakudo Perl 6, NQP, or Winxed, he's buffered from disruptive changes made to the Parrot core VM. It's the developers of HLL compilers and extensions that need to worry about these kinds of changes.

Likewise, we need to differentiate between issues of multiple severities. When a big issue is blocking development in extensions and HLL compilers, it behooves Parrot to waive the mandatory wait period and make those fixes post haste. Conversely, a change which is not necessary and would cause a block or a slow-down for extension developers should be put off for longer and made with more care.

What we need, in a nutshell, is a policy that actually does what we claim the current deprecation policy does now: Protect our users from disruptive changes, but also enable forward progress to be made without being forever tied to every bad decision ever made. I suggest these changes to policy and process:

  1. A deprecation notice should be added before a disruptive change is made
  2. The deadline for the deprecation shouldn't be blindly tied to the next stable release, but intelligently selected with input from the affected parties. We do have weekly planning meetings where these kinds of things can be decided. If we need to regularly schedule additional meetings with other parties (HLL compiler devs, extension devs, etc) we should do that as well.
  3. Deprecations should be well-documented and publicized. Information about the deprecation should include what exactly is changing, how users of those features can work around the changes, and who to contact when/if problems arise.
  4. Information about the deprecation should be sent to the users, not just dumped in DEPRECATED.pod where we expect people to be looking regularly. An email list for this purpose was suggested and I like that idea (other ideas were also suggested that I also like). Any way to send the information directly to the user is a good thing.
  5. Where possible, both old and new versions should be provided simultaneously for a period of time while users transition. This is most important in the C-level API where function wrappers can easily be provided to translate old calls into new ones.
I'm still a big proponent of the idea that the deprecation policy should be opt-in, in the sense that only features that we've put a stamp of approval onto should be covered under the deprecation policy and anything else should not be relied upon. You can use a feature that we haven't approved, but then you're responsible for paying the price when that feature changes or disappears completely. You would also be responsible for sending the core developers feedback about which features you would like to see be added and approved.

Having a deprecation policy is a good thing and a necessary part of a mature software project. However, the policy we have currently fails on a number of counts and requires some serious re-thinking if we want to make it better. I sincerely hope, and I know several other people also hope, that we do make it much better in the future.

Friday, June 18, 2010

ParrotTheory: Locks and Synchronization

I've talked a good amount in the past and recently about threading, so today I'm going to talk about some of the small bits that make threading and concurrency work. I'm thinking that we're also going to need to implement some of these primitives in Parrot eventually, so consider this blog post an extremely verbose way of planning for that.

Let's start the discussion by looking at two small code snippets: a structure definition and a routine that uses that structure:
typedef struct _array {
  int length;
  int *values;
} array;

void push(array *ary, int x) {
  ary->length++;
  ary->values = realloc(ary->values, ary->length * sizeof(int));
  ary->values[ary->length - 1] = x;
}
This isn't nearly as contrived an example as it may seem initially, though I purposefully made the code a little bit naive. It's worth noting here that the idea of a structure which contains both an array and the integer length of that array is very common. It's how Parrot implements its STRING type and several of its array PMC types as well. In fact, the push_int VTABLE method for the ResizableIntegerArray PMC probably looks extremely similar to this example code.

Astute observers, and veterans of threading trench warfare, will see a problem with this code: There are no locks and no concurrency safeguards, so this code isn't thread safe. Let's take a short walkthrough of this code where we have a preemptive thread switch in the middle to another thread also attempting to access this same method on the same object:
  1. We have an array object with length 5 and items {1, 2, 3, 4, 5}
  2. Thread A attempts to push the number 6 to the array, Thread B attempts to push the number 7.
  3. Thread A enters the function. We set length to 6 and realloc to gain a sixth slot. The array now contains values {1, 2, 3, 4, 5, ?}, because the last value isn't initialized by realloc.
  4. At this point, there is a preemptive thread switch and Thread B takes over control.
  5. Thread B enters the function and sets length to 7 and reallocs again. The array now contains values {1, 2, 3, 4, 5, ?, ?}.
  6. Thread B sets the sixth element to 7. The array is now {1, 2, 3, 4, 5, ?, 7}
  7. Thread B gets preempted, Thread A takes over.
  8. In thread A, the value of length is still 7, so [ary->length - 1] is still 6. When we add x to the array, we now have {1, 2, 3, 4, 5, ?, 6}
There are some things we could try here, such as saving the values we need from the structure to local variables, so that even if we preempt in the middle of the function the length field will be an accurate index. Or, we can try to rearrange the order of operations so some errors appear less frequently or less obviously. The real problem here is that we have three operations that need to stay coupled: increasing the known length of the array, reallocating array storage, and assigning an item to that storage. Any break between a set of coupled operations causes a problem. We call these areas of code, where operations are sensitive to cross-thread interference, critical sections.

As another example of a very small critical section, consider this one line of code:
foo = i++;
This seems pretty simple, but in fact it is not. This line of code requires several assembly language instructions, especially when we are talking about a non-optimized build:
  1. Fetch the value of i from memory into a processor register
  2. Copy the value of i to the variable foo
  3. Increment the register containing the value for i
  4. Copy the value of the register back to memory where i is located.
A preemptive thread switch can happen in between any of these steps. Consider the case where we break between steps 2 and 3 to another thread performing the same operation: If i is 5, Thread 1 foo is 5, Thread 2 foo is also 5, and at the end of the code snippet i is only 6, even though we incremented it twice! If this code snippet is, for instance, in an SQL database generating unique integer keys for rows in a particular table, we've just generated non-unique keys and created a world of hurt for ourselves.

To get around these kinds of problems, one solution is to use a synchronization primitive. A synchronization primitive is any of a class of algorithms and objects that are designed to synchronize and limit access to shared resources. In this sense, a resource is anything that multiple threads might want to access: an IO stream, a global variable, a shared pointer, even a sensitive sequence of instructions. Any time we have a critical section of code that is sensitive to sharing we want to find a way to limit access to a finite number of simultaneous threads (usually one). There are several ways to do this.

A mutex (short for "mutual exclusion") is a type of lock object that helps to prevent access to a critical section. To pick a pertinent example, think of a mutex like a basketball. Only the one person in the game with the basketball can do ball things: shoot, pass and dribble. Other players on the team can do other stuff like running, covering, or posting, but they cannot do ball stuff without the ball. You cannot shoot the ball if you do not have the ball, and you cannot pass the ball if you do not have the ball. This is a convention of the sport. If we were playing Calvinball instead, maybe we could do these things without looking preposterous. By convention also, if we as programmers declare that a certain shared resource can only be accessed by a thread (player) with the mutex (ball), then those are the rules for our system (game) and things can move along smoothly. Here's an example of that convention in action:
Mutex *m;

ACQUIRE_MUTEX(m);
// critical section code
RELEASE_MUTEX(m);
The power in this code is that the ACQUIRE_MUTEX() function will attempt to gain ownership of the mutex, and will wait indefinitely for the mutex to become available if some other thread already owns it. ACQUIRE_MUTEX is like waving your arms in the air, shouting "I'm open" until the current ball carrier passes the ball to you. Until you get the ball, you just have to stand there with your arms in the air. Because of that behavior, no two threads can enter the same critical section, assuming of course that the programmer (you) has properly protected that critical section with the mutex. Keep in mind that there is no intrinsic property of the critical section itself that prevents multiple threads from running it simultaneously and corrupting data. The exclusion comes from the proper and pervasive use of locks like our mutex to keep the critical section safe. Here's another example:
Mutex m;

int pop(array* a) {
  ACQUIRE_MUTEX(m);
  int item = a->values[a->length - 1];
  a->length--;
  RELEASE_MUTEX(m);
  return item;
}

void push(array* a, int item) {
  // Oops: no mutex here!
  a->length++;
  a->values[a->length - 1] = item;
}
In this example we can see that we aren't properly using the mutex everywhere, so we can't guarantee that we won't get corrupt data. Multiple threads could just as easily enter the push function simultaneously as could attempt to enter the pop function. If you don't use mutexes everywhere, it's almost as good as not using them anywhere. This is a convention that the coder must decide upon beforehand and follow diligently.

There are multiple ways to implement locks and mutexes. One idea is a spinlock, which attempts to access a flag and enters an endless while-loop until it can. An empty while-loop can be very inefficient on a processor, but if we call a sleep command inside the loop to allow other threads to run while we wait it isn't such a big problem. Spinlocks implemented by the OS inside the kernel event loop can be very efficient indeed. In fact, as a general rule, if the OS implements locking primitives they tend to be much better to use than anything you can write in userspace.

Another type of lock primitive is a semaphore, though it is subtly different. A semaphore allows a finite number of threads to access a finite number of shared resources at a time. Where a normal mutex, like a spinlock, allows only one thread to enter at a time the semaphore may allow one or more. Consider a case where we have five worker threads in a web server, and 100 incoming connections. A semaphore uses a first-come-first-served method to assign incoming connections to available threads. Each incoming connection attempts to access the semaphore. As requests are completed, threads signal their availability and the semaphore assigns the next connection in the list to that thread. A semaphore with only one shared object acts like a normal mutex or spinlock.

The overhead of a lock is the amount of effort it takes to acquire and manage the lock. In a uniprocessor system the lock may be very simple to obtain: First disable interrupts so we cannot be preempted by another thread, check the status of the lock, obtain the lock if it's available, and re-enable interrupts. In a multiprocessor system, especially one with shared memory, the overhead and error-checking involved can be much higher. In these systems the performance gain from using threads can be much higher too, so it's a trade-off.

Granularity is the amount of stuff in your critical section protected by a lock. Coarse granularity means that we have lots of code inside our critical section. This is good because we need fewer locks and therefore experience lower overhead. Plus, it's easier as a programmer to make sure we acquire fewer locks over large swaths of our program. The downside is that the larger our protected critical section is, the more likely other threads are going to be blocked waiting to enter it. This, in turn, can create problems like high latency. Fine granularity is the opposite, where we lock as little code as possible. The upside is that we don't have to worry about multiple threads blocking for long on small bits of code. The downside is that acquiring more locks means more lock overhead, and more programmer effort to implement all the locks consistently and safely. Fine granularity can also lead to deadlock, where multiple threads are stuck waiting for locks that other threads own.

The Python interpreter, as an example, implements a single Global Interpreter Lock, which is a lock to govern the entire Python interpreter. Only one operating system thread can be running the interpreter at once, to prevent corruption of global data. I think new versions of Ruby do this too.

There are other methods of synchronizing access to shared resources. One method is to make all data immutable: if you can't modify data, you can't corrupt it. Since Parrot's strings are immutable, you shouldn't ever need a lock when working with them. You may still need to worry about playing with a mutable container PMC which holds strings, or the mutable registers which point to strings, however.

Parrot is definitely going to want to make use of OS-supplied locks in some fashion. Maybe we want to make a PMC wrapper around system lock primitives, or we want to create some kind of lock manager that uses a single system mutex to distribute a series of immutable tokens to worker threads. The exact details of locking are certainly up for debate, but the fact that we don't want to brew our own should be obvious.

Since locks need to be used consistently for them to be of use at all, Parrot should probably do the locking internally. We probably don't want to apply locks to every single operation, since the common case is a single-threaded program and we don't want to apply the performance penalty of lock overhead to programs which don't need it. If Parrot can identify only those PMCs which are shared, it can apply locks selectively to those PMCs only, limiting overhead to only the places where it is necessary. For instance, if we add a synchronize op:
$P0 = synchronize $P1
We can create some kind of wrapper PMC type whose vtables enter a lock, call the vtable of the synchronized PMC, and then release the lock. In this example, if everybody used $P0 when they wanted to modify $P1, all operations would be safe. The onus would be on the programmer to explicitly mark the PMC as synchronized, of course, and many programmers will probably forget to do that.

Maybe instead of passing PMC references between threads directly we create and pass clones and modify them separately on different threads. Then, when we want our changes from one thread appear in another thread, we would call some kind of propagate op:
propagate thread, obj
This would pause the specified thread, update the object contents, and then unpause the thread. This would be very similar to the message passing that languages like Erlang use (not exactly the same, because Erlang wouldn't pause the recipient for this, but you get the idea).

Maybe we have a system where we only share read-only copies. So thread A would own the PMC, but thread B could get a read-only copy of it. This would completely obviate the need to lock the PMC since only one thread could write to it, but then we need some kind of mechanism where thread B could make modifications back if necessary, or maybe B could gain the writable copy and make A's copy read-only. This system could get very complicated very quickly, however.

We could also avoid most locks if we used a transactional memory system to avoid memory corruption, but that could still add overhead to the single-threaded common case and then we would still want a system of locks for other operations that don't require locking a PMC.

These are only a handful of the many potential options that Parrot has ahead of it, and I can go into greater detail about any of them that people are interested in thinking about. I think Parrot is going to want at least some kind of locking mechanism, so we could start prototyping those things immediately if we wanted. How these mechanisms get implemented and applied within Parrot is obviously the bigger issue that we can ignore for now.

Tuesday, June 8, 2010

Lost Finale: Redux

I wasn't planning to write any more about the disappointing Lost series finale, but I received too many comments on my last post to ignore. I started writing a reply and, as happens so frequently, my reply grew to the size of a full blog post. So I'm posting it here.

Wednesday evenings were a time that I kept blocked off for Lost. Watching that show is what I did on those evenings and now my schedule has a gaping 1-hour hole. Looking back on the past few years of television, I don't feel the kind of fondness I should. Instead, I feel an utter disappointment and emptiness because of the way that damn series ended. 6 years of loyal TV watching is now nothing more than wasted time and bad memories. For 6 years I would tell myself "Sure things are crazy now, but there are more episodes to come and if I follow this through to the end everything else will make sense". Well, we made it to the end. I still have that feeling like the show is a crazy mishmash of weird ideas and disjointed storytelling, but there aren't any more episodes to look forward to for these things to be resolved.

I sincerely can't tell if the writers were simply being lazy and disconnected, tired of the grind for 6 years and already focused on their next big projects. I can't tell if maybe the writers are just bad at writing, not able to resolve, interpret, or make sense of the complex predicaments and scenarios they created. Every time the situation became too complex, every time the scenario became too crazy and unmanageable, they simply abandoned ship and moved on to the next mystery. Night after night, season after season, they wrote themselves into holes that they had either no intention or no capability of writing themselves out of. They introduced new characters and new subplots with no explanation, no development, and no lasting effect on the overarching plot. In fact, I can't say with certainty that there was any single overarching plot to the whole show. They used throwaways to distract the audience from the real plot, and they abandoned so many story elements without so much as a passing apology.

Who would you say was the main antagonist of the show? For 5 seasons, the smoke monster was described as being a protective force, or a "security system" for the island, not its main antagonist until the final season. Maybe you could say Widmore was the antagonist, but he had a tangential effect at best on most of the characters throughout, and later was revealed to be acting on Jacob's behalf (right before he died). Was Ben the antagonist? He certainly appeared so for about 2 seasons, though once we started seeing into his past and his actions we realize that he's a force for the good of the island, even if his ends-justify-the-means modus operandi is a little questionable. And who would be the main protagonist? Jack would appear so, though for the first few seasons he butted heads against Locke and frequently was overshadowed by him. Also, his "go with the flow" attitude in the later seasons is hardly reconcilable with the notion of the series' primary hero. Maybe we could say Locke was the protagonist, but he died halfway through the series.

Let's look at some other characters too. Sawyer was one of the few people to actually escape from the island alive, though he is still the same angry, vengeful conman he was at the beginning of the series. He had some liberating moments early on when Kate learned about his backstory and when he was finally able to kill the man who caused the death of his parents. He even had some moments of heroism after his transformative time in the Dharma Initiative, though the death of Juliet appeared to drive him back to his old self in Season 6. Kate was an interesting and likable character, but we can't really say that she transformed as a character. The role she mostly appeared to play was a catalyst for Jack and Sawyer, and a suspense-building element in their weird love triangle. Sayid's character was interesting because he was constantly being pulled across the line between good and evil, torn between needing to do whatever was necessary, and needing to be a good person. Jin and Sun were basically non-participants, spending the entire show involved primarily with each other. In the beginning their characters were defined by their language barrier and weird marital problems.  As the show progressed they found true love together, but then they died leaving behind little more than an orphaned child and motivation for Kate to catalyze Jack to kill the Man in Black. Desmond was also an interesting character, though he didn't really make any appearance in Season 1, and was missing entirely in Season 5 and the first half of Season 6. In the end, Desmond was used as little more than a tool to help Jack.

It's hard to say that Lost really had any central antagonists, protagonists, or even a single overarching plot. It ended up being a mishmash of unresolved subplots which just happened to feature a common set of characters. Sometimes they seemed to do good, sometimes they seemed to do evil, but the results of their actions were never shown and in the end almost all of them simply died or were forgotten.

In the end it's really all about the audience, and nearly two weeks later the audience that I have talked to is still unhappy. It's worse than the infamous ending to the Sopranos where the screen simply cut to black. That was remarkably unfulfilling for most audience members, but at least some meaning could be drawn from it: Tony Soprano was continuing his life as usual, was still an unremorseful gangster, and after everything that happened to him and his family and his friends he was still carrying on as he had always carried on. The series could have ended at any point with the same effect; you could just as easily have cut out an episode prior or an episode later with no change. After all, we weren't seeing and weren't expecting to see the complete demise of organized crime, or even the salvation of one of its prominent figures. The Sopranos was simply a short window into that life, nothing more and nothing less.

The Lost series finale offered no such message. You can't say that things would continue as they always had. You can't say that the troubled characters were changed for the better. You can't say that anything of importance happened at all.

That's one of my biggest problems with that show. Looking back on it, it's hard to say with certainty that anything happened. The net result, that some people crashed on an island and only a handful of people survived (though, notably, not all of the final survivors were on the original crashed plane), is a pretty boring story. Look back at some of the big subplots: Babies and expectant mothers dying? Not addressed at all. Going back in time to change the future? Didn't happen. Lessons about "what happened, happened" and "dead is dead"? Casually dismissed without any explanation. Greater lessons about good and evil? Mired in too much ambiguity and mystery to draw any meaningful conclusions. Greater lessons about fate and free will? Absolutely ignored.

We could maybe say that the whole island was some crazy nonsense, but the important part was the characters and their development. That would be fine if the whole show was about a group of troubled people who were put in an absurd situation and learned important lessons and ultimately grew to transcend their own problems and limitations. Or maybe the show could have focused on the relativity of "good" and "evil" and shown how a lot of people ended up senselessly dying to protect ideals (especially unverified oral traditions) that ended up being false or irrelevant. That would have been extremely deep, and would have made for a satisfying conclusion, but it was not so. It might also have been nice to compare and contrast the killings of the Temple people by the Man in Black with the killings of the Dharma Initiative by the followers of Jacob. Trust me, there is tons of unmined gold in this show that, instead of being polished and crafted, was ignored in favor of giving the audience a huge piece of coal.

And it really is all about the audience. My other biggest problem with this finale was the complete lack of regard and respect for the audience on the part of the writers. This is a show which thought nothing of naming characters after famous philosophers, theologians, and scientists. This is a show which was filled with obscure cultural and literary references. This is a show which used Latin writing and Egyptian hieroglyphs for the fans to painstakingly translate and interpret. This was a show that appeared to have been targeted towards an intelligent and educated audience. Why, then, were we treated like dogs, so easily distractible by the next shiny object, loud noise, or scrap of food from the table? Why, in the presence of so many fantastic and occasionally absurd scenarios, did the writers simply choose to ignore the story we were following and distract us with the next mystery, the next suspense-building cliffhanger? Were we, the audience, just the observers in the Pearl station, tasked with observing something we did not understand, not realizing that we were the experiment, and we were being observed? What is the lesson here, that if you pretend to treat your audience as if they were intelligent, you can treat them just like every other stupid audience, make a boatload more money, and call it revolutionary TV?

The lesson I learned, ultimately, is that TV is a huge load of shit. Lost over-promised and severely under-delivered, and I've been burned by that. I do like to watch Burn Notice, when I can find it on at a reasonable time. I do like to watch NCIS too, sometimes. But, I absolutely refuse to get invested in any new show the way I invested myself into Lost.

Sunday, June 6, 2010

EmbedVideo Extension Pre-Release

Last Tuesday I got confirmation that I now had a commit bit to the MediaWiki subversion repository. I don't have access to the whole thing; I only needed access to the extensions section. Today I checked in a snapshot of the EmbedVideo extension I've been working on, in preparation for the first release of the extension under new management.

Since I started working on the extension I've made a number of changes, most of them internal. Here's a short list of things I've done:

  1. Broken the extension up into multiple files and created a proper directory structure, following current best practices
  2. Added i18n support, though I've only added English-language messages so far
  3. Added support for the website
  4. Added the ability to float the video left or right, and add a caption
  5. Added an {{#evp:}} tag, to be fully compatible with the EmbedVideoPlus extension syntax
The biggest changes have been the internal cleanup tasks. The code base has been almost entirely rewritten from the ground-up to follow best practices, although there is still a little bit more cleanup work to do.

I'm planning a formal release of the extension to happen sometime within the next two or three weeks. I will be trying to test it against a small handful of MediaWiki versions: probably 1.12, 1.13, 1.14, 1.15, and the betas of 1.16. I'm not sure how I want to handle releases in the future, since I do plan to add new features and support for new sites, but I don't want to spend huge amounts of effort backporting every feature I add to all sorts of old MediaWiki versions. I will probably try to tag revisions for old versions, but will not worry about backporting new features myself. If somebody wants to do the hard work of backporting features, they can submit patches (or ask for a commit bit!) and do it themselves.

I'm not planning to add any major new features before I release in the next few weeks, so from here on out it's all testing and documenting. As soon as it's tested to my liking, I'll put out a new release.

Wednesday, June 2, 2010

ParrotTheory: Green Threads

This summer I'm mentoring GSoC student Chandon in his project to bring Hybrid Threads to Parrot. Today I'm going to talk a little bit about that, and maybe give some information about what Parrot needs to do to properly support a full and robust threading system in the long term. I've written about threads previously on this blog, so I won't cover all the nitty-gritty details again.

At the risk of oversimplifying, there are basically two types of threads to consider: OS threads and green threads. OS threads, or "native threads", are what most people are probably familiar with: these are the threads managed by your operating system kernel. Native threads represent a flow of execution of native machine code on a single processor core, and multiple native threads can run simultaneously on multiple processor cores. On a single core, they run sequentially but in time slices so short it appears to the human observer that things are happening at the same time. This appearance of simultaneity is extremely important for computer users, especially for graphical interfaces, because people don't like waiting and don't like it when the computer "freezes" or "locks up" during periods of heavy computation.

A native thread represents a system context: a separate call stack and a snapshot of the processor state at any given time. By saving the processor state and call stack away, we can switch to a different task and later resume our original task as if nothing had happened.

I don't know where green threads originated, although they were probably most famously implemented in early versions of the JVM. Instead of being managed by the OS, green threads are managed by the VM. Green threads are scheduled onto OS threads in the same way that OS threads are scheduled onto processor hardware: the VM saves its current state into a VM context object, moves to a different task, and then reinvokes that context object to resume the original task.

Green threads are very interesting tools, when they are available. They tend to be extremely inexpensive compared to native OS threads. They are managed by the VM, and if they all execute sequentially on a single OS thread the risk of lock contention and data corruption drops almost to zero. The downside to green threads is that they are all multiplexed onto a single OS thread, so they each get a smaller share of the processor's time.
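The "save the state into a context object and reinvoke it later" idea can be sketched very cheaply in Python, where a generator plays the role of the VM context object. This is only an analogy, not how Parrot represents contexts:

```python
# A generator captures "where we were" in a task, so the VM can
# suspend the task and resume it later with no OS thread involved.
log = []

def task(name, steps):
    for i in range(steps):
        log.append(f"{name} step {i}")
        yield   # save this task's context, hand control back to the VM

t = task("green", 2)
next(t)      # run until the task yields its context
# ...the VM is free to run other green threads here...
next(t)      # reinvoke the saved context to resume the task
print(log)   # ['green step 0', 'green step 1']
```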

To help enforce that green threads in the VM execute on only one OS thread at a time, the VM may implement a global interpreter lock (GIL). Famous examples of this occur in Python and Ruby. A GIL helps with integration with C libraries which are not thread-safe, and also helps to prevent things like callback functions from executing in a different OS thread and causing unprotected data corruption. The downside to the GIL approach is that extra overhead must be spent locking, unlocking, and checking the GIL.
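A rough sketch of the GIL idea (not how CPython or Ruby actually implement theirs): every worker must hold the one global lock while executing interpreter operations, so only one OS thread runs VM code at a time even though several threads exist:

```python
import threading

GIL = threading.Lock()
counter = 0   # stands in for shared interpreter state

def run_ops(n):
    global counter
    for _ in range(n):
        with GIL:            # acquire before touching interpreter state
            counter += 1     # the "op" executes under the lock
        # lock released here, so another thread may run its next op

threads = [threading.Thread(target=run_ops, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: no lost updates, because ops never overlap
```

The per-op acquire/release in the loop is exactly the overhead mentioned above, and it is paid even when only one thread exists.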

Chandon's Hybrid Threads idea tries to combine the two models of OS and green threads. Parrot will maintain a pool of OS "worker" threads, based on user settings and the number of available processor cores. Parrot will then be able to schedule a queue of green threads onto the available OS threads. Yesterday on his blog he actually posted an example of using Parrot's existing continuations to form a system of cooperative green threads. Instead of happening at any time, green thread switches in his example are manually triggered by calling the function th_resched(). If we move the call to th_resched into the interpreter's scheduler object so it can be called when the scheduler runs, we gain a proper preemptive green threads system without too much effort.
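Here is a hedged Python sketch of that cooperative model (the `resched` name is chosen by analogy with th_resched(); Chandon's actual example uses Parrot continuations): tasks voluntarily call a reschedule function, and a round-robin scheduler runs whichever task is next in the queue. Moving the reschedule call into the VM's own scheduler loop is what would turn this cooperative system into a preemptive one:

```python
from collections import deque

run_queue = deque()
trace = []

def resched():
    # In this generator-based sketch, "reschedule" is simply a yield
    # back to the scheduler loop below.
    yield

def worker(name):
    for step in range(2):
        trace.append((name, step))
        yield from resched()   # cooperative switch point

def run():
    # Round-robin over runnable tasks until all are finished.
    while run_queue:
        task = run_queue.popleft()
        try:
            next(task)                 # run until the task calls resched()
            run_queue.append(task)     # still runnable: back of the queue
        except StopIteration:
            pass                       # task finished, drop it

run_queue.extend([worker("A"), worker("B")])
run()
print(trace)   # interleaved: A and B alternate at each resched() call
```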

There are some difficulties with this model, especially since Parrot currently has no mechanism to manage data contention across OS threads, and also considering that each OS thread will have its own interpreter object, which will need to be created at some runtime expense. Parrot is also going to need to add new mechanisms and policies for dealing with data which is considered "global": namespaces, class definition metaobjects, etc.

We're also going to need to start answering some very difficult questions: If a continuation created in interpreter A is passed to interpreter B and invoked, what do we do? If an exception is thrown by interpreter A and is caught by a handler defined in interpreter B, what do we do? If an object's vtable override is preempted by a green thread switch and the object is in an inconsistent state when it is attempted to be accessed in a different green thread, what do we do? How do we even detect when any of these things happen? What mechanism are we going to need to write to allow sharing PMCs between OS threads? Do we use STM again? If so, do we write STM ourselves or try to use a pre-existing and proven library? Considering that we now have immutable strings (which don't need to be locked because they can't be written), do we maybe want to consider a system where shared PMCs are either read-only in other threads or maybe are COW to prevent corruption? How do we deal with long-running operations, such as blocking IO and calls to long-running library functions?

For Chandon's project to be a success I think we need just two things: a robust green threads system, and a prototype OS threads system with a scheduler that can intelligently assign green threads to OS threads. Once we have that in place, along with some rules or algorithms against sharing writable data until we sort out the details of how to do that safely, we can add new mechanisms piecewise to enable new features and functionality. If he has time he can certainly do more, but if we can get the basics in place we will be in a much better state than we were before he started.