
Tuesday, May 5, 2009

Path to AIO on Parrot

In my last few blog posts about Asynchronous IO (AIO) on Parrot, I've looked at some of the issues involved, including how things work on Windows and Linux and how they could start looking on Parrot. The series started with this post and ends with this one. Notice that I haven't provided all the answers here, so I'm still hoping to get some feedback from readers. I'm also pretty well convinced that we will learn some lessons quickly as work progresses. The biggest hassle is going to be creating a system which is semantically transparent across Windows and Linux, so that PIR users don't need to care about the differences.

PDD22 contains some information about AIO, although it's all speculative. I've been poring over this document for a few days and I think I'm really understanding a lot of its nuances and some of the methodology that the original designers had in mind. I think it's a decent approach, and I think we will be able to follow it reasonably closely without too many issues popping up. I showed some speculative PIR examples in a previous blog post, but I'm starting to think now that I will follow the PDD more and pursue that idea less.

The PDD has this to say about the relationship between synchronous and asynchronous opcodes:
Parrot only implements synchronous I/O operations. Initially, the asynchronous operations will be implemented separately from the synchronous ones. There may be an implementation that uses one variant to implement the other someday, but it's not an immediate priority.
and it says this about how the interface to PIR will look:
Synchronous opcodes are differentiated from asynchronous opcodes by the presence of a callback argument in the asynchronous calls. Asynchronous calls that don't supply callbacks (perhaps if the user wants to manually check later if the operation succeeded) are enough of a fringe case that they don't need opcodes.
The main point to take away from this is that asynchronous IO requests are not implemented as a PMC (at least not as one that the user needs to be aware of) and they are not handled using Parrot's existing concurrency opcodes like schedule. Here's a PIR example for something that, I think, will perform an asynchronous file write according to the specification in the PDD:

$P0 = open "/path/to/file.txt", "w"       # FileHandle PMC
$P1 = find_global "io_callback_sub"       # the callback Sub
$P2 = print $P0, "hello world", $P1       # speculative async print; returns a status object

In the example above, $P0 is the FileHandle PMC that we are writing to, $P1 is the callback subroutine, and $P2 is the IO request status object that keeps track of the request's progress. The PDD says that there should be a division between the synchronous and asynchronous opcodes. However, on Windows at least, the file handle will need to be opened in asynchronous mode before any asynchronous operations can be performed on it. I don't see any real way to avoid unifying the two immediately in this case, unless:
  1. The print opcode closes the filehandle and reopens it in asynchronous mode (bad)
  2. We add an additional specifier to the open opcode that specifies that the file handle should be opened in asynchronous mode ("wn" would be "write, non-blocking", for instance).
  3. We always open file handles in asynchronous mode and just implement all the blocking opcodes in terms of underlying asynchronous operations.
I like idea #3 the best personally, but I imagine there will be significant support for #2 as well. I'm mostly worried about cases where a non-asynchronous filehandle object gets passed to an asynchronous opcode, or any of the other related combinations that can and will happen. (The sketch below shows why Windows forces this decision at open time.)
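To make the Windows constraint concrete, here is a rough C-level sketch of the platform APIs involved. This isn't Parrot code, just an illustration: on Windows, overlapped (asynchronous) operations only work on handles that were created with FILE_FLAG_OVERLAPPED, whereas on POSIX systems non-blocking mode can be toggled on an already-open descriptor, and POSIX AIO doesn't require any special open flag at all.

#ifdef _WIN32
#include <windows.h>

/* On Windows the asynchronous capability is baked in at open time:
 * without FILE_FLAG_OVERLAPPED, later overlapped WriteFile calls on
 * this handle won't behave asynchronously. */
HANDLE open_for_async_write(const char *path)
{
    return CreateFileA(path, GENERIC_WRITE, 0, NULL,
                       CREATE_ALWAYS, FILE_FLAG_OVERLAPPED, NULL);
}
#else
#include <fcntl.h>

/* On POSIX a descriptor opened normally can be switched to
 * non-blocking mode after the fact, so the open and print opcodes
 * don't have to agree in advance. */
int make_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}
#endif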

Here are some notes that I'm following in general (I will discuss some caveats below):
  1. All streams are opened in asynchronous mode by default, except for streams that don't support it.
  2. Asynchronous opcodes will accept as a callback either a Sub, a Continuation, or PMCNULL. In the case of a Sub, the opcode will perform asynchronously and call the sub in a separate thread when an event occurs. If it's a Continuation, the operation will block until completed and then resume execution at the Continuation. If PMCNULL, it will launch the asynchronous request and ignore any results.
  3. We need to create a new PMC for the AIO status object. I'm thinking we call it "IORequest". The IORequest object will have interfaces to check the current status (in progress, complete, error) and the result (number of bytes read/written on success). I am not sure how we will handle errors; there are a few options for this that I won't discuss here. A sketch of how these pieces could map onto an existing OS facility follows this list.
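To picture notes #2 and #3 in concrete terms, here's what they might map to on Linux using POSIX AIO, purely as an illustration. The choice of POSIX AIO here is my assumption, not something the PDD prescribes: aio_write with a SIGEV_THREAD notification runs a completion function on a separate thread, and the aiocb structure plays roughly the role the IORequest PMC would play, with aio_error() answering "in progress / complete / error" and aio_return() giving the byte count.

#include <aio.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static struct aiocb req;   /* plays the role of the proposed IORequest PMC */

/* Runs on a separate thread when the write completes -- roughly what
 * passing a Sub as the callback argument would mean. */
static void on_complete(union sigval sv)
{
    (void)sv;
    if (aio_error(&req) == 0)
        printf("wrote %zd bytes\n", aio_return(&req));
    else
        fprintf(stderr, "asynchronous write failed (%d)\n", aio_error(&req));
}

static int start_async_write(int fd, char *buf, size_t len)
{
    memset(&req, 0, sizeof req);
    req.aio_fildes = fd;
    req.aio_buf    = buf;
    req.aio_nbytes = len;
    req.aio_sigevent.sigev_notify          = SIGEV_THREAD;
    req.aio_sigevent.sigev_notify_function = on_complete;
    return aio_write(&req);   /* returns at once; poll status with aio_error() */
}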
The problem I can see with this approach is that nowhere in it do we interact with the concurrency scheduler at all, unless the C-level callback function of the request schedules the PIR-level callback Task instead of executing it directly, or unless we're on a system that doesn't support AIO directly and we need to fudge it. Relying on direct callbacks is definitely better performance-wise than polling a result flag internally. However, I've now been told by a handful of people that the better way to go, on Linux at least, is probably a poll loop using epoll.
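For reference, the epoll-based poll loop those folks are suggesting would look roughly like this. It's a minimal sketch: the io_request struct, and the idea that it remembers its own callback, are assumptions of mine, not anything that exists in Parrot today.

#include <sys/epoll.h>

/* Hypothetical record for a pending request; in Parrot terms this is
 * roughly what an IORequest PMC would carry around. */
typedef struct io_request {
    int  fd;
    void (*callback)(struct io_request *);
} io_request;

void poll_loop(io_request **reqs, int nreqs)
{
    int epfd = epoll_create1(0);
    for (int i = 0; i < nreqs; i++) {
        struct epoll_event ev = { .events = EPOLLIN | EPOLLOUT,
                                  .data.ptr = reqs[i] };
        epoll_ctl(epfd, EPOLL_CTL_ADD, reqs[i]->fd, &ev);
    }

    struct epoll_event ready[16];
    for (;;) {
        int n = epoll_wait(epfd, ready, 16, -1);   /* block until something is ready */
        for (int i = 0; i < n; i++) {
            io_request *r = ready[i].data.ptr;
            r->callback(r);   /* or: hand the callback to the scheduler as a Task */
        }
    }
}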

An alternative idea, which I personally like less but which might cause fewer headaches, would be to make the IORequest object a simple flag accessor. The scheduler would need to poll the pending requests regularly (using epoll on Linux and an IO Completion Port on Windows), and when it finds one that needs handling, it would update the flag in the IORequest object and schedule the callback for us (if one is provided). The difference between the two approaches, of course, is who checks the status and who schedules the callback (the kernel vs. Parrot). I feel like this way is going to have some performance drawbacks, but then again, the biggest bottleneck in any IO system isn't going to be the callback scheduler anyway. The differences might be negligible.
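For the Windows half of that scheme, the scheduler's polling pass could be built on an I/O completion port, along these lines. Again, this is just a sketch; io_request and its fields are hypothetical stand-ins for whatever Parrot ends up using.

#include <windows.h>

/* Hypothetical pending-request record, analogous to the IORequest PMC. */
typedef struct io_request {
    OVERLAPPED overlapped;          /* first member, so we can recover the struct */
    void (*callback)(struct io_request *, DWORD bytes);
    int  complete;                  /* the "flag" the PMC would expose */
} io_request;

/* One pass of the scheduler's poll: drain whatever has completed,
 * flip the flag, and queue the PIR-level callback.  A zero timeout
 * means the scheduler never blocks here.  A fuller version would
 * also handle failed completions, where the call returns FALSE but
 * still fills in the OVERLAPPED pointer. */
void poll_completions(HANDLE iocp)
{
    DWORD       bytes;
    ULONG_PTR   key;
    OVERLAPPED *ov;

    while (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, 0)) {
        io_request *req = (io_request *)ov;   /* OVERLAPPED is the first member */
        req->complete = 1;
        if (req->callback)
            req->callback(req, bytes);        /* or schedule it as a Task */
    }
}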

So we don't even know how we're going to do simple read/write operations yet, but we might be able to nail down the specifics of a few other tasks. A listener object or a Select/Poll object, for instance, might be registered with the scheduler, which would repeatedly check its status and call callbacks when incoming events occur. This would be useful in a server-side network app, where we could listen passively on an array of sockets for incoming data and let the scheduler take care of checking flags and scheduling callbacks.

Here's a straightforward example of code to create a passive listener PMC (which some Unix/Perl folks would probably prefer we called "Select") that calls a callback when an incoming event occurs:

$P0 = new 'Socket'
# ... connect the socket here
$P1 = new 'IOListener'    # speculative PMC; watches the handles pushed into it
push $P1, $P0
schedule $P1              # hand the listener to the concurrency scheduler

So at the end of that snippet, we have a listener object registered with the scheduler. The scheduler will poll the IOListener, which in turn will poll the Socket and any other PMCs it contains, and for every event it finds it will add the corresponding callback to the scheduler for execution.
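Under the hood, the IOListener's check on each scheduler tick could be little more than a poll() pass over its registered handles, something like the sketch below. The listener_entry type and its callback field are made-up names for illustration only.

#include <poll.h>

/* Hypothetical entry for each PMC pushed into the IOListener. */
typedef struct listener_entry {
    int  fd;
    void (*callback)(struct listener_entry *);
} listener_entry;

/* One scheduler tick: see which sockets have data waiting and queue
 * the callback for each.  A zero timeout means the scheduler never
 * blocks here. */
void listener_check(listener_entry *entries, struct pollfd *fds, int n)
{
    for (int i = 0; i < n; i++) {
        fds[i].fd     = entries[i].fd;
        fds[i].events = POLLIN;
    }

    if (poll(fds, n, 0) <= 0)
        return;

    for (int i = 0; i < n; i++)
        if (fds[i].revents & POLLIN)
            entries[i].callback(&entries[i]);   /* or schedule as a Task */
}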

So the PDD definitely offers a nice base to start working from, but there are a ton of questions to be answered about the implementation specifics. We'll find out what works and what doesn't as we start breaking ground and writing code. I'm sure I'll be reporting on these issues as they arise.
