PDD22 contains some information about AIO, although it's all speculative. I've been poring over this document for a few days, and I think I'm really understanding a lot of its nuances and some of the methodology that the original designers had in mind. I think it's a decent approach, and I think we will be able to follow it reasonably closely without too many issues popping up. I showed some speculative PIR examples in a previous blog post, but I'm starting to think now that I will follow the PDD more and pursue that idea less.
The PDD has this to say about the relationship between synchronous and asynchronous opcodes:
Parrot only implements synchronous I/O operations. Initially, the asynchronous operations will be implemented separately from the synchronous ones. There may be an implementation that uses one variant to implement the other someday, but it's not an immediate priority.
And it says this about how the interface to PIR will look:
Synchronous opcodes are differentiated from asynchronous opcodes by the presence of a callback argument in the asynchronous calls. Asynchronous calls that don't supply callbacks (perhaps if the user wants to manually check later if the operation succeeded) are enough of a fringe case that they don't need opcodes.
The main point to take away from this is that asynchronous IO requests are not implemented as a PMC (at least not as one that the user needs to be aware of), and they are not handled using Parrot's existing concurrency opcodes like schedule. Here's a PIR example for something that, I think, will perform an asynchronous file write according to the specification in the PDD:
$P0 = open "/path/to/file.txt", "w"
$P1 = find_global("io_callback_sub")
$P2 = print $P0, "hello world", $P1
In the example above, $P0 is the FileHandle PMC that we are writing to, $P1 is the callback subroutine, and $P2 is the IO request status object that keeps track of the status of the request. The PDD says that there should be a division between the synchronous and asynchronous opcodes. However, on Windows platforms at least, the file handle will need to be opened in asynchronous mode before any asynchronous operations can be performed on it. I don't see any real way to avoid immediate unification in this case, unless:
- The print opcode closes the filehandle and reopens it in asynchronous mode (bad)
- We add an additional specifier to the open opcode that specifies that the file handle should be opened in asynchronous mode ("wn" would be "write, non-blocking", for instance).
- We always open file handles in asynchronous mode and just implement all the blocking opcodes in terms of underlying asynchronous operations.
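To illustrate the second option, here's a hypothetical variant of the earlier example. The "wn" mode string and its interpretation are my own invention for this sketch, not something the PDD specifies:

$P0 = open "/path/to/file.txt", "wn"    # hypothetical: "write, non-blocking"
$P1 = find_global("io_callback_sub")
$P2 = print $P0, "hello world", $P1     # asynchronous write on the async handle

The appeal of this option is that the open call carries all the mode information up front, so the print opcode never has to reopen or reconfigure the handle behind the user's back.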
Here are some notes that I'm following in general (I will discuss some caveats below):
- All streams are opened in asynchronous mode by default, except for streams that don't support it.
- Asynchronous opcodes will accept as a callback either a Sub, a Continuation, or PMCNULL. In the case of a Sub, the opcode will perform asynchronously and call the sub in a separate thread when an event occurs. If it's a Continuation, the operation will block until completed and then resume execution at the Continuation. If PMCNULL, it will launch the asynchronous request and ignore any results.
- We need to create a new PMC for the AIO status object. I'm thinking we call it "IORequest". The IORequest object will have interfaces to check for the current status (in progress, complete, error) and the result (number of bytes read/written on success). I am not sure how we will handle errors; there are a few options for this that I won't talk about here.
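Putting those notes together, here's a sketch of how the three callback variants might look in PIR. The behavior described in the comments is my reading of the notes above, not settled design:

# Sub callback: the write proceeds asynchronously and the sub
# is invoked in a separate thread when an event occurs.
$P1 = find_global("io_callback_sub")
$P2 = print $P0, "hello world", $P1

# Continuation callback: the write blocks until completed and
# then resumes execution at the continuation's address.
$P3 = new 'Continuation'
set_addr $P3, after_write
$P4 = print $P0, "hello world", $P3
after_write:

# PMCNULL callback: launch the request and ignore any results.
null $P5
$P6 = print $P0, "hello world", $P5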
An alternative idea, which I personally like less but which might cause fewer headaches, would be to create the IORequest object as a simple flag accessor. The scheduler would need to poll the pending requests regularly (using epoll on Linux and an I/O Completion Port on Windows), and when it finds one that needs handling, it would update the flag in the IORequest object and schedule the callback for us (if one is provided). The difference in this approach, of course, is who is checking the status and who is scheduling the callback (kernel vs. Parrot). I feel like this way is going to have a lot of performance drawbacks, but then again the biggest performance drawback in any IO system isn't going to be the callback scheduler anyway. The differences might be negligible.
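Under either approach, user code that wants to check on a request manually might look something like this. The IORequest method names here are hypothetical placeholders, since the PMC doesn't exist yet:

$P2 = print $P0, "hello world", $P1
poll_loop:
$I0 = $P2.'is_complete'()     # hypothetical status accessor
unless $I0 goto poll_loop     # in practice the scheduler would do this polling
$I1 = $P2.'get_result'()      # hypothetical: bytes written on success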
So we don't even know yet how we're going to do simple read/write operations, but we might be able to nail down the specifics of a few other tasks. A listener object or a Select/Poll object, for instance, might be registered with the scheduler to repeatedly check its status and call callbacks when an incoming event occurs. This would be useful in a server-side network app, where we could listen on an array of sockets for incoming data passively and let the scheduler take care of checking flags and scheduling callbacks.
Here's a straightforward example of code to create a passive listener (which some Unix/Perl folk would probably prefer we called "Select") PMC that calls a callback when an incoming event occurs:
$P0 = new 'Socket'
# ... Connect the socket here
$P1 = new 'IOListener'
push $P1, $P0
So at the end of that snippet, we have a listener object registered with the scheduler. The scheduler will poll the IOListener, which in turn will poll the Socket and all other PMCs that it contains, and for every event that it finds, it will add the corresponding callback to the scheduler for execution.
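Continuing from that snippet, the callback wiring might look like this. The 'set_callback' and 'register' methods on IOListener are hypothetical, since that PMC doesn't exist yet either:

$P2 = find_global("on_incoming_data")
$P1.'set_callback'($P2)    # hypothetical: callback to run for each event
$P1.'register'()           # hypothetical: hand the listener to the scheduler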
So the PDD definitely offers a nice base to start working on, but there are a ton of questions to be answered about some of the implementation specifics. We'll find out what works and what doesn't as we start breaking ground and writing code. I'm sure I'll be reporting on these issues as they arise.