First, a polled write:
.local pmc request
request = new 'AIOWriteRequest'
say request, "Hello World!"
poll_loop_top:
unless request goto poll_loop_top
say "Done!";
This would, coincidentally, be a way to implement a blocking call to the "say" opcode in terms of an asynchronous variant. Of course, we would probably write it in C, but the steps would be the same. Now let's look at the same kind of idea, but using a callback to restore control flow instead of a polling loop to stall:
.local pmc request
request = new 'AIOWriteRequest'
.local pmc continue
continue = new 'Continuation'
set_addr continue, resume_point
request.set_callback(continue)
say request, "Hello World Again!"
end
resume_point:
say "Done!"
A lot of this is speculative; I don't really know the best way to halt the current execution thread while still allowing a scheduled task to continue execution later. The "end" opcode is probably not a good fit for this use, but I don't think any others exist. In either case, we probably want some internal guarantee that all outstanding IO requests (besides a passive "listen") will be handled prior to interpreter termination, so calling "end" with an outstanding IO request could cause execution to resume as we expect. Again, speculative. Regardless of the specific details, this method performs the same blocking "say" call, but has the added benefit that it isn't looping endlessly and eating resources the entire time. The currently running thread simply stops, and Parrot is free to use that time to do other work (such as handling the IO call).
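To make the blocking-wrapper idea a little more concrete, here's a rough sketch that reuses only the hypothetical 'AIOWriteRequest' PMC from the first example, and assumes, as above, that the request PMC evaluates true once it has completed. It simply wraps the polled version up into a reusable sub:
.sub 'blocking_say'
.param string message
.local pmc request
request = new 'AIOWriteRequest' # hypothetical request PMC from the first example
say request, message # dispatch the asynchronous write
wait_loop:
unless request goto wait_loop # poll until the request reports completion
.end
It's the same polling loop as before, just packaged so a caller could write 'blocking_say'("Hello World!") and get ordinary blocking behavior.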
I'm also ignoring some details about how buffering would work. For instance, if we were writing lots of little snippets to a file in a loop, would we buffer multiple snippets together into a single request, or would we dispatch each one separately? Would we write:
loop_top:
$S0 = 'get_next_snippet'()
print requestobj, $S0 # Add $S0 to the buffer
unless ready_to_exit goto loop_top
schedule requestobj # Start the write operation with all snippets
Or maybe:
loop_top:
$S0 = 'get_next_snippet'()
print requestobj, $S0 # Set the payload
schedule requestobj # Schedule it, one snippet at a time
unless ready_to_exit goto loop_top
Obviously there are lots of questions to answer as we talk about implementing this system. One thing you will notice in the two previous examples, but did not see in the first two, is the use of the "schedule" opcode to actually dispatch the requests. For something like an asynchronous write operation, would the "say"/"print" opcodes actually schedule the IO, or would they just write data to the request buffer and leave the "schedule" opcode to dispatch it? Lots of questions to answer.
Let's look now at a passive listening PMC, such as a socket connection that would receive incoming data, or a PMC that listens to the OS to receive filesystem events:
.local pmc listener
listener = new 'AIOListen'
.local pmc callback
callback = find_global 'callback_sub'
listener.set_callback(callback)
listener.listen() # Could also be "schedule listener"
...
Every time the listener PMC received input, it would pass it along to the callback function:
.sub 'callback_sub'
.param pmc aiolisten # The AIOListen object
.param string data # The incoming data
...
.end
In all of these examples I've obviously left out some details. For instance, I didn't specify a filename or stream name on any of the objects, nor anything like a timeout, and I certainly didn't pin down how buffering would work. Those details are necessary, but not important for these examples.
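Purely as an illustration of the kind of configuration being glossed over here, setting up a request might eventually involve something like the following (every method name in this snippet is invented for the sake of example, not a proposed API):
.local pmc request
request = new 'AIOWriteRequest'
request.set_destination("out.txt") # hypothetical: where the data should be written
request.set_timeout(30) # hypothetical: give up after 30 seconds
request.set_callback(callback) # as in the earlier examples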
Here's another idea: What if, instead of PIR-exposed AIO PMCs, we had opcodes that managed these requests automatically?
writerequest filehandle, "Hello World!", callback1
readrequest filehandle, 20, callback2
And maybe an optional fourth argument would specify whether the currently executing thread should suspend, terminate, or continue while the request is executing (which would enable us to do blocking IO very simply).
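As a sketch of what that fourth argument might look like (the flag values here are invented purely for illustration; nothing about this is designed yet):
writerequest filehandle, "Hello World!", callback1, 'suspend' # hypothetical: suspend this thread until the write completes
readrequest filehandle, 20, callback2, 'continue' # hypothetical: keep running while the read proceeds in the background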
So these are some conceptual ideas about how AIO could be used from PIR code. I'd be very interested to hear ideas other people have about how it could be used. What I would like to see is example code that people think should work: show me what you think these code examples should look like. There isn't anything firm on the design board yet, although I have a few more blog posts left to write about this topic.