Blog Closed

This blog has moved to Github. This page will not be updated and is not open for comments. Please go to the new site for updated content.

Monday, May 4, 2009

AIO On Linux

Yesterday I talked about the AIO situation on Windows. Today, as well as I am able, I am going to talk about the current situation on Linux. I've done asynchronous IO programming before on Windows, so that was an easy start for me. I've never used the Linux equivalents before, so this post is going to be a little bit more shaky.

First, let me start by saying that my initial findings on Linux AIO looked pretty bleak. I've seen pages that list long catalogs of possible routes, each with unresolved problems. I've seen project pages that look like they haven't been updated since 2007 or earlier. I've seen quotes like "This is stupid" or "That is crap". However, on closer inspection it's looking like there are some real possibilities for good AIO as of kernel 2.6. Specifically, the POSIX AIO API looks like it has a lot of promise for us. Unfortunately, a Google search for "Asynchronous IO on Linux" turns up mostly negative information.

I have found a good documentation page that does a decent job of covering the various structures and functions involved in the POSIX API. However, I immediately notice a problem: things like console IO cannot be accessed asynchronously. That's an issue we can work around, however. I also notice that this POSIX API appears to use the one thing that I didn't like: separate threads running blocking IO primitives in user space. So that's a negative, but not a deal-breaker, since the other options are worse. It's my understanding that the POSIX API manages the synchronization issues itself, so maybe it's not such a bad thing.

In the last post I talked about adding an IO queue structure to the concurrency scheduler. Whenever we have a message, we schedule a handler task to process it. In Linux we don't have anything like an IO Completion Port to manage a long list of requests for us (that I know of), but we can manually maintain a list of struct aiocb structures and poll them for changes.

My interest has also been piqued by the aio_suspend function to help in implementing blocking IO operations internally. I also keep looking into the lio_listio function as a great way to launch a number of simultaneous requests, but I don't know how useful it will be in practice. We'll see how things fall into place once work begins on the actual implementation. One thing is for sure though: these POSIX functions don't appear to be available prior to the 2.6 Linux kernel. So we're definitely going to need good configure probes, and we're going to need a good Plan B so we can simulate AIO on systems that don't support it directly.

Next up in my little series about AIO in Parrot will be a discussion about how we're going to be writing an AIO implementation that will be sane across Windows and Linux, and will properly integrate with the rest of Parrot. If you have any thoughts about this issue, or if you know anything about AIO on Darwin, please let me know. There are a lot of things that I don't know about these topics, so any input is much appreciated.


  1. 1. You seem to be putting AIO in one bag with non-blocking IO.

POSIX AIO means that as soon as you call aio_write(), the request is queued but the call returns _before_ the data you gave it is copied to kernel space/elsewhere. Only after you get confirmation of it being complete can you touch the data buffer again.

Non-blocking IO means that the call to write() waits until the data is copied to kernel space. Immediately after write() returns, you may freely trash the buffer. The data is already in kernel space and from there it may be written to a file, socket, serial port, whatever.

    So, with AIO there is theoretically less waiting than with non-blocking IO.

    2. IIRC, AIO on Linux doesn't support certain types of fds. For example, it does not work on serial ports. It requires extra kernel-side code to work, so IMO it's not a good solution for a general framework at the moment.

    3. If you use callbacks, you still need a synchronisation point. This usually means calling callbacks_update() which checks the queued callbacks and calls them. It's not much different from polling in a loop.

    So in general, the situation is complex. If you ask me for advice, I'd say go for a queue and implement it using epoll on Linux.

    Hope this helps :-)

  2. Thanks for the comment!

    1) I'm not trying to get the two mixed up, but I am talking about two separate contexts that I might not be too clear about. My goal here is to implement Parrot's blocking IO operations in terms of C non-blocking operations.

2) You are right that this raises issues. We can't asynchronously access the console, the serial port, etc. At the very least that's going to make it harder for us to come up with an elegant unified IO system, but by no means impossible. Especially not if we sacrifice a little elegance.

    3) I did forget to mention the necessity of synchronization points, thanks for reminding me of that. Parrot does have a concurrency scheduler that could call callbacks_update() regularly, so it doesn't really affect what we need to do from user-facing PIR code.

I do appreciate any advice! It's better to hear it now than after I've already started writing code for all this. I don't think we are going to use a polled solution, just because that doesn't jibe well with the currently spec'd interface. It's not impossible to combine the two, but again it's going to be a pain in the ass to make it work correctly.

  3. Sorry if I wasn't clear on this point, but in (2) I meant that the POSIX AIO interface (aio_write) doesn't support all types of file descriptors on Linux, but non-blocking IO (write, epoll) works pretty much on everything. So I think it would be a bad idea to use POSIX AIO in this case, but anything else should be OK. Non-blocking IO works on consoles and serial ports too.

    Of the methods described on the page you link to, I would rule out POSIX AIO for reasons described above and signals (signals are evil because the application might want install its own handler, overriding yours). This leaves non-blocking IO with epoll/poll or threading as acceptable implementation options (though threading with too many fds can use up memory because each thread needs its own stack).

    I've had some experience with non-blocking IO using edge-triggered epoll in a real-time thread on Linux. I don't know much about Parrot's requirements here, but I'll keep reading and if anything comes up then I'll post comments.

    Good luck with the research and implementation :-)

  4. Thanks again for the comments! I did understand what you were talking about in #2, but I guess I wasn't clear that I did understand it. Note to self: more clarity!

    Signals are out, for the reasons that you mentioned and so many more. I'm also trying to stay away from managing my own IO thread pool because it's going to be a huge synchronization and management hassle.

I had been leaning towards POSIX AIO because it seems like the most complete implementation, but you are convincing me that this is probably not the best choice. One upside of using epoll is that it behaves similarly to Windows IO Completion Ports, so we can use these mechanisms to get similar behavior across platforms.

So that may be the route we take going forward: use epoll on Linux systems (IO Completion Ports on Windows), poll its status internally, and launch callbacks directly from our scheduler.

    Thanks for all your input!

You could look at libevent; it implements asynchronous event notification on various UNIX systems and Windows with a unified API (on Windows it uses event ports, if I understand correctly). Though it has some limitations in multithreaded applications.

    Best regards

Raphael Descamps, May 6, 2009 at 10:35 AM

some more hopefully-useful links...

    Wish you good luck :)

