Blog Closed

This blog has moved to Github. This page will not be updated and is not open for comments. Please go to the new site for updated content.

Thursday, January 31, 2008

Displaying graphs

My Expert System shell produces graph objects using the Graph::Easy module. I could probably switch over to using the regular flavor of Graph, but for now I'm not. Graph::Easy can output to GraphViz, SVG, and HTML formats. What I want is the ability to display one of these graphs in my Tk program, without having to open an external file viewer.

There are two possible paths to take: I can find a way to convert the SVG output to an image format (JPEG or PNG should be fine) and then display the image file, or I can find a widget that can natively display SVG images.

One requirement that I have (it's a firm requirement, although not an absolute one) is that I would like to try to find a pure-perl solution, or at least a solution that doesn't require me to compile any XS code. There are many reasons for this, and I won't take the time to explain them all here. Eventually I'm going to switch to Strawberry Perl, and will probably lift this restriction.

Tk::Zinc and SVG::SVG2Zinc seemed like a good combination, but ActiveState doesn't offer either in its download repository. It also doesn't offer Tk::HyperText (which should be able to render the HTML output versions of my graphs), or Tk::GraphViz. This would seem to indicate that I can't find an (easy) solution involving just Tk widgets.

I also haven't been able to find a satisfactory solution for converting GraphViz, SVG, or HTML into an image file. Of course, maybe I haven't been searching hard enough, or looking for the right keywords. Many of the modules I do find for this purpose are based on compiled libraries such as GD, librsvg, Cairo, etc. SVG::Convert, which seems to have a PNG output mode, isn't downloadable from ActiveState. SVG::GD is available for download, but relies on GD, which is not. GD::SVG isn't available either, but it doesn't look like that library supports what I want to do anyway.

I'll keep looking; this is only the first day of my search, and I'm not putting a lot of effort into it yet.

Tuesday, January 29, 2008

POE Edit counter

I threw together a quick edit counter for Wikibooks and Wikiversity today, using Perl/POE/Tk. It was actually surprisingly easy, once I figured out what I needed to do. Of some note is the fact that much of the POE documentation is terribly inaccurate, and it took me some time to find a working example to emulate. Of course, I shouldn't complain, since I'm not going to offer to update it myself.

I primarily wanted to count the relative edit frequency of Wikibooks and Wikiversity. I had a suspicion, based only on a visual inspection of the RC feeds, that Wikibooks was attracting approximately three times as many edits as Wikiversity. I ran the feed for a few minutes, and sure enough, my prediction seemed to be about true:
  • WB: 16 edits
  • WV: 5 edits
I only ran this for a brief period of time, certainly not enough to "prove" anything about the numerical relationship on average. These counts are for all RC entries, including log entries, page moves, edits, rollbacks, etc. Anything that appears in the IRC channel gets counted. It would be trivial to perform some kind of filtering on these results, and I may do that eventually.
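The counting core is simple enough to sketch in a few lines of plain Perl (the POE/IRC plumbing is omitted, and the channel names shown are my assumption of the irc.wikimedia.org naming):

```perl
# Tally RC entries per wiki. Each IRC line that arrives on a channel
# counts as one entry, with no filtering by entry type.
my %count;
my %label = (
    '#en.wikibooks'   => 'WB',
    '#en.wikiversity' => 'WV',
);

sub saw_rc_entry {
    my ($channel) = @_;
    $count{ $label{$channel} // $channel }++;
}
```

In the real program this would be called from the POE IRC message handler, with a Tk label bound to %count updating as entries arrive.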

My curiosity quickly got the better of me, and I extended the code to count Wikipedia edits as well. After a few minutes of writing (just enough really to type this blog post), here are the counts:
  • WB: 19
  • WV: 5
  • WP: 4056
Going by these numbers, it appears that en.Wikipedia is about 200 times more active than WB, and about 800 times more active than WV. Of course, this is just a very short sample; I intend to run the counter for a long while to try and get a better picture of average edit counts.

I like this combination of Perl/POE/Tk, and I can already think of a few things I would like to do with it (assuming I can find the accurate documentation I need to make it happen).

Monday, January 28, 2008

Text Editor

For the longest time, and I can't even justify this, I was using regular Windows Notepad for my coding work. I had Bloodshed's Dev-C++ for my C and C++ work, and I liked it pretty well. It was certainly more affordable than Visual Studio, and to this day I recommend it to my students who are looking for a good and free C or C++ IDE.

But beyond the world of C programming I was using Notepad for all my needs, although for a time my needs were small indeed. As time went on, I found myself in need of more and more features from my text editor, not the least of which was syntax highlighting. Brace matching is nice too. Of course, since I'm programming in so many different languages (C and Perl most often, but also MATLAB, some Python as I learn it, and a few other scripts here and there), I needed an editor that was more flexible. Some people will, inevitably, stop here to ask "why don't you just use VIM, or EMACS, or editor X?" The answer is that in the past I've never found extensive keyboard bindings to be particularly useful to me, nor particularly fast and efficient. Plus, I've spent a lot of time using Notepad, and I really need something that's going to be a plain text editor that works the way I expect it to work. A few visual features are always nice, of course, but in the end I want it to do what I want.

I picked up Notepad2 because it promised to be a drop-in replacement for Notepad: all I needed to do was switch around a few binaries, and it would be as if this new editor had always been there. I liked the editor a lot: it was simple and efficient, and met my needs almost completely. However, adding a new syntax to the editor required modifying the C source code and recompiling the executable. I am looking at getting into some Parrot development (I'm especially looking to implement an Octave dialect to run on Parrot), so I needed an editor that could support PIR and PASM as well. But since both of these "languages" are still in development and subject to change, I didn't want to go through the hassle of recompiling the executable every time a change needed to be made.

A few of my students had clued me in to another notepad-like text editor, Notepad++. Notepad++ is similar in many respects to Notepad2 (they both use the Scintilla editor component), although Notepad++ has more features. Among these are the ability to add new syntaxes without needing to recompile, a tabbed interface, a multiple-document view, and a slew of add-ons and extensions.

I'm slowly migrating to Notepad++, but I'm currently in the strange position of having Notepad, Notepad2, and Notepad++ all installed, with different files and actions appearing to open different editors. It's just another project for me to work on eventually.

Thursday, January 24, 2008

Morning So Far

I got an email this morning from a person at GE who saw my resume online and thought I might be a fit for an "Embedded Control Engineer" position. I'm not entirely sure which resume she saw, or how old it was, so I updated mine and sent a new copy back. I spent some time working on my cover letter too, since it was completely non-existent and needed to be upgraded to "existing".

I rewrote a big portion of my thesis code this morning too. I combined two objects that were essentially the same thing. One object acted as an interface to a model file, and the other acted as a cache for data in that model (this is a simplification, but it serves for these purposes). By combining them, I was able to actually reduce the amount of code by putting all my update routines into loops. With this new class defined, I need to create all the necessary interface routines (which can be tedious in MATLAB), and update my other classes which rely on this class.

Combining the two classes reduces the amount of data that I need to duplicate, simplifies many of my sanity checks, and is going to provide a much simpler and faster interface to the rest of my design. With this complete, there are basically 3 areas that I need to finish:
  1. Component Libraries. These are all drag-and-drop designs in Simulink. I'm not optimizing them, so this should be quick and straightforward. Estimate 2-3 days of hard work for this.
  2. Library Interfaces. Code to traverse the various library components and select one based on certain criteria. Should be simple enough to implement. Estimate 1 day for this.
  3. User Interface. I actually have two user interfaces, but through some earlier work, the one is already completed. The second interface is designed, but the buttons and toggles are not all properly connected to the necessary logic. Estimate 2-3 days for this, depending on how nit-picky I get, and how much sanity checking I perform.
Taking the maximum estimates, doubling them arbitrarily to account for the inevitable problems and debugging work, and subtracting weekends, I am setting a soft deadline for code-completion of February 13th.
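As a sanity check of that arithmetic: the largest estimates above total 7 working days, doubled gives 14, and counting 14 working days forward from Thursday, January 24th does land on February 13th. A small sketch in core Perl:

```perl
use Time::Local 'timegm';

# Count $days working days forward from $epoch, skipping weekends.
sub add_business_days {
    my ($epoch, $days) = @_;
    while ($days > 0) {
        $epoch += 24 * 60 * 60;
        my $dow = (gmtime $epoch)[6];    # 0 = Sunday, 6 = Saturday
        $days-- unless $dow == 0 || $dow == 6;
    }
    return $epoch;
}

my $start = timegm(0, 0, 12, 24, 0, 2008);   # noon, Jan 24 2008
my ($day, $month) = (gmtime add_business_days($start, 14))[3, 4];
printf "%d/%d\n", $month + 1, $day;          # 2/13
```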

I also finished my Parse::RecDescent parser for the expert system this morning. Only took me a few minutes once I realized what the problems were:
  1. I added a directive to one of my productions, which caused my @item field indices to be off by 1. The directive shows up in @item as the value "1", so when I accidentally took its value in my production, I was using the string "1" as a hash reference. Since I had literal "1" strings in my input file, I assumed that I was not properly returning values, and spent a lot of time double-checking my various data structures and references.
  2. In one of my productions, I was passing a hash reference where I should have been passing a hash. Later, when I took a reference to this value, I was dereferencing a reference to a hash reference, which was giving me "Not a HASH reference" errors that I couldn't figure out.
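The second bug is easy to reproduce in plain Perl, independent of Parse::RecDescent: take a reference to something that is already a hash reference and you get a REF, not a HASH, so a single dereference fails:

```perl
my %data = ( answer => 42 );
my $href = \%data;    # HASH reference: $href->{answer} works
my $oops = \$href;    # reference to a reference: ref() says 'REF'

# $oops->{answer} would die with "Not a HASH reference";
# you have to strip one level of indirection first:
my $ok = $$oops->{answer};   # 42
```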
I committed my changes, and am going to prepare a new release probably sometime next week (it shouldn't take me too long).

Wednesday, January 23, 2008

Long Day

I read this blog post about phone interviews for software engineers today. A lot of it sounds like the phone interview I had with Google (I still can't believe what I said "fork" did). I feel pretty good about the material on this page, but there are two notable exceptions: First is some of the OO terminology that I'm fuzzy on, notably "virtual methods". Second, under the data structures section I can't quite recall what's so special about a "Graph", or how you deal with one practically. These are small issues in the grand scheme of things, but I need to hit some books now to satisfy my own curiosity.

In my own defense about the OO stuff, however, I will say that most of my recent programming efforts have been in MATLAB and Perl, neither of which has an object system that I would call "state of the art" (and MATLAB's objects are so broken and idiosyncratic that they hardly count as being OO). I know Perl 5 has Moose and other object frameworks that try to emulate the object environment of Perl 6, but I can't be bothered to learn a hack-around object framework when I could be learning the real deal.

I pulled out Parse::RecDescent today to help implement the knowledge base script for my expert system shell. It's been a while since I read through the dragon book, so I wasn't as sharp with it as I should have been. I also made the overzealous mistake of trying to implement the entire grammar in a single run, when I should have done smaller test cases. I paid for it in debugging time, and am still having a persistent error with one of my productions. I'm trying to keep my grammar for this very light on the punctuation, although some of my recursions are suffering because there is a little bit more ambiguity than there would be otherwise.

I talked about wanting to learn SVN the other day, and now that I've started using it, I almost can't imagine software development without it. I'm half thinking about installing it on my office computer so that I can manage my thesis through it. Although, considering the effort to make this happen (which is, admittedly, very small) and the fact that I am so far along in my thesis, it's basically not worthwhile. I do have a rudimentary file-backup system in place, so I'm not worried about losing too much data. Next on my list of things to learn (or "master") is make. I'm sure I'll fall in love with that program too, given the chance.

I wasn't feeling too good today, so I didn't get much work done on my thesis. However, working on other projects has helped to clear my mind, improve my mood, and rebuild some much-needed enthusiasm for that project. I plan to work on it a lot within the next two days and over the weekend, so hopefully I can stay on schedule.

Tuesday, January 22, 2008

Thesis Status

I've gotten a lot of work done on my thesis recently. Most of the work at this point is just in writing code, so I've written a lot of it.

The design, at the moment, consists of three large data structures, although two of them are going to be merged together. These two objects are going to be wrapped in a large interface class for easy access. The first object (hmodel) represents both an abstract processor hardware design, and an interface to an actual hardware design file for simulation and implementation. The second object (optstruct) represents the design for the associated assembler. These two are non-orthogonal, in that some pieces of data overlap (machine code word formats, opcodes, etc). Integral to this relationship is the implementation of frequent sanity checks, to ensure that the overlapping data items are equal.

The hmodel object interfaces with a Xilinx microprocessor model and a set of associated libraries. As changes are made to the object, changes are effected in the hardware model itself. The processor is broken down into 6 primary components: Instruction Fetch (IF), Instruction Decode (ID), Register File (REG), ALU, IO Ports (IO) and Writeback (WB). For each component there exists a library of plug-in options to choose from. As options are set in the hmodel object, the libraries are scanned for components that match those values. If a match is found, that library component is put into the model. This interface is nearly complete.
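The scan itself is straightforward; here is a sketch of the idea in Perl (the actual implementation is in MATLAB, and the attribute and component names below are invented for illustration):

```perl
# Return the first library entry whose attributes all match the
# options requested in %wanted, or undef if nothing matches.
sub find_component {
    my ($library, %wanted) = @_;
    ENTRY: for my $comp (@$library) {
        for my $key (keys %wanted) {
            next ENTRY unless defined $comp->{$key}
                           && $comp->{$key} eq $wanted{$key};
        }
        return $comp;
    }
    return undef;
}

my $alu_library = [
    { name => 'alu_basic', width => 8, has_multiplier => 0 },
    { name => 'alu_mult',  width => 8, has_multiplier => 1 },
];
my $choice = find_component($alu_library, has_multiplier => 1);
# $choice->{name} is 'alu_mult'
```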

Here is a basic rundown of my progress:
  • Hardware and software design: 90%
  • Basic classes and interfaces: 90% (barring a partial rewrite)
  • Component libraries: 30%
  • Graphical User Interfaces (GUI): 50%
Once the classes and interfaces are complete, the GUI will fall into place very quickly. The component libraries are trivial once I have all my interface designs finalized, especially since I'm not optimizing them. Once I'm code-complete, I start on full testing and debugging, and then I need to prepare 3 physical hardware designs. Once the three processors are complete, I'm finished. At my current work pace, I expect to reach code-completeness within 2 weeks, including a substantial rewrite of some core components. Testing and debugging should last anywhere from a week to two weeks after that. With the system up and running, building the necessary processor models will be trivial.

Monday, January 21, 2008

Amazon and Recommendations

Amazon's recommendation feature is certainly an interesting one, and it can give remarkably good results if you give it enough data. Of course, providing enough data, and enough of the right kinds of data, can be difficult and time-consuming. It would also seem that giving it too much data can be problematic. Let me try to explain.

I don't know the exact algorithm that Amazon uses to make recommendations, although I assume that they are using data based on past user histories. For instance, if a user buys both book X and book Y, when I say that I like book X, the system then also recommends book Y to me. I may be nowhere near the correct explanation with this, and to be fair it doesn't matter at all.
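A toy version of that co-purchase idea fits in a few lines of Perl (to be clear, this is my guess at the principle, not Amazon's actual algorithm):

```perl
# Given past orders (each an array ref of item names) and one item
# the user likes, rank the other items by how often they were
# bought together with it.
sub recommend {
    my ($orders, $liked) = @_;
    my %score;
    for my $order (@$orders) {
        next unless grep { $_ eq $liked } @$order;
        $score{$_}++ for grep { $_ ne $liked } @$order;
    }
    return [ sort { $score{$b} <=> $score{$a} || $a cmp $b } keys %score ];
}

my $orders = [ ['X', 'Y'], ['X', 'Y'], ['X', 'Z'], ['Y', 'Z'] ];
my $recs   = recommend($orders, 'X');   # ['Y', 'Z']
```

A real system would need to normalize for item popularity, but even this crude counting shows why feeding in lots of data about one category skews everything toward that category.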

For school, I tend to buy computer-related engineering books from Amazon. For pleasure, I also buy books on computer programming languages, philosophy, and literature. For a long time, this is all I bought on Amazon, and these were the only kinds of books that Amazon recommended to me. I found a few gems through the recommendation process, but in general it was slim pickings because I was so narrowly defined inside the database. So, looking to get better recommendations, I began to expand my horizons. I listed some CDs and DVDs on my wishlist. I added some books about chess, and some other casual reading books. I added videogames too, since I'm still a player (although hardly so avid a player as I once was).

The recommendations did diversify, although the CD and DVD recommendations were neither numerous nor particularly diverse. Eventually, I stopped getting recommendations for CDs and DVDs altogether. Maybe people just aren't buying those kinds of things on Amazon, so there aren't a lot of recommendations to be made. When I saw videogames that I owned, I would tell Amazon that I already owned them, and so I would be recommended more videogames.

Eventually, Amazon got just the right information it needed, because it hit the motherlode: it began recommending games that I already owned by the dozens. Nearly every recommendation was spot-on, and in short order I had input more data about videogames than about any other type of media. The other edge of this sword is that now Amazon only recommends videogames, related memorabilia, accessories, and strategy guides. It can take several pages of searching before I find things like books in my list of recommendations, and I don't want to take the time to mark "not interested" on all the millions of games and accessories that the system wants me to look at.

Friday, January 18, 2008


Apparently the Vatican has "slammed" some company for doing cloning work. I don't know what company; I didn't read the news story and I don't need to. It's the same old crap every day.
  • Somebody does something new that may have benefits
  • Some conservative or religious authority throws a fit about it
  • Followers of said conservative/religious authority also throw a fit about it
On the train the other day, 4 old people were talking about cloning, cloned food in particular.

person 1: I don't have any problem with cloned food. If it's good and cheap, I'll eat it.
person 2: Me too, I don't really care about it.
person 3: Well, I do! I'm not eating no cloned food.
person 4: Why not? What's wrong with it?
person 3: I don't know, but I'm not eating it. It's just a bunch of scientists playing god, and we don't even know if it's dangerous.

This is exactly my point in a nutshell. Person 3 above "knows" that cloned food is "bad", but doesn't know how he knows it. He can't explain why it's bad, but he knows that it is, and he can't be reasoned with.

I don't have a problem with cloning or cloned food. In the case of meat, I can't really see how cloning is an economical or even desirable alternative to the "natural method" of "new cow production". Genetic engineering has been having a great effect on fruit and vegetable production, and cloning is just another tool that scientists have to continue that process.

Cloning is, more or less, creating a genetic copy of one organism, and using that copy to grow a new organism. A cloned animal (in an ideal situation) should be identical to the original, and shouldn't be harmful in any new or exotic way. A hamburger made from a cloned cow should be the same quality and composition as a hamburger made from a non-cloned cow. There are, of course, the ethical issues to consider in that cloning is not fool-proof, and a majority of cloned embryos are damaged by the cloning process and are not viable. I also don't consider this point to be an issue, but I can see where other people might.

The pope doesn't want us "playing god" by manipulating genes and cells. The pope also doesn't want us to have sex, and when we do have sex he only wants us to do it in certain ways. Sex, according to the pope, should really be considered no more than the mechanics behind "new human production". I think that if we are going to take the fun and the love and the excitement out of the whole process, we may as well be doing it in the lab anyway.

Thursday, January 17, 2008

Snow and Laptop

Finally getting some snow around here, more than halfway through an unseasonably warm winter. There's about an inch on the ground and driving is treacherous.

I re-assembled Geoff's computer today and took it up to the UPS store. The guy there said I had to go back home and call Toshiba. Called Toshiba and was told that I need to take the laptop to a different store in Essington. I don't know when I'll have time to do that (maybe tomorrow, weather permitting). Here's some backstory:

Geoff's laptop was having some occasional problems. They were sparse and intermittent. Every once in a while, the computer would turn off without warning (it may only have happened once or twice, I don't remember the specifics at this point). One day, he turned it on and it refused to boot: halfway through the boot process it would power down and reset. Assuming it was a WinXP software issue, we tried the restore CD, but there was the same problem: it would reboot shortly after turning the power on.

The A75 has a known hardware problem (for which there was a recall that we didn't take advantage of) with the DC power adaptor. Luckily for us, it's the kind of problem that, with a little know-how and the right tools, you can fix at home. The bug is the result of a lousy solder connection between the DC power jack and the motherboard. According to the online fixes I was seeing, fixing the problem was as easy as removing the power jack and soldering on a replacement.

So I took the laptop to campus, where I assumed there would be decent equipment to work with. I was wrong. The soldering irons were shitty, the de-soldering irons were worse, and it was impossible to get the old jack off the motherboard. So, after another couple of weeks, I brought the motherboard back home in despair.

However, there's another problem. When I disassembled the computer initially, I had all the pieces laid out on a table. Shortly thereafter, the RAM chip went missing. So now the computer has no RAM, and even if the fix did work, there is no way I would be able to test it.

When I do go to drop the computer off, here is what I'll have to say: the computer was disassembled and reassembled by me in my home. It has no RAM. There is also an unspecified problem with the boot sequence (which is most likely hardware), but we can't test it because there is no RAM. Also, when I reassembled the computer, there were two screws left over. I know where they are supposed to go, and will probably reinsert them tomorrow.

Tuesday, January 15, 2008

What else am I missing?

First, a quick note about Google Reader. I've never used an RSS aggregator before (at least not to any effect), but now that I've turned all my regular news sites over to Google Reader, my online escapades have become significantly more efficient. In the old days (i.e., a few weeks ago), I would click on site after site in my list of bookmarks, searching for new content. Now, Google Reader checks these sites for me. Beyond just showing me the stuff that I already kept up on, Google Reader also recommends more feeds for me to subscribe to.

It's through this process that I found a blog called "Coding Horror". On that blog, I found this entry which laments the quality of CS programs (the failure of CS programs is actually a common topic that people seem to be writing about). Specifically, it mentions that graduating CS students tend not to have practice in software rollout, including version control and deployment software.

Now, I consider myself to be a decent programmer, but I also don't have any experience in this area. A lot of this is because my projects tend to be personal and are not team-based. A lot of it also is because I do most of my work on a Windows computer, and I always feel like my options there are limited (especially since my computer at home doesn't even have nmake or a C compiler) unless I am prepared to go through a big hassle. What I need is a real computer running real Linux, not just Cygwin. Although, this is a topic for another thread.

There are a few things that I would really like to try to do, mostly for my own personal development:
  1. Learn version control and deployment software: Subversion/CVS, make, etc. I think I know make well enough in theory, but I've never employed it.
  2. Convert over my two sourceforge projects to use Subversion, for practice.
  3. Try to move some of my perl modules (especially my MediaWiki modules) to CPAN. That's going to require other sorts of organization, plus cross-platform testing, etc.
  4. With a knowledge of version control in hand, I would like to try and join a big software development project, like MediaWiki, Perl 6, or Parrot. Even just submitting a bug patch here or there would be a big step up from my usual software development routine.
This raises the question: what else am I missing? What other skills and software proficiencies should I have that I don't?

Monday, January 14, 2008

Online Safety Book and Course

I'm writing this post in response to a Slashdot article from a few weeks back, which, in turn, was written about a news story:

"US Curriculum to Include Online Safety"

The potential here for Wikibooks and Wikiversity is enormous. A course and an accompanying textbook for online safety would have the potential to help influence a nationwide standard for online safety. How often have we all seen news stories of children being solicited online, or people downloading destructive bugs or viruses from unfriendly websites?

This is the kind of thing that many of our tech-savvy members could participate in, covering topics like avoiding phishing, keeping your computer protected from viruses/spyware/adware, keeping children protected from predators/pornography, etc. I would like to start getting some ideas together on this, and I hope that other people would be interested in helping.

Sunday, January 13, 2008

Cell phone

I'm getting ready to buy a new cellphone soon, because my old one is lousy. Dana is on Verizon, and ideally I would like to migrate with her onto a family plan. Both being on Verizon will do a lot to keep our bills lower, because we talk to each other more than anybody else. That's a lot of minutes that become free instead of coming out of the monthly pool.

Considering how web-integrated I've become, I want a phone with a significant internet presence. This includes web browsing, emailing, and a good keyboard; when it comes to keyboards, the bigger, the better. Also, I would highly enjoy it if my phone could be used as a wireless gateway for my computer to access the internet. I wouldn't have to use that capability all the time, but it would be handy during long train rides, or when I travel, or whatever.

I'm thinking about the EnV, which is the phone that Dana already has. She likes it, and it does have a lot of the features that I am interested in. I'm not necessarily thrilled about the $100-$150 price tag, but I am willing to pay to upgrade from my current phone. I would like to have an Android phone, but I don't know if I can wait for that platform to mature. It's getting to the point now that if I don't get a new phone soon, I'm going to go crazy and run my old one over with my car.

Saturday, January 12, 2008


I really wish that I had the time to get back into some of my programming projects. My expert system shell, for one, really could use a lot of work. Major upgrades to it would likely require me to upgrade or even completely rewrite the inference engine (and I see no reason why that shouldn't be possible). In an ideal situation, I would like to have a very advanced and configurable inference engine. It could optionally use forward or backward chaining, trace its reasoning, provide automated explanations, etc. I would also like to expand it to handle arbitrary code-snippet evaluation (arithmetic, etc.).

My wiki bot needs a serious overhaul. I want to redo the library module to include more features, more functions, and more error checking (I do a minimal amount now). I also need to update the library to use POD for documentation (I currently don't offer any real documentation).

Since I need to wait anyway (I can't justify working on these while my thesis still has so much work left undone), I might as well wait for Perl 6 and just upgrade them both at once.

Speaking of Perl 6, I had a great idea for a MediaWiki parser engine that would run on Parrot. It would work like PHP, in the sense that all plain text would be directed to STDOUT immediately. All formatting would be converted to HTML and then directed to STDOUT. I keep reading about the fantastic Parrot compiler tools suite, but I have yet to find any real documentation on them. I think that if the compiler tools are as fast and easy to use as people say, I could become very productive in porting things to Parrot.
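To make the streaming idea concrete, here is a tiny sketch of it in Perl: each line is converted and emitted immediately, PHP-style, rather than building a parse tree first. The handful of rules here are a toy subset I made up, nothing like MediaWiki's real grammar:

```perl
# Convert one line of wikitext-ish markup to HTML. A streaming
# parser would print each converted line as soon as it is read.
sub format_line {
    my ($line) = @_;
    $line =~ s/'''(.+?)'''/<b>$1<\/b>/g;      # bold (before italics)
    $line =~ s/''(.+?)''/<i>$1<\/i>/g;        # italics
    $line =~ s/^== (.+?) ==$/<h2>$1<\/h2>/;   # level-2 heading
    return $line;
}

print format_line($_), "\n"
    for "== Intro ==", "'''bold''' and ''italic'' text";
```

Note that the bold rule has to run before the italics rule, since every ''' also contains a ''.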

I'm working now on an upgrade to my RemoteTalk gadget at Wikibooks. It's a tool that uses an iframe to allow the user to check talk pages on other wikis quickly. The current implementation requires the user to construct an array in JavaScript of all their remote aliases, and I am trying to upgrade it so that the user doesn't need to write any JavaScript at all. Much more user-friendly, I think. I would also like to break ground on a new message alerter program. The message alerter would be a daemon that periodically makes an API call to check for new messages (think hourly, or maybe you would have to select a "fetch" option manually).
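The check step of that alerter could be as small as this (a sketch: the response shape below is what I'd expect from a MediaWiki new-messages query, but I haven't verified it, and JSON::PP merely stands in for whatever JSON decoder the daemon would actually use):

```perl
use JSON::PP qw(decode_json);

# Return true if the API response says there are new messages.
# The nested keys here are assumed, not taken from the API docs.
sub has_new_messages {
    my ($json_text) = @_;
    my $data = decode_json($json_text);
    return exists $data->{query}{userinfo}{messages};
}
```

A daemon would fetch the JSON over HTTP on a timer (or on a manual "fetch"), call this, and pop up a Tk alert when it returns true.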

My other Wikibooks gadgets are relatively stable, but there is always more work that could be done. My book designer gadget (which creates a new book TOC from an outline) needs to be expanded to use my AJAX editing library to create pages with default text (which would, in turn, allow for multi-level outlines to be used). Speaking of which, my AJAX editing routines all need to be improved for increased reliability and functionality.

There is lots of work on the agenda, so I am hoping that I can get significant portions of my thesis completed soon. I'm still struggling with a data representation problem, and I have a few questions about how I am going to manage the GUI portion of it, but once those questions are answered I expect the work to go very quickly. The assembler is basically "complete", and the GUI has been designed, but a lot of the code to handle the GUI has yet to be written. After that, I mostly need to work on the simulator and the component libraries, and then I'm finished. It's difficult to estimate how long each of these things will take.

Thursday, January 10, 2008


I've added some Google AdSense advertisements to my blog here. I have not, however, added these ads to my Wikimedia-related blogs. I don't expect to generate any substantial revenue from these ads, and I'm purposefully keeping them small and unobtrusive.

If I find that they can be used in a tasteful and unobtrusive way, I may extend these to some of the Wikimedia-related blogs as well. Money I earn from such ads (and again, it certainly won't be much) will definitely be used for Wikimedia-related purposes.

Wednesday, January 9, 2008

USA Chapters Meeting.

We had a long chat today in #wikimedia-chapters about the USA chapters situation. I was there, along with Delphine, Pharos, and a few other cameo appearances. I was speaking only from the standpoint of a WMF PA planner, not a ChapCom member. I pushed a slightly fanatical agenda, and I know Delphine played devil's advocate a lot. A little bit of role-playing helped us to put a lot of information on the table. It is going to be a while before ChapCom makes any kind of official statement on the issue of USA chapters, and I certainly can't say anything about it until then.

I would like to have more meetings like this in the future, possibly getting more chapter planners, chapter members, board members, and other related people involved. There are a lot of issues to hash out, and the chat today was promisingly productive.

Sunday, January 6, 2008

Weekend: Ruby Tuesdays

Had a good, albeit boring, weekend. Dana and I didn't have any major plans, which is a nice change from the many busy weekends during the massive Thanksgiving/Christmas season. Unfortunately, the time we normally would have spent running around was instead spent watching TV and playing my Game Boy.

We worked on Dana's resume, which actually turned out very nicely.

A while back, we had gone to a Ruby Tuesdays restaurant, and while neither of us quite remembers the details, we had decided that we were never coming back. However, today we were up the road, hungry, and decided to give it another shot. First off, we waited and waited. It was 20 minutes before the waiter first showed up to take our drink orders. We asked for water and waited another 5 minutes. By then we'd had plenty of time to mull over our orders, so we put those in as well. The food was, to put it simply, lousy. My sandwich was wet and drippy, and fell apart halfway through. Not only did it not taste particularly good (especially not when disassembled), but the weird juice dripped all over my fries and made them soggy and unpleasant.

When we finished, we decided to ask for the check and get out of there ASAP. Of course, the waiter took his time, and it was another 10 minutes before we got out of there. We wanted to leave a small tip, but we didn't want to waste the time waiting for this guy to bring back change, so we gave him $4 (more than he deserved, by a long shot) and stormed out. Never going back to Ruby Tuesdays again, and this time we will remember why.

Wednesday, January 2, 2008

Lex & Yacc

Got my new book "Lex & Yacc" today in the mail, and I'm excited about it. I've wanted this book for a long time now, but I've either never seen it at B&N, or when I have seen it I've been appalled by the price. Lucky for me, I picked it up on Amazon for $10, including shipping. Not bad for a book that ordinarily runs $30 (despite being over 12 years old).

The book is certainly interesting, and I am going to use it as a resource to help me beef up some of the books on Wikibooks, such as the compiler book. I'm also going to learn as much as I can about these tools so that I can port those lessons to Perl's Parse::Lex and Parse::Yapp modules. My C days are basically over, and besides some stuff for school and work, I would like to do my coding in higher-level languages from now on.
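The core trick lex teaches (and that Parse::Lex wraps) can be shown in plain Perl: keep a position in the input and try each token pattern in order with \G anchoring. This is a sketch with an invented three-token language:

```perl
# Token classes, tried in order at the current match position.
my @token_types = (
    [ NUM   => qr/\d+/          ],
    [ IDENT => qr/[A-Za-z_]\w*/ ],
    [ OP    => qr/[+\-*\/=]/    ],
);

sub tokenize {
    my ($src) = @_;
    my @out;
    pos($src) = 0;
    while (pos($src) < length $src) {
        next if $src =~ /\G\s+/gc;           # skip whitespace
        my $matched = 0;
        for my $type (@token_types) {
            my ($name, $re) = @$type;
            if ($src =~ /\G($re)/gc) {       # anchored at current pos
                push @out, [ $name, $1 ];
                $matched = 1;
                last;
            }
        }
        die "lex error at offset ", pos($src), "\n" unless $matched;
    }
    return \@out;
}

my $tokens = tokenize('x = 42');
# [ ['IDENT', 'x'], ['OP', '='], ['NUM', '42'] ]
```

Real lex generates a table-driven automaton instead of trying patterns one by one, but the longest-match-at-current-position idea is the same.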

2 more books to go, I hope they show up this week or early next week.

Tuesday, January 1, 2008

Dana's Blog

Dana set up a blog of her own now, and I think it's a good thing. I've found blogging to be very therapeutic, and putting down your thoughts makes an excellent record that you can trace later. At the moment her blog is very, very pink, which is fitting, I think.

She's already posted some New Year's resolutions, and I have to admit that I'm stumped in coming up with some for myself. They say that you should set specific goals instead of vague generalities. One thing I do want to do is be more organized. This will involve making more frequent and more productive use of my PDA. With my fancy-schmancy new keyboard, that should be easier to accomplish.

Dana and her parents also got new phones for themselves. Dana got a cool "EnV" phone with a full keyboard and internet access. I think I might just want that for myself, but we will see about that in the coming weeks. A new phone would be a great thing, especially if I can use it for internet access on the train.