Archive for the ‘Misc’ Category

Of Course He Got It Today

Thursday, June 24th, 2010

There was a guy in the supermarket today with the new iPhone. “Nice phone,” I told him.
“Thanks,” he said, “I got it today.”
“I know,” I replied.

I scored 90%

Monday, September 15th, 2008

I got 9 out of 10 correct on the San Francisco Bay Area TV Trivia Quiz.

Future History’s Evil Mastermind

Sunday, August 3rd, 2008

I’ve been watching reruns of this old show on the History Channel, History’s Lost & Found, a sort of where-are-they-now of historical artifacts. I can’t get past this idea that it’s just a shopping list for a super-villain from the future with a time machine. Someone should check and make sure all that stuff is still there.


Saturday, August 2nd, 2008

I’m working on a theory that uses socks as the factor determining how I regard someone. I can’t think of a single group of people that I care about who don’t wear socks in the normal course of their day.

People who don’t wear socks:
Indigenous people.
Surfers/beach people.
The homeless.
Double amputees (legs only).
Vice cops from the 80’s.

I think I might be on to something. I don’t really care about any of those people.

How Documentation Can Impede Development

Sunday, November 18th, 2007

I’ve had this noble desire to document my development work for a long time, but whenever I try, the first casualty always seems to be productivity.

One of the projects I’ve been working on for a while now, for example, is documentation on encoding DVDs and AVI files for iTunes/AppleTV. So far, I’ve documented my hardware setup, the method I use for ripping and encoding DVDs, and the tools I use to convert AVI files to compatible MP4 files. This documentation, though, only accounts for half, maybe even just a quarter, of what I’ve implemented. I’ve got scripts for automating the conversion process and moving the files back and forth between several computers, and scripts for grabbing plots, release dates, ratings, and posters from IMDb and automatically adding them to iTunes. I’ve got scripts for parsing my iTunes library and uploading all that data to my web page, so I have remote access to view the movies in my collection.
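For illustration, the parse-and-upload half of that pipeline can be sketched in a few lines. An iTunes library really is an Apple XML plist; everything else here, the field names, the sample data, the JSON shape, is a placeholder assumption, not my actual scripts:

```python
# Rough sketch of the "parse iTunes and upload" idea. An iTunes library
# is an Apple XML plist; the track fields and JSON shape below are
# illustrative assumptions, not the real scripts described above.
import io
import json
import plistlib

SAMPLE_LIBRARY = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>Tracks</key>
  <dict>
    <key>1001</key>
    <dict>
      <key>Name</key><string>Example Movie</string>
      <key>Genre</key><string>Drama</string>
      <key>Year</key><integer>2007</integer>
    </dict>
  </dict>
</dict>
</plist>
"""

def movies_from_library(xml_bytes):
    """Extract a movie summary from an iTunes library plist."""
    library = plistlib.load(io.BytesIO(xml_bytes))
    return [
        {"name": t.get("Name"), "genre": t.get("Genre"), "year": t.get("Year")}
        for t in library.get("Tracks", {}).values()
    ]

if __name__ == "__main__":
    # The resulting JSON is what would get uploaded to the web page.
    print(json.dumps(movies_from_library(SAMPLE_LIBRARY)))
```

The upload step itself is just shipping that JSON to the server however you like; the interesting part is that Apple already stores the library in a machine-readable format.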

None of this is documented yet. In fact, I find myself reluctant to develop new features and new scripts because I don’t want the divide between the code and the documentation to grow any more than it already has. At the same time, the weight of the documentation is high, the desire to do it is low, and the call of Oblivion is strong.

Doing nothing isn’t even a guaranteed way to prevent the Documentation Divide ((Someone make a note of the term “Documentation Divide”. If there isn’t already a phrase for this phenomenon, I want to nominate this one.)) from growing. My Windows system started to go south, so I was forced to buy a new computer ahead of schedule (a shiny new Intel-based Mac mini; I call it Cletus).

Not only does this render most of my existing documentation obsolete (or at least irrelevant), it means that my model changes from a cross-platform, Frankenstein-style system running on multiple machines to an integrated application running on a single box. Development gets easier, new features are possible, and my audience goes from three people who happen to be running the same outdated, multi-cultural hardware setup as me to anyone running a halfway decent Mac. The gulf between what’s documented and what I’m running is wider than ever, and finally having the speedy new Mac I wanted probably means it will continue to grow.

… and maybe that’s the lesson here. When I started this blog, I made the slogan “The Blog is Not the Point” as a reminder to myself that this was a web log, and that my projects were more important than usage statistics, getting Dugg, or trying to generate ad revenue. That same slogan can be equally effective in reminding myself that I don’t have to wait for my blog to catch up with the code before I get back to work: the Blog is Not the Point.

My First Digg

Sunday, October 21st, 2007

The other day, for the first time, I felt like I had something interesting and timely to say, so I submitted the story to Digg. Yeah, it was weak, but I read Digg regularly, and was curious about what would happen. It turns out, not much: I got a few hits and a handful of people yelled at me ((Which actually sounds like the Internet in a nutshell, now that I think about it)). Fortunately, at least a few people thought it was a good idea, because I got a couple of diggs. Which is good, because I avoided that pathetic, I-dugg-myself “1 digg”… which, I now understand, is the real reason one might not want to Digg one’s own work. But I digress…

In retrospect, my title, “Buy a new Mac now to avoid having to pay for Leopard on your old Macs”, was probably my first mistake. Not only does it sound like an ad and sound like piracy, it sounds like an ad about piracy. In my own defense, though, I swear that it didn’t occur to me that I was advocating piracy; I thought I was just pointing out a good deal. But no, let’s be clear: you can only install your copy of OS X on a single computer unless you purchase the family pack.

To be fair, though, I was suggesting that people buy a new Mac, which, at the absolute minimum will run you about $600… and I was suggesting it to people who already own at least one Mac. In other words, even if you’ve already made a significant financial investment, and you make an additional financial investment, it’s still piracy; people will yell at you if you suggest it.

This is why people hate Mac users.

Things to Do

Friday, October 19th, 2007

My productivity dipped, not surprisingly, after a bout of food poisoning, combined with the brief lull preceding the start of the fall television schedule, prompted me to buy an Xbox 360. In an effort to get back on track, I’ve compiled a to-do list, of sorts, to jump-start my flagging… something.

Finish Video Server series.

I got off to a good start with my series of entries on ripping and encoding files for my iTunes/AppleTV video server, but I’m sort of stuck on what was to be the final entry: adding metadata to the files to make them pretty. It comes down, in part, to not being sure how to distribute a couple of accompanying scripts. They’re too long to post as part of the entry, so they’ll need to be archived and put up for download as a tar file. The real sticking point, though, is that they need more documentation, and I just haven’t been able to bring myself to write it.
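For what it’s worth, the archiving part is the easy bit. A throwaway sketch (the file names are placeholders for the scripts in question) might look like:

```python
# Minimal sketch of bundling accompanying scripts into a tar file for
# download. File names here are placeholders, not the actual scripts.
import os
import tarfile

def make_archive(archive_path, files):
    """Pack the given files into a gzipped tar archive, storing only
    their base names so the archive extracts flat."""
    with tarfile.open(archive_path, "w:gz") as tar:
        for path in files:
            tar.add(path, arcname=os.path.basename(path))
```

The documentation, of course, is the part no script can automate.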


Moving to was far simpler than I’d hoped, but that was only the first step. I need to reduce the complexity of the site, removing the ill-conceived user-centric directory structure for public-facing content and making everything function privately. In other words, go back to a log-in-to-use-the-tools model, and decide after the fact what is publicly accessible. That model is great for complex social-networking-type sites, but since I don’t have much interest in doing that, the complexity is making everything else three times more difficult. While I’m in there, I also need to streamline the database access and see if I can come up with a way to benchmark the capacity of the site. It would be nice to know exactly what kind of pounding the codebase could take.

Implement OpenID on

I was driving back from Sacramento one night when I had a revelation about how one might implement decentralized centralized user authentication for websites. When I started looking around to see if anyone was doing what I was thinking about, I found that the guys at were doing almost exactly what I had thought of, though there were some issues with the implementation at the time. I carefully weighed my options, and rather than using my copious free time and obvious genius to get involved and fix the perceived problems, I think I elected to watch TV and play video games. Eventually the Sxip technology morphed into OpenID, and, for the most part, it seems to work. I implemented one of the early incarnations of the library for the code, and for all I know it still works, but I’ve been less successful finding a good WordPress plugin. One guy seems pretty close, and if he hasn’t released a stable version by the time I get around to working on this, I’ll try playing around with his beta.

Unit testing and monitoring

One of my chief goals is to build a codebase I can use for rapid development of new sites. I had two sites in mind when I started, and one of them,, I already built before realizing that it wasn’t very exciting and not something I had any desire to work on long term. However, that doesn’t mean I don’t want it to stay up and running. Since it’s running on the same alpha code that runs, there’s a very real chance that any changes I make on skedevel will break precautionmail, and it’s unlikely I’d notice for weeks, since, let’s face it, precautionmail is boring, and there’s not a lot of incentive to make sure it’s working properly. Therefore, I need to implement some sort of functionality to monitor that the major functions of the site are working correctly, beyond just a simple pattern-matching HTTP check (though that would be a good start). Since fully testing the site involves logging in, writing a message, and verifying that it was delivered after a preset period of time, there could be some fairly major engineering involved.
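The good-start version, the simple pattern-matching HTTP check, is only a few lines. A sketch (the URL and pattern any real monitor would use are placeholders):

```python
# Sketch of a simple pattern-matching HTTP check: fetch a URL and verify
# that an expected string appears in the body. The URL and pattern a
# real monitor would use are placeholders.
import re
import urllib.request

def check_page(url, pattern, timeout=10):
    """Return True if the page loads and its body matches `pattern`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except OSError:  # URLError subclasses OSError; covers network failures
        return False
    return re.search(pattern, body) is not None
```

The full test (log in, send a message, wait, verify delivery) would layer on top of something like this, probably driven from cron; that’s where the real engineering lives.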

Find a new WordPress theme.

When I first implemented WordPress, I liked the default theme, and I figured that most people using WordPress would change theirs, thus rendering my site, using the default, kind of original. I don’t know if that’s the case or not, but I have decided that it’s always lame to use the default. Even if I were the only one in the world doing it, it would still be lame, because using the default is lame almost by definition.