Category Archives: Geeky

Work Drained Me

Happy Saturday, dear readers! It seems I missed Friday’s post. Please blame my work for this. I’ve been mentally exhausted most of the week as my tasks have changed on a near-daily basis. On the days the tasks haven’t changed, I spent most of my time realizing that what I thought would be a relatively easy task is far more complicated than I first thought. And then the next day I’d realize it’s even more complicated than that. So by the time it came to work on my next post about Spinning Wyrd by Ryan Smith, I had nothing left in the intellectual gas tank. So today, you get some vague ramblings of a software developer.

The big issue I ran into today was discovering that a preexisting piece of software was architected in a way that made the feature I was supposed to add nearly impossible1 to implement without a massive rewrite of the existing code. Since we don’t have the time or budget to rewrite the existing code, that new feature was finally shelved after I spent two days playing “why won’t this stupid thing work the way I think it should?”2

This whole experience is a reminder that project managers and project architects really need to spend more time thinking about a product roadmap for software. They need to try to anticipate what future features might be added so that when they make these architectural and design decisions, they don’t implement something that makes those features nearly impossible — or even just needlessly difficult — to add later. No one can possibly envision every feature that might get added to a piece of software in the future, but I’ve encountered more than one scenario like my current one and thought “someone probably should’ve seen this coming.”3

At any rate, my apologies to my readers who were looking forward to more book discussion/Heathen talk. I promise to get back on schedule next week. For anyone who observes it, happy start of Winter Nights on Thursday!

Post History: This post was written on October 12, 2024. There was no proofreading or revision process.

Footnotes

  1. Not completely impossible, mind you. As I’ve thought about it, I think I’ve come up with a workable solution, but it’s ugly. We’ll see when management decides they want to spend the time and money to revisit the feature. ↩︎
  2. Part of what took me so long to figure out why things weren’t working the way I thought they should is that I’m the fourth person to work on this piece of software, and the original author who laid out the architecture left the company a couple of years ago. So I’ve had to delve into the details of how the software works to a degree I haven’t had to before. Oh, and it’s written in a programming language I’m not terribly familiar with. Fun! ↩︎
  3. For full disclosure, that “someone” has been me at times. I’ve made design decisions in the past that I later realized were mistakes I should have anticipated. There’s a whole other discussion to be had about why this sort of lack of foresight is so common. ↩︎

Playing with Parsers

Lately, I’ve been fascinated with the concept of parsing and developing my own simplistic programming language. When I have my work laptop with me, I’ve been playing with ANTLR 3, which has been a lot of fun. I’ve gotten my parser partially implemented so I can define new enumerated and collection types, declare variables, and even start declaring functions with parameters and a return type. Granted, I still have to add instructions to each function and make it so everything runs, but hey. Rome wasn’t built in a day, right?

The thing is, I really would like to do this on my home computer, which is a MacBook Pro. And while technically ANTLR was originally written for Java and will therefore work on Mac OS X, I feel like that’s cheating. I’d much rather work with Objective-C and Cocoa (especially since I’m still trying to learn them). Plus, I don’t know Java, and the last thing I need to do is add to the list of programming languages I’m still trying to get better at (C# and Objective-C).

Granted, ANTLR 3 technically has an Objective-C runtime (allegedly, there’s even something written for Cocoa to parse ANTLR 3 grammars and create the parser and lexer classes), but it’s pretty darn old. It looks like the Objective-C development stalled about four years ago.

Plus, to be honest, I’d really like to use ANTLR 4, since the documentation for the older stuff seems to be partially missing. Besides, I’m geeking out, which means I’d rather geek out with the latest and greatest. Naturally!

The trouble is, the latest version doesn’t yet support targets (or have runtimes for them) other than Java. Well, that’s not entirely true. I guess there’s a C# runtime and target, though I played with that a bit on my work computer and it seemed a bit undercooked. (Getting things to run was a nightmare.)

Naturally, I went looking for a Cocoa-based alternative and found ParseKit. It seemed great, but I ran into a few problems.

  1. The instructions for including the framework in your own projects seem horribly buggy.
  2. Using the parser generator app that comes with the framework, I still haven’t been able to get my own basic grammars to work. I keep getting errors with not-entirely-helpful messages.
  3. There’s so little documentation for the framework, I might as well say there’s none.
  4. And once again, looking online suggests that this is a project that hasn’t been touched in a couple years.

How frustrating! So I finally figured out how to download the ANTLR 4 code and I’ve decided to try my hand at porting the whole darn thing to Objective-C myself. I’m not sure I’ll actually finish it. I mean, there’s a LOT of code. I figure I’ll start first by just porting what I need to recreate the tool that creates parser and lexer classes from grammars. If I can get through that, I’ll see about creating the necessary Objective-C runtime and the target to actually generate parser source in Objective-C.

This means that I often have a lot of windows open:

  1. Eclipse, so I can sort through the original ANTLR 4 (and ANTLR 3.5, since the former uses the latter in places) source.
  2. Xcode, so I can work on my new Objective-C workspace.
  3. The Xcode documentation window, so I can look up the Cocoa classes I’m still unfamiliar with and may need.
  4. Firefox, so I can look up details about the Java code I don’t understand. (Sadly, this may mean I learn Java after all.)

And to think, this is what I often do for fun. Some days, I think my middle name should be “Masochist.”


On Challenges and Geekery

As I’m getting ready to head to Canada, I thought I’d take a step back and just offer a bit of insight into another area of my life and psyche.

I learned to program in machine code when I was in junior high school.  Some of my readers are probably somewhat impressed. A couple of them might be saying, “me too!”  I suppose some might have even learned at a younger age than I did.  The rest of my readers are going, “What the heck is machine code?”  For this group, let me give a quick explanation.  (Those who already know this or can’t handle so much geekery are welcome to skip over the next few paragraphs.  I’ll throw up a flag letting you know where you can rejoin me post-geekgasm.)

Machine code is the only programming language that the microprocessor that makes your computer work actually understands.  While most programs you use are written in C, Perl, Javascript, Java, Python, C#, or one of dozens of other languages, another program (which is itself already machine code) either took the program written in that other language and converted it over to machine code, or read the program in the other language and told the microprocessor what to do.

It’s much easier to write a program in C, C#, Perl, Javascript, Java, or Python than it is to write one in machine code.  Machine code consists of very simple instructions, like:

  • Add the number stored here to the number stored there and store the result over there.
  • Check the number stored here and if it’s greater than the number that’s stored there, set this flag over here.
  • If that flag over there is set, jump back twenty instructions in this program and start running from that point.

Even the simplest of tasks can take dozens of instructions in machine code to complete.  Doing everything a word processor does would require hundreds of thousands of machine code instructions.  Maybe millions.  Only people who write device drivers and extreme masochists (and believe me, there’s a lot of overlap between those two groups) write in machine code.  Even then, they tend to write in assembly, which uses keywords to represent instructions.  So for example, if I was writing in assembly language, I might write:
ADD AX, BX  (Meaning:  Add the value in AX to the value in BX and store the result back in AX)

In machine code, that would just be a bunch of numbers:
102, 01, 208

The microprocessor would read in those three numbers and know that it was supposed to add the value it had in AX to the value it had in BX and store the result back in AX.  There are programs (conveniently called assemblers) that read programs written in assembly and translate them to machine code for you.
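
For the curious, here’s a minimal sketch in C of the job an assembler (or a kid hand-assembling with a manual) does at its core: look up a mnemonic and spit out the bytes the processor expects. The lone table entry reuses the ADD AX, BX example above, with the same byte values; a real assembler also has to parse operands, resolve labels, and write the bytes to a file instead of the screen.

    #include <stdio.h>
    #include <string.h>

    /* One entry in a (tiny, illustrative) opcode table: a mnemonic mapped
     * to the bytes the processor actually reads. The values here are just
     * the ones from the ADD AX, BX example above. */
    struct opcode_entry {
        const char   *mnemonic;
        unsigned char bytes[3];
        int           length;
    };

    static const struct opcode_entry table[] = {
        { "ADD AX, BX", { 102, 1, 208 }, 3 },
    };

    /* "Assemble" one line: find the mnemonic and print its machine code. */
    static void assemble_line(const char *line)
    {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
            if (strcmp(line, table[i].mnemonic) == 0) {
                for (int b = 0; b < table[i].length; b++)
                    printf("%d ", table[i].bytes[b]);
                printf("\n");
                return;
            }
        }
        printf("unknown instruction: %s\n", line);
    }

    int main(void)
    {
        assemble_line("ADD AX, BX");   /* prints: 102 1 208 */
        return 0;
    }

And yes, hand-assembling is exactly this lookup, done with a pencil and the table printed in the back of the manual.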

Like I said, in junior high school, I learned (taught myself, actually) to program in machine code.  Technically, I learned to program in assembly too.  But I had to learn to translate my assembly programs into machine code myself (this is called hand-assembling, by the way) because I didn’t have an assembler.  You see, I was working on a VIC-20 (the predecessor to the Commodore 64, for those who remember them; for those who don’t, just assume we’re talking about some really old computers that probably aren’t as powerful as the graphing calculator you used in your algebra class) that my father had gotten me at a garage sale.  I had the computer, the power supply, and the old tape drive that you could use to save your programs to cassette tapes.  It was an ancient computer when I got it, so there was no way I was going to find an assembler for it.

Okay, the geek-talk is more or less over.  Welcome back to those who chose to skip it.
So, why on earth did I decide to teach myself programming in machine code when I was so young?  Well, because I was bored.  As I said, I was playing around with a computer that I had nothing for, a computer that let you type in programs written in BASIC (an old programming language hardly anyone uses anymore — and no, Visual Basic is not (quite) the same) and run them.  I had written all the programs in BASIC I could think of and I was bored with it.  I needed something new to do.  Something challenging.  Then I noticed that one of the manuals I got with the computer included a section on assembly code and listed all the machine code instructions that the microprocessor in the computer knew.  So my next challenging adventure presented itself.

My point in all of this isn’t to show off my geek cred or brag about what a smart (and possibly insufferable) kid I was.  It’s that I’ve always loved a challenge.  When I get bored, I want something to do.  I want something to tinker with.  I want a problem to solve.  I especially love those challenges where people tell me I can’t do something, especially when it comes to computers.  (I had a college professor use that fact to trick me into taking on a project for him, actually.)  Learning to program in machine code on that old computer meant doing something that wasn’t easy.  (It also gave me the ability to do something with that computer that an uncle said I couldn’t possibly do.  Like I said, I especially love challenges where people tell me I can’t do something.)  It’s a trait that’s marked most of my life.

Granted, the downside is that it also means that I’m more interested in the challenge than the result at times.  There have been a few times where, once I’ve conquered the challenge, I’ve lost interest in the work that was actually related to it.  “Why should I finish the program?  I figured out how to do the hard part.  The rest of it is easy, tedious, and uninteresting.”  Needless to say, that’s an attitude my college professors found irritating.  Fortunately, I learned to suppress it on the job.  But I’ve also learned to let my boss know when I need another challenge.  Because I live for them.  And I falter without them.

“Sneakers” and past computer worship

Spoiler Alert:  This post is going to give away plot elements in a nineteen-year-old movie.  Face it, if this ruins the movie for you, you probably weren’t going to see it anyway.  😉

This past Friday, I ran to The Living Room Cafe for movie night.  One of the movies we watched was the 1992 movie, “Sneakers,” starring Robert Redford.  It’s one of my favorite movies, and I love taking every opportunity to watch it.

One line in the movie, however, has always bothered me.  It’s delivered in the scene when Liz, Werner, and Cosmo are about to leave the building and the team of thieves is about to get away with their caper.  Liz mentions in passing that she was giving up on computer dating.  Cosmo looks at the “couple,” declares that no computer would pair them together, and (correctly) concludes that the date is part of the caper set-up.

I’ve always taken issue with Cosmo’s declaration.  I find it quite possible to believe that a computer would pair up just about anyone.  Leaving aside the fact that people who use online dating services are notorious for being less than 100% honest when providing their information — even when taking the kind of “personality profile tests” that sites like eHarmony and Chemistry.com use — there’s always the possibility of computer glitches and programming errors.

I suppose the screenwriters felt that given Cosmo’s love of computers, he would buy into such a conceit.  However, I would argue that Cosmo’s love of computers — and more importantly, his deep understanding of them — would make him more aware of how imperfect computers are.  After all, the movie starts with college-aged Cosmo and Martin working together to hack computers and cause mayhem in the name of “fighting the system.”  It seems to me that someone who not only works with computers, but has a history of seeking out and taking advantage of vulnerabilities in computer systems, cannot possibly think of computers as perfect.

I think this is more likely a case of non-computer people of the time projecting their own sense of awe and mystery for computers onto a character who should know better.  In the 70’s, 80’s, and 90’s, there was a sense among the “uninitiated” that computers were incredible devices capable of doing amazing things, and they tended to idolize them as such.  Movies like “Sneakers” demonstrate this sense of awe and worship for them.

I think as more people become familiar with the Windows operating systems and the infamous Blue Screen of Death, that sense of mystique has diminished, if not outright vanished.  But for those of us who delved into the mechanics, that sense of mystery was gone long before that.

Confessions of a Reformed Command Line Snob

[Image: the box art of Windows 1.0, via Wikipedia]

Adam Gonnerman wrote an insightful post about how showing the average user the Linux command line (or the DOS command shell on a Windows box, for that matter) can create a sense of fear and intimidation.  It’s an interesting piece and I highly recommend reading it, as well as the conversation in the comments.

As I look at my own comments in that discussion, I’m reminded of how much my thinking about computers has changed over the years.  When I was in college, I was a total command line snob.  I looked down on GUI’s in general and thought they were the road to making every computer user stupid.

I think this was a common mentality for a lot of us who were into computers back when I was in college and before.  After all, when I first started college in the Fall of 1992, Windows was still something you started from the DOS command line after you booted the machine.  And the computers in the college’s computer labs were set up under Novell.  You’d enter your login credentials, get dumped to the DOS prompt, and type “win” if you wanted to start that stupid GUI.

Even Linux distributions tended to treat XWindows as an afterthought at the time.  That same freshman year, I loaded Slackware Linux onto my IBM XT clone (I will admit that I was nowhere near the cutting edge in terms of the computer I personally owned).  It involved downloading a couple dozen disk images, writing them to 3.5″ floppies, and then using a special boot disk to install the system on the computer.  XFree86 was an optional install and the distribution was — again — set up to have you log into a command prompt and then start XWindows from there.  And since trying to get XWindows to work on your particular configuration was no easy task back then, it struck me as mostly a waste of time.

So I came through a time when using a computer meant you had to be a wizard with a command prompt.  It wasn’t optional.  You learned all the magic commands and you learned how to use them extremely well, or you were hopelessly lost.  It was a glorious time, especially for those of us who loved the challenge.  So to me at the time, the growing popularity of GUI’s (by my senior year, all the computers in the lab were running NT 3.5) and the ease of access they offered was destroying the challenge.  It was making computers something useful for anyone rather than the playground of the geeky elite.  And I was just enough of a snob (and had just enough of my self-worth invested in my geekiness) that this upset me.

So what changed?  To be honest, I changed.  I quit keeping up to date on computers.  I became the average computer user myself, and I found that I liked being an average computer user.  So I let go of my elitism.

I suppose a few readers may be surprised to hear me refer to myself as an average computer user.  After all, how can a software engineer — someone who is well versed in programming computers — be merely an average computer user?  Well, the answer to that is that I’m an embedded software engineer.  And that’s a rather different kind of computer programming.

I’m currently developing the software for a very unusual device.  It’s a computer, but you won’t see it sitting on anyone’s desk.  It has no keyboard, mouse or monitor.  In fact, if you look at it, all you see is a big metal box with a bunch of cables coming out of it.

Inside, there are a bunch of analog-to-digital converters and I/O expanders that allow the processor to read or assert logic levels on various signals on the circuit boards inside that box.  My job is to develop the software that accesses those ADC’s and I/O expanders, does stuff with the data it reads, and asserts certain signals based on that data.  I spent most of this afternoon making sure I could communicate with the ADC’s and I/O expanders.  Tomorrow, I’ll spend a significant amount of my time making sure that the readings I’m getting from the ADC’s are valid and mean what I think they mean.  I’ll also spend time making sure that I can read and control the logic signals from the I/O expanders as I expected.
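
For anyone curious what that kind of code looks like, here’s a rough sketch in C. Every name in it is made up: the device addresses, register offsets, bus helpers, and threshold are stand-ins for whatever the real data sheets and schematics dictate, and the stubbed bus layer just pretends so the sketch stands on its own.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical device addresses and register offsets -- on a real
     * board these come straight out of the data sheets. */
    #define ADC_ADDR              0x48
    #define ADC_RESULT_REG        0x00
    #define EXPANDER_ADDR         0x20
    #define EXPANDER_OUTPUT_REG   0x01

    #define VOLTAGE_LIMIT_COUNTS  0x0300   /* made-up threshold, in raw ADC counts */

    /* Stubbed bus helpers so the sketch runs on its own; real versions
     * would be SPI or I2C transactions against the actual hardware. */
    static uint16_t bus_read16(uint8_t dev, uint8_t reg)
    {
        (void)dev; (void)reg;
        return 0x0345;                     /* pretend ADC reading */
    }

    static void bus_write8(uint8_t dev, uint8_t reg, uint8_t value)
    {
        printf("write dev 0x%02X reg 0x%02X <- 0x%02X\n", dev, reg, value);
    }

    /* Read one ADC channel, decide whether the reading is over the limit,
     * and drive the matching output bit on the I/O expander. */
    static void monitor_voltage(void)
    {
        uint16_t counts = bus_read16(ADC_ADDR, ADC_RESULT_REG);
        bool over_limit = (counts > VOLTAGE_LIMIT_COUNTS);

        bus_write8(EXPANDER_ADDR, EXPANDER_OUTPUT_REG, over_limit ? 0x01 : 0x00);
    }

    int main(void)
    {
        monitor_voltage();
        return 0;
    }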

This is a typical programming project for me.  I spend most of my time looking over data sheets for devices like ADC’s, I/O expanders, microprocessors/microcontrollers, EEPROM’s, and power management chips.  I also read schematics and hardware design specifications that explain how these devices are configured and are supposed to work on the system I’m currently working with.  I’ve learned to write assembly code for PowerPC’s, ARM processors, Blackfin processors, and a few others I’ve probably forgotten about.

Quite frankly, after I’ve spent all that time learning about the stuff I need to know to work with the devices I program, I don’t want to learn about the computer sitting on my desk anymore.  I just want it to work and work relatively well.  I’ll let someone else worry about making sure all my programs work correctly and that my computer is secure and safe from viruses.  After all, the computer on my desktop is just a tool to me now, and tools are good if they’re easy to use.  It gives me more time to focus on all that embedded stuff that’s part of my job.

So I quit being a command line snob.

Geek talk

One of the annoyances I run into due to my job as a software engineer is that when I tell people I’m a software engineer, most of them get that I work with computers. Unfortunately, they assume that this means I work with personal computers, which usually leads to them uttering the phrase, “You know, I’m having a problem with Windows, and…” That’s usually when I have to stop them and explain that when it comes to the PC world, I’m only slightly more knowledgeable than the average user. Most of my professional expertise is in the world of embedded systems.

Of course, part of the problem is that the average person doesn’t really understand what an embedded system is. They’re naturally inclined to think “computer” and picture that machine with a monitor, keyboard, and mouse they have sitting on their desk. So that usually leaves me racking my brain figuring out the best way to explain what an embedded system is.

Today, while talking to Tracie, I realized that I’ve been overlooking a perfect example of an embedded system (though the more geekish might argue how accurately said example can really be called an embedded system) that everyone is familiar with. In fact, three different flavors of this example are quite popular right now: the Xbox 360, the Nintendo Wii, and the PlayStation 3. In the loosest sense, gaming consoles are embedded computer systems.

For those who may not know what an embedded system is, it is a “computer” system that is designed and programmed to perform a dedicated task, as opposed to a general purpose computer — most notably a PC — which is designed to be highly configurable and usable for just about any task. A game console fits this definition quite well in that it is a computer system designed to do exactly one thing: Allow a user to play games.
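
In code terms, that “dedicated task” idea usually shows up as the classic bare-metal “superloop”: initialize the hardware once, then do the one job forever. The function names below are placeholders for a made-up device rather than any real console’s or controller’s firmware; it’s just a sketch of the shape.

    /* Placeholder routines for a made-up dedicated device; real firmware
     * would fill these in from the hardware's register map. */
    static void init_hardware(void) { /* configure clocks, pins, devices */ }
    static void read_inputs(void)   { /* sample sensors or controllers   */ }
    static void update_state(void)  { /* run the device's one job        */ }
    static void drive_outputs(void) { /* update displays, signals, etc.  */ }

    int main(void)
    {
        init_hardware();

        /* No general-purpose OS, no other applications competing for the
         * processor: this loop is the entire reason the device exists. */
        for (;;) {
            read_inputs();
            update_state();
            drive_outputs();
        }
    }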

Most importantly, as an example of an embedded system, the gaming console excels in that it shows the advantage of embedded systems over general purpose computers. After all, a user can play games on his PC as well. However, because the PC is designed for multiple purposes, playing a game on a PC incurs a great deal of overhead. The game software has to interface with a rather complex operating system that has been highly abstracted and work with (or bypass, which is just as problematic) the operating system’s device drivers in order to access resources like the keyboard, mouse, game controller, and graphics controller. If there are other applications running in the background, the game has to be able to play nice with them as well. The net result is that games can run slow on a PC. Anyone who has played many graphics intensive games on a PC will notice that there are just times when the image or sound lags while the computer tries to catch up. This is usually because the operating system or some other application has been doing things to steal resources from the game.

Like any embedded system, a gaming console doesn’t have this problem. The core operating system on the console is bare bones, and designed to support the current game being played. Because of this, the game has access to all of the system’s resources and doesn’t have to worry about another application jumping in and slowing it down. More importantly, the interfaces to devices like the graphics controller, game controller, and audio controller are much simpler and less abstracted. In fact, while I cannot say so for certain, it would not surprise me if most console games access these devices at the hardware level directly rather than going through any device driver at all. Again, with an embedded system serving a single dedicated purpose, this is not only possible, but perfectly acceptable.
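
As a sketch of what “accessing the hardware directly” might look like (the address and register here are completely made up, not any real console’s memory map), the game simply stores a value into a memory-mapped device register itself, with no driver or operating system call in between:

    #include <stdint.h>

    /* A made-up, memory-mapped graphics register at a made-up address.
     * On a bare-metal system, storing to an address like this configures
     * the device directly -- no driver, system call, or abstraction layer
     * in between, which is the overhead a PC game can't avoid. */
    #define GFX_MODE_REG  (*(volatile uint32_t *)0x40001000u)

    void set_video_mode(uint32_t mode_bits)
    {
        GFX_MODE_REG = mode_bits;   /* one store, straight to the hardware */
    }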

Of course, as I said, a game console is not a perfect example. After all, they are still far more configurable and less dedicated than more traditional embedded systems, such as the electronic computer module that controls the fuel injection process in your car. The game console can accept different games, which will provide different software and a different gaming experience for the user, which makes it slightly more “general purpose” than some purists might consider worthy of being declared an embedded system. However, as a coworker pointed out a few months ago, this is a problem in the world of embedded systems in general. With the increasing number of features being added to cell phones and PDA’s (two other devices that have traditionally been considered embedded systems), the line between general purpose computers and embedded computers is becoming increasingly blurred every day.