Jon Galloway posted that he disagrees with one of the points in Joel Spolsky's Career Advice column, specifically this one: Learn C before graduating. Jon goes on to provide six arguments for why you shouldn't learn C anymore if you don't already know it and you want to get into software development. I disagree with most of his points, but the most important one (and therefore the one I'll respond to) is this one:
Quote from Joel's article:
I don't care how much you know about continuations and closures and exception handling: if you can't explain why while (*s++ = *t++); copies a string, or if that isn't the most natural thing in the world to you, well, you're programming based on superstition, as far as I'm concerned: a medical doctor who doesn't know basic anatomy, passing out prescriptions based on what the pharma sales babe said would work.
That sounds good on the surface, but does his code sample really tell you how the string is being copied? It's a symbolic representation of a process that moves memory around, sure, but it's still several levels of abstraction away from the bits actually being shuttled through memory. Why is this the magic level of abstraction that gives you the edge? Why not assembler?
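For reference, here's a minimal sketch of what that one-liner expands to (the unrolled form and the helper name are mine, not Joel's). Note that it's still nothing but pointer arithmetic, which is exactly my point: a notation for memory traffic, not the traffic itself.

    #include <stdio.h>

    /* The idiom while (*s++ = *t++); unrolled into explicit steps.
       The helper name copy_string is for illustration only. */
    void copy_string(char *s, const char *t)
    {
        for (;;) {
            char c = *t;    /* read one byte from the source      */
            *s = c;         /* write it to the destination        */
            s++;            /* advance both pointers              */
            t++;
            if (c == '\0')  /* the assignment's value is the byte */
                break;      /* just copied; zero ends the loop    */
        }
    }

    int main(void)
    {
        char buf[16];
        copy_string(buf, "hello");
        printf("%s\n", buf);    /* prints "hello" */
        return 0;
    }

Even unrolled, nothing here says a word about caches, page tables or timeslices, which is where the next objection comes in.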
Worse still is that it can make you think that programming is about telling the computer what to do with its registers and memory. Oops, wait, that memory was magically paged in from disk without your knowledge. The compiler applied some optimizations behind your back. Your code is running on top of an operating system, which is rationing out the CPU cycles between tens of processes, and yours gets a measly 10% timeslice. And hey, what CPU are you on? Any sneaky little tricks going on with your 64-bit hyperthreaded chip? What about two years from now, when you run your app on a virtual server on a multicore chip?
Thinking that you're in the pilot's seat because you're handling pointers is silly. Better to understand that you're asking the CPU(s), through multiple levels of abstraction, to copy a string. Do it politely - this ain't no rowboat anymore, it's a durned space ship.
However, all popular spaceships (and a bunch more besides) are, for now and the foreseeable future, written in C. So while it's absolutely true that the level of abstraction C teaches you is far from the only angle on what happens to the software you write, it's currently the one that undoubtedly matters most. Whenever you find a weird quirk in functionality exposed by an API, be it a .NET Framework class library, a Java package, an STL template or an open-source GUI toolkit, chances are that it's related to the original implementation of that functionality in the operating system. And all of those operating systems are written in C. So what's a good language to learn before you graduate?
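A small example of the kind of quirk I mean (my own illustration, not from either article): strncpy's refusal to null-terminate a string that exactly fills the buffer is pure C library behavior, and any thin wrapper built on top of it inherits the surprise. If you know C, the quirk is obvious; if you don't, you're back to superstition.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[5];

        /* strncpy copies at most sizeof buf bytes and does NOT
           null-terminate when the source fills the buffer. */
        strncpy(buf, "hello world", sizeof buf);

        /* buf now holds 'h','e','l','l','o' with no terminator;
           printing it as-is would be undefined behavior, so
           terminate it by hand first. */
        buf[sizeof buf - 1] = '\0';
        printf("%s\n", buf);    /* prints "hell" */
        return 0;
    }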
After the release of a big project that contains lots of plug-ins, I've been thinking about a way to maintain everything that won't consume all my time and will still be easy for my users to deal with. The entire system is built so that plug-ins can be added on the fly afterwards and different versions of the same plug-in can be installed side by side (actually, that's a .NET feature based on strong names, which I'm thankfully making use of). So I could just release lots of plug-ins as DLL files and be done with it. In practice, however, I'm a little scared that if I start throwing DLLs around, I'll have to rely on all those users putting the files in the right place and installing them properly. Not that the application makes this hard to do; in fact, the assembly cache is used, so it doesn't even matter where the actual DLL is.
But what if users store these files in temp directories, install them and see them work, only to discover there's a problem after the next reboot, when the temp directory is gone? I'm not sure it's wise to rely on the GAC that much. And what if they want to pass a plug-in on to other users? I've started to think that even though it's possible to release single plug-ins, it might be better to reserve that for specific cases where I want to give someone a plug-in for a special use, and instead create a full installer every time a new plug-in is released or the framework is updated.
But that could get messy if I end up releasing multiple plug-ins a month (I'm not that productive, but small and important bugs are easily found and need to be fixed quickly), which leads me to think it's probably best to do a full installer release every couple of months - every three or six months, say, depending on how fast development turns out to move. It's a good system: users know there's a set time when they can expect fixes and new functionality, the number of different versions in use stays limited (compared to ad-hoc releases, anyway), and I can integrate the process of creating a new build into my workflow.