Joel got it wrong - Why learning C is a waste of your college time and money

Back in January, Joel Spolsky wrote a Career Advice column. It gave some generally good advice to college students, the main points being:

  1. Learn how to write before graduating.
  2. Learn C before graduating.
  3. Learn microeconomics before graduating.
  4. Don't blow off non-CS classes just because they're boring.
  5. Take programming-intensive courses.
  6. Stop worrying about all the jobs going to India.
  7. No matter what you do, get a good summer internship.

[update: note that Joel's advice is to college students in general, not just to computer science majors]

I agreed with everything but (2). I believe learning C before graduating may help some, but hurt just as many. While it might have helped Joel, I don't think it's universally good advice. It's been bugging me for a month now, so it's time to get it in writing. In the interest of hopefully getting comments from someone other than my pals at www.online-cheap-meds-4u.ru and www.poker-gambling-casino.net, I aim for the nearest windmill and present:

Jon's 6 reasons why learning C is a waste of your college time and money:

  1. It's not a skill you'll use in most of the software development jobs you'd want to have
  2. It can give you a false sense of control
  3. It can teach you to get in the way
  4. It can make it hard for you to love framework based development
  5. It can teach you philosophies which will prevent you from really understanding modern programming
  6. It can divert you from the real challenges of software engineering

1. It's not a skill you'll use in most of the software development jobs you'd want to have

 

Without trying to offend anyone who's into this kind of stuff (weirdos!), here are the kinds of things you might use C for these days - writing some kind of device driver, maintaining extremely old applications, embedded development on very limited hardware, or maybe writing an operating system or compiler or something. I guess you could hack away on the Linux (or is that GNU/Linux) kernel.

Hey, if that sounds fun, then you're off to the races. Consider this, though: you're not really going to be solving any new problems. If you want to do web development, database development, desktop applications - you know, stuff you can talk about at a party - you're probably not going to be working in C.

 

2. It can give you a false sense of control

 

Quote from Joel's article:

I don't care how much you know about continuations and closures and exception handling: if you can't explain why while (*s++ = *t++); copies a string, or if that isn't the most natural thing in the world to you, well, you're programming based on superstition, as far as I'm concerned: a medical doctor who doesn't know basic anatomy, passing out prescriptions based on what the pharma sales babe said would work.

That sounds good on the surface, but does his code sample really tell you how the string is being copied? It's a symbolic representation of a process that moves memory, sure, but it's still several levels of abstraction away from shuttling the bits through memory. Why is this the magic level of abstraction that gives you the edge? Why not assembler?
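For anyone who hasn't met the idiom, here's roughly what that one-liner does when written out long-hand - a minimal sketch, assuming s points at a destination buffer big enough for the source and t points at a NUL-terminated source string:

    void copy_string(char *s, const char *t)
    {
        for (;;) {
            char c = *t;    /* read the next source character            */
            t++;            /* advance the source pointer                */
            *s = c;         /* store the character in the destination    */
            s++;            /* advance the destination pointer           */
            if (c == '\0')  /* the assignment evaluates to the character */
                break;      /* just copied, so the loop stops after '\0' */
        }
    }

Spelling it out this way only makes the point stronger: even the "explicit" version says nothing about caches, paging, or what the optimizer will actually emit.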

Worse still is that it can make you think that programming is about telling the computer what to do with its registers and memory. Oops, wait, that memory was magically paged from disk without your knowledge. The compiler applied some optimizations behind your back. Your code is running on top of an operating system, which is rationing out the CPU cycles between tens of processes, and yours gets a measly 10% timeslice. And, hey, what CPU are you on? Any sneaky little tricks going on with your 64-bit hyperthreaded chip? What about two years from now, when you run your app on a virtual server on a multicore chip?

Thinking that you're in the pilot's seat because you're handling pointers is silly. Better to understand that you're asking the CPU(s), through multiple levels of abstraction, to copy a string. Do it politely - this ain't no rowboat anymore, it's a durned space ship.

 

3. It can teach you to get in the way

 

That false sense of control I mentioned before can lead to outsmarting yourself. Some examples:

  • Trying to pool database connections or holding a global connection, not knowing that the data layer is automatically pooling if you let it
  • Overriding the .NET Garbage Collector, assuming that Java / Micro$oft doesn't know how to manage memory nearly as well as you can

In both these cases, a little bit of knowledge is (as rumored) a dangerous thing. If all you learned from C is that you are the boss, you will most certainly write code that plays poorly with others.

4. It can make it hard for you to love framework based development

Modern programming languages run on top of frameworks. .NET apps use the .NET Framework, Java uses J2EE (et al.), and arguably web apps run on top of a browser / communication stack that constitutes an application framework. The list could go on (GTK, XUL, web services, Flash). Most good frameworks are standards-based, and all of them host your solutions so you only solve new problems.

C code, by and large, is not about frameworks. At its best, it uses some libraries and links to some APIs. C gives you a toolset that can solve just about any problem, but requires that you solve each and every problem every time. Frameworks were created to obviate solving the same problems in each program you write. Software development has made a steady progression from code to libraries to components to frameworks. Thankfully, you don't need to retrace this path, just as you don't need to experience the dark ages to live in the post-renaissance world.
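As a small, hypothetical illustration of "solving each and every problem every time": even something as mundane as reading a line of unknown length means hand-rolling buffer growth in C, where most frameworks hand you the equivalent in a single call.

    #include <stdio.h>
    #include <stdlib.h>

    /* Read a line of arbitrary length from fp, growing the buffer as needed.
       The caller must free() the result; returns NULL on end-of-file or on
       allocation failure. */
    char *read_line(FILE *fp)
    {
        size_t cap = 64, len = 0;
        char *buf = malloc(cap);
        if (buf == NULL)
            return NULL;

        int c;
        while ((c = fgetc(fp)) != EOF && c != '\n') {
            if (len + 1 >= cap) {               /* keep room for the '\0' */
                char *bigger = realloc(buf, cap * 2);
                if (bigger == NULL) {
                    free(buf);
                    return NULL;
                }
                buf = bigger;
                cap *= 2;
            }
            buf[len++] = (char)c;
        }
        if (len == 0 && c == EOF) {             /* nothing left to read */
            free(buf);
            return NULL;
        }
        buf[len] = '\0';
        return buf;
    }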

To be productive as a programmer these days, you either need to be learning to get the most out of existing frameworks or helping to build tomorrow's frameworks. To learn to be a productive programmer, you need to learn how to work with frameworks.

Learning to work with a framework takes work. You need to learn research skills that go far beyond fixing syntax errors. You need to learn how to optimally integrate with your environment, what the environment will provide for you, and what's expected of you.

Thankfully, learning to work with a framework is a transferable skill, so the exact framework doesn't matter so much as that you learn how to use a framework - pick an environment and get going.

5. It can teach you philosophies which will prevent you from really understanding modern programming

 

C code is procedural. That means you describe an exact set of steps you want your program to accomplish. If you want to find a value in an array, you write code that loops through each element in the array - a house to house search.

Modern programming is often declarative, meaning that you describe what you want and something else decides the best way to get it. Databases, for instance, respond to "queries" - requests for information in a descriptive language. The database maintains indexes to find the data without looking at every line, and the good databases self-optimize based on the questions they're asked most often. To continue the analogy from before, we replace the house to house search with a 411 call. Cursors in database queries are just about always a failure on the programmer's part to describe the problem, rather than a limitation of the database engine.
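Here's a minimal sketch of the contrast, with hypothetical names - the procedural version spells out the search step by step, while the declarative version just states what's wanted and lets the engine choose how to find it:

    /* Procedural: the house to house search, every step spelled out. */
    int find_customer(const int *ids, int count, int wanted)
    {
        for (int i = 0; i < count; i++) {
            if (ids[i] == wanted)
                return i;      /* found it at index i */
        }
        return -1;             /* not found */
    }

    /* Declarative: describe the result and let the database pick the access
       path (index lookup, table scan, whatever it judges best):

           SELECT * FROM Customers WHERE CustomerID = 42;
    */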

A surprising amount of work in current software is described rather than prescribed. HTML, for instance, is a markup language that describes content and layout to a browser, which does the hard work of wrapping text, applying styles and enforcing user-defined security settings. XPath and regular expressions are two more current technologies that implement declarative programming.
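Even from C, the declarative style shows up whenever you hand a pattern to a library instead of scanning characters yourself - a small sketch using the POSIX regex API (POSIX systems only):

    #include <regex.h>

    /* Returns 1 if text looks like a five-digit ZIP code, 0 otherwise.
       The pattern describes what a match is; the library decides how to
       find it. */
    int looks_like_zip(const char *text)
    {
        regex_t re;
        if (regcomp(&re, "^[0-9]{5}$", REG_EXTENDED | REG_NOSUB) != 0)
            return 0;                      /* pattern failed to compile */
        int matched = (regexec(&re, text, 0, NULL, 0) == 0);
        regfree(&re);
        return matched;
    }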

The point here is that modern programming is moving towards Domain Specific Languages (DSLs) which efficiently communicate programmer intent to CPU cycles. C is not a good prototype for any of these DSLs.

6. It can divert you from the real challenges of software engineering

There is significant work to be done today in software engineering that requires fresh, energetic minds. Some examples:

  • We are on the verge of ubiquitous AI - self organizing systems (software and hardware) that will collaboratively solve problems for us. We're still figuring out how to manage them.
  • We've got a lot of disparate systems that still don't communicate well.
  • We are being bombarded by more information than we can currently process. Fixing the U.S. intelligence system to the point that pre-9/11 dots could have been connected requires storage and analysis of 36 terabytes per week, ongoing (figure from a recent IT Conversations - Accelerating Change audiocast). RSS and e-mail volume continue to grow.
  • Computer users need both security and privacy. Neither is reliable today.
  • Our current methodologies routinely result in failed projects. (Examples - the recent failed $170M FBI project, Mars mission software errors, etc.)
  • Bob in accounting needs a report on recent payment orders to office supply companies, but the data is stored in Excel files on Mary's desktop. Mary is on vacation.
  • Walt in IT just gave two weeks' notice. He just told you that he's been managing several daily FTP transfers of critical company data. He manually inserts the data into the production database using some scripts his roommate helped him figure out.
  • Marketing promised product X by June. Management budgeted for product .25X by June 2025. You are the developer for product X.
  • Etc.

The point is, today's software development environment is dynamic, evolving, and extremely challenging. If you're going to be of help, you need to do something more productive with your time than learn a 20-year-old language. There's plenty to learn - if you're going to take one computer class, make it HTML, Java, C#, VB, SQL, XML, Javascript... anything but C! I've personally interviewed over 100 people for programming positions, and I would take skills in any of the above technologies over C in my hiring decisions.

Epilogue

In the movie Real Women Have Curves, the main character is a Mexican American girl who's been offered a full scholarship to Columbia. Her mother won't let her accept, giving some bogus reasons about needing to stay near her family. When her father challenges it, she tells him the real reason: she's had to work in a sweatshop her whole life to provide for the family, and there's no way her daughter is going to skate from school to college to the easy life without some time in the sweatshops. It's her turn to work.

When an experienced programmer tells you to learn C nowadays, they're either trying to apply their out-of-context life experiences to yours, or they're trying to throw you in the sweatshop because that's what they had to do.

There's plenty to learn in computer science circa 2005. Don't waste your time on history unless you want to be a historian.

26 Comments

  • Sorry, but I have to disagree - although it's not a skill you'd actually *use*, many employers (usually old traditional software houses like...say...Microsoft) still mention it in most job postings, actually, usually C++ - but if you know C, C++ is not such a huge leap...well, if you've got the basics of OO, obviously.

    So, I learnt C way back whilst I was at University (I didn't learn it *at* university, we learnt Pascal and Ada) - I'm really glad I did. It's like learning Latin - though in and of itself it's not much practical use, it does provide a kind of lingua franca when approaching languages derived from it. When learning Java, for instance, most of the stuff there was already pretty familiar, same for C#, Perl and PHP - it was a shortcut to learning more languages. Oh, and if I need to delve into some of the more obscure algorithms or patterns, well, all the old books use C!

  • I think C / C++ was a good thing to learn back in the day, but I think it's poor advice now. If you had a friend in college who could take one computer class, would you honestly say C is the best language for them?



    If it's not practical now, why not learn programming skills AND a practical language you can actually use today? Heck, even Javascript is OO, and it's something you might actually use.



    As for Microsoft and other software houses wanting C++ on resumes, they're not looking to hire the casual college graduate who took one or two computer classes. Joel's advice was that everyone should learn C, which assumes that CS majors would learn C and some modern languages.



    Microsoft is building the infrastructure and frameworks that higher level code runs on, and unfortunately that work is still done in C based languages. However, that's a statistically insignificant portion of the actual code being written today, especially when you consider that this advice is to all college students.

  • If you are an employer, why stop at C? Why not require an understanding of assembler or even hardware engineering? These are good things to have, but the edge they give to programmers is incremental.



    If you are a good developer but you're not too keen on pointers and you run into a memory issue in a managed language, you'll figure it out eventually.

  • I think that one thing and one thing only is good in C: showing relatively low-level memory management. I think that people should see what goes on behind new and delete at some level. That being said, I think that it should only be one course. Writing code in C is a very dangerous proposition. People are taught string handling in C but never taught what a buffer overflow is.



    I only had to write C for one semester, and believe me, it was a long semester. And it's not like I had not been writing code for companies for the prior 2 years.

  • Actually I'm with Darrell, sort-of.



    Every software engineer MUST know an assembly language.



    The CS course of instruction at CMU, back when I attended, required that each CS grad take a "fundamentals of digital electronics" course which included a segment on assembly language programming.



    This was to guarantee that every CS graduate had a fundamental understanding of how computers worked at the lowest level.



    I don't know how many times I run into people who don't understand the simplest aspects of computers because they never had the fundamentals.





  • "I don't know how many times I run into people who don't understand the simplest aspects of computers because they never had the fundamentals."



    I run into these people all the time, too. In fact, I've always worked for some. I started my career as a C++ programmer and I wound up maintaining 5-year-old software while everyone else built intelligent, componentized, modular web apps in VB. And got paid more.



    I'm with Jon. C is the sweatshop of programming languages. Leaving it behind was, by far, the best thing I ever did for my career. There might be a lot of C++ programmers at Microsoft, but most developers - and this may come as a surprise to you - have less than zero interest in working for Microsoft. This isn't an anti-Microsoft sentiment, this is a sentiment of "I'd rather build cool web/database apps that do interesting things than spend 6 weeks optimizing the paging algorithm for virtual memory in Windows."



    There's a time and a place for C programmers, and the kind of person that would be good at the jobs that require C are the kind of people that wouldn't care what the rest of us say, anyway.

  • I think what the C-ophiles are trying to say is that you can't be a truly effective programmer while being ignorant of computer organization and architecture. C is close enough to the hardware that you have to worry about allocating and deallocating everything from HANDLEs to memory blocks, and about how to optimize your code. This knowledge is useful even if you're programming in JavaScript because you can make reasonable assumptions about how the script engine is going to interpret your commands.

    Why not assembler? Yes, it is true that assembler will take you down even further, to the level of registers and such, but I can think of two reasons to prefer C. Assembler is platform-specific, as the instruction set will vary; C is cross-platform. malloc will work on any platform for which a compiler has been written. C can teach you about hardware resource management without tight coupling to your platform.

    I still agree, however, that there are better starting points. One can learn C++ without first learning C. In fact, you could probably be successful if your first language was C# or Java, so long as you took a course on Computer Organization and Architecture and really understood the concepts.

    As for C++ being an old/dying language, I'm not so sure. I think it is moving from the mainstream into legacy, but there are millions of lines of code that will continue to be maintained in C++ for a long time to come. Also, there are sufficient cases out there where you want to get down to the hardware resource management level and write code. Major game studios like EA aren't likely to be putting out C# or Java code any time soon.

  • I have specific technical issues with your arguments, but the part that bothers me the most is that you're encouraging CS students (the target of Joel's essay) not to learn something that's already proven to have lasting value, and is certainly quite important to the way things are done these days.

  • Who are we focusing on here? Are we talking CS students who will go on to be CS professors or focus on device drivers or embedded systems? Or are we talking about training the next generation of Software Engineers who will build business applications?



    If we're talking about the latter, let's look at the facts. Software construction only accounts for 20-30% of a large software project. Bugs introduced during the design of the system take many times longer to fix than bugs introduced during coding. 50% of software projects fail.



    So excuse me if I don't give a rat's a$$ that you don't know EXACTLY what while(*s++ = *t++); does.



    I care more if you know how to use a source control system, work in a team, identify missing requirements and get things done that need to be done with high quality. I've seen the effects of C whizzes who know the finer details of every algorithm in Knuth's landmark books, but couldn't program a maintainable, internationalizable, scalable system if their life depended on it. I think a lot of CS programs are lacking in teaching students the basics of being a software developer.



    Who here learned to use Source Control in college? Or how to work with a team, push back on management, and deliver projects on time and under budget?



    By all means, learn about computer architecture and C because you're curious, but let's get college students to really learn the necessary skills to be software engineers.



    Unless of course, you did mean the former.

  • I very much appreciate everyone's comments, and am willing to give a bit on some points and clarify on others:



    * C++ is not a dead language, and that won't change in the near future

    * Embedded / realtime apps still rely on C / C++ based code, and they are important

    * Memory management is important

    * Learning modern languages without an understanding or appreciation for what's going on behind the scenes isn't ideal



    Counterpoints:

    * Joel's article says "college students", not CS majors. I agree with haacked here - I can understand arguments for learning C if you're a CS major and have a non-zero chance of doing C work or needing to know about memory management. The thought that Ed the accounting major should spend his computer class hours on learning C just seems ridiculous to me. He'll never use it, and the only thing he'll remember in two years is that he'd rather crash-test Mini Coopers than program computers. That's much worse than never exposing him to computer programming at all. What percent of college graduates will ever compile a C program? What percentage of college graduates will need to write a spreadsheet macro, or do some basic HTML editing? I don't think anyone can convince me that every college student should learn C.



    And you know what? I've seen just that. My friend Laurie was in a medical program and had to take a basic programming class. UCSD wasted her time with C, probably because that's what the head of the CS program learned when houses cost a dime. I helped her when she was stuck on her assignments, knowing all she was learning was to fear computers. What a waste!



    * Game programming sounds fun, but from all the game programmers I've spoken to, it's a lot harder and less fun than you'd think.



    * I have to believe that someone (probably not the U.S. university system, but someone) could come up with a way to convey the complexities of memory management without wasting an entire semester.



    * Data structures and algorithms, as some have mentioned, would probably be more useful than C.



    * Understanding what goes on behind the scenes when someone posts to your webpage or runs your time card application is nice, but is of less importance than understanding how to complete a software project successfully. Given our failure rate, does it make sense to burn out our new recruits on memory allocation, or should we get them going on designing and building professional (secure, performant, economical) software applications?



  • Just realized this after submitting the above:

    * CS professors teach C instead of effective software development because it's much easier. We know very well how C works; effective software development is much more of an unknown.

  • i have no argument with anything you said except "stuff you can talk about at a party " because i am thinking that talking about programming at a party is not going to get you laid! :)

  • The problem with many folks is that they don't understand what "tools" are. The point of a "tool" is to help you get a particular job done... nothing more, and nothing less. C is a decent learning "tool", helping you understand loops/arrays/et cetera. C is a good learning "tool" for getting you close to the hardware without using up a lot of valuable time moving bits around different registers. C is a bad tool if you're trying to code a CMS (content management system) for a website; PHP/MySQL is a far better tool for that purpose. A decent carpenter has many tools; he's not going to pound in roofing tacks with his 3 foot level, he's going to use the proper tool for the job, case closed. A decent CompSci student will take a useful 101/102 C course to get an understanding of program flow/functions/etc... then as the student matures and becomes job ready, he will use the proper tools for the job... that is, .NET or whatever she focuses on. Everybody in this entire post has their intelligent arguments... yes... but I think people miss the boat because they don't know when to use the right tool, at the right time, for the right job.
    Paul

  • I have to disagree. C is obviously not the language of choice for most application programming today, especially with OO languages like C++ (and besides OO features, support for abstract mechanisms such as templates [esp. the STL], namespaces, enumerable types, etc.), not to mention the likes of 'virtual' platforms such as .NET and Java which make coding up applications more straightforward than ever. But what happens if you do not go into application programming when you graduate from your computer science course? If you are a systems programmer, C is invaluable - I still find the task of writing a kernel in C++ daunting. Almost all kernels are written in C. Even at the application level, there are times when one needs to get closer to the machine and shed the abstraction for a while, and C is ideal for that. A system administrator may use Perl on the command line and for text processing, but for doing something that relies heavily on bitwise operations - for example, an implementation of AES - performance is crucial and C would be a suitable candidate. Anyway, learning C should take little time - since many languages are based on its syntax and it has probably more source code examples than any language, it is trivial to get to grips with - and knowing an extra language is seldom useless.

  • @Michael - I agree that systems programmers will probably still need to work with C, but the percentage of computer science graduates who will do systems programming is incredibly small. The majority of students in CS courses, even CS majors, will end up working on applications. They would be much better prepared to do great work by focusing on state of the art tools for application development rather than C.

    As processors and frameworks continue to improve, the occasions in which C-level "close to the metal" development is appropriate have really shrunk. Now performance problems are generally caused by poor architecture or fighting the system (OS, frameworks) rather than by any hardware limitations.

  • I disagree. C is a very worthy thing to learn. Forget about its practical uses (which you have already dismissed); it's all about the ability to understand complex systems. In my experience with programmers who started with higher level languages (VB/Java/some other garbage collected system), they simply don't understand a lot of basic programming methodologies and patterns, even the types of things that transcend programming languages. They only understand the concepts in the world of VB, or the world of Java, and the second they leave the world of Java, it's a whole new universe. It's human nature to not want to exceed the bare minimum; if the language doesn't make you learn about memory, then you won't learn about memory. Any programmer's aptitude is a function of their knowledge of all of the technologies underneath the one they are using. If your assignment is to make your Java program faster, how could you possibly get started if you didn't know how Java interprets and executes code? How do you know how to optimize a loop if you don't even know what "the fastest possible loop" should look like? And it's not any one thing about C that makes it a great language to learn, it's everything. Humans are excellent learners when we are forced to learn, and forcing you to learn is exactly what C does.

    "It can give you a false sense of control"

    So no sense of control is better than a false sense of one? At least with a false sense you are on the right path. And just because you can't have full control doesn't mean you should have no control. Experienced C and C++ programmers know about paging, code optimization, and CPU time slices (hell, those are TOOLS at the programmer's disposal, not hidden things like you seem to think...) and all that stuff you mentioned; the only ones who don't know about that are the VB and Java people.

    I agree with the specific example that you gave about outsmarting yourself, but my experience is wholly the opposite; the only people falsely claiming they know what they are doing and trying to outsmart themselves are, again, the VB and Java people.


    "It can make it hard for you to love framework based development"

    This is very true; coming from C++, I like to write my own code for some things, which can tend to overlap with built-in functionality that you inherit with the 60 MB runtime.



    "It can teach you philosophies which will prevent you from really understanding modern programming"

    Modern programming, I consider to be the code going into Photoshop, or digg.com, or Microsoft Word, or AIM, and all of these apps are based around procedural programming. Even the database query languages are partially procedural (T-SQL, PL/SQL... you know, the stuff actually getting used...). I don't really see a lack of procedural programming.


    I don't know if this was said, but C and C++ are still very much alive in DESKTOP applications. Almost every single high end productivity solution on Windows is a C++ application at its core, at best they have some managed integration endpoints for extensibility and maybe some UI. Where are all of these "modern" apps?

  • "If your assignment is to make your Java program faster [...] how do you know how to optimize a loop if you don't even know what "the fastest possible loop" should look like?"

    Again the conceit that, as a C programmer, you're going to know that. In Java. In *this version* of the JIT.

    Either you write the obvious code, the same in any language from C to VBScript, or you commit to perpetual microoptimization by observation. Or do you have the inside track on the JIT optimizer?

    [in a separate post]
    "This knowledge is useful even if you're programming in JavaScript because you can make reasonable assumptions about how the script engine is going to interpret your commands."

    No, no you can't. You either use it the way it expects to be used, or you optimize based on testing. Second-guessing what the script engine does under the hood is a recipe for disaster (or at least wasting a lot of time for no measurable gain).

  • Sure, todays graduating "developers" are idiots anyway, so why teach them something useful like C or ASM?

  • "I think everybody should learn COBOL"

    Wot the _____?? COBOL to C# or Java??
    Seems really weird. Place it in resume ??
    O humanity!!

  • 1. C/C++ programmers rarely have trouble speaking Java if they have to. Can you say the same about an average Java/.NET/Python/PHP coder?

    2. A C programmer knows very well that copying an array of objects involves a loop. A good C programmer knows that Shlemiel-style str(n)catting a lot of strings to a buffer is not a very good idea (see the sketch at the end of this comment). A C++ programmer knows the difference between X() and new X(). While this is very simple, of course, most very-high-level language programmers don't have Dr. Watson's skill of noticing the obvious. "O(N²)" says nothing to quite a few of them. While premature optimization is the root of all evil, premature pessimization is hardly better.

    3. This is not to say programming should be taught in C. My personal opinion is that students should be taught the very basics (conditional operators, loops, error handling (all the methods) etc.) in a really simple language like Python, or perhaps Scheme. But then they should switch to C++.
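
    A minimal sketch of the Shlemiel pattern from point 2, with hypothetical names - strcat walks the destination from the beginning on every call, so appending n pieces costs O(n^2), while remembering where the string currently ends keeps it linear:

        #include <string.h>

        /* Shlemiel: strcat rescans dest from the start on every call.
           Assumes dest is large enough to hold all the pieces. */
        void join_slow(char *dest, const char **pieces, int n)
        {
            dest[0] = '\0';
            for (int i = 0; i < n; i++)
                strcat(dest, pieces[i]);
        }

        /* Keeping a pointer to the current end does the same job in O(n). */
        void join_fast(char *dest, const char **pieces, int n)
        {
            char *end = dest;
            for (int i = 0; i < n; i++) {
                size_t len = strlen(pieces[i]);
                memcpy(end, pieces[i], len);
                end += len;
            }
            *end = '\0';
        }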

  • I'm a Management Consultant and have been doing BA and PM work, amongst other stuff, for many years. I have interviewed more than a few developers in my time.

    I must confess that in all my interactions with developers, the ones who have really known their stuff seemed to all come from a hard core C/C++ background. That's not to say there weren't a few good 'uns who came from VB or Perl or other backgrounds. I have of course met many C++ geeks who I wouldn't rely on to code anything beyond some small utility.

    I think it's like every other profession. The good ones are good because they're interested and dedicated and want to take it to the nth degree. You tend to find more good ones with a background in the hardcore languages like C and C++ because they were interested enough to investigate beyond the packaged frameworks of the modern languages. Why and how are powerful words. Why was it like that? How does that work? If you have that sort of mind then you will be good no matter what language you learn.

    Personally I think C and C++ teach you all about memory management in a way that the managed languages don't, which gives you an instant advantage in the programming game.

    I want to get into development more for my own purposes. I will start with C++ and then move on to C# and/or Java. It'll take me longer, but my interest will mean my understanding will be far greater in the long run.

  • In my current work, I can't overstate the significance of C. A large number of UK-based companies are looking for talented C software developers. But lately, I noticed that the demand has shifted to C++ coders. In order not to waste any manpower, C++ coders teach the C folks enough C++ to be more effective.

  • I am a C++ dev who now mostly does C#. I can tell you that coders who did C/C++ first and those who learned C# as their first language are as different as night and day. The most brain dead moron can eventually get C# code to "work", but cares neither why it works nor whether it is efficient.

    The pure C# coder blames bugs on the Framework, garbage collection, the OS, etc. They are so used to so many things being done for them it's no great leap to assume that they don't have to actually write good code or debug it.

    Unfortunately, this is the future of programming. God help us all.

  • The addition to your first main reply - that CS professors teach C because they know it better than they know proper design principles - is, in my experience, EXTREMELY accurate. Every professor I've ever had who taught C (and several I've talked to, as well as many other programmers who started with and continue to use C) has had only the barest concept of even proper Object Oriented programming, much less designing good applications.
    When confronted with a problem, they hack something out in C and call it "good enough". They spend DAYS on micro-optimizations, and miss the O(n^2) that their entire algorithm results in, when there's an O(log(n)) solution that they could have found had they thought about the problem in the abstract instead of in C code.
    Heck, few of them even see the use for the Model-View-Controller pattern when building a GUI app that hooks into a database. They roll it all into one, with very poor separation of classes. And yes, they've SAID that they don't think there's a reason to use MVC in large applications.


    That said, C-as-learning-fundamentals is useful, and EVERY programmer should have at least some concept of the fundamentals. But can't we find a better language that fits this bill? C produces hideously un-readable code, and most classes deal exclusively with the standard library, which is RIDDLED with obscure idiosyncrasies that shouldn't exist in the first place. The few good C/C++ developers I've run across almost universally use BOOST, or at least rant and rave about the idiocy of the standard library.
    Oddly enough, those good ones are also usually math professors, and they absolutely devour design patterns and hard problems (artificial neural networks, thread control in a kernel, etc). They _learn_, while most C-teaching professors seem to be constantly patting themselves on the back for their insignificant micro-optimization that is called once in their entire program, rather than improving the program as a whole.

  • Joel got it (mostly) right. It is *you* who got it wrong.

    Joel's point was that low-level programming is a good filter for weeding out mediocre programmers, right at the beginning. You completely failed to address it. In fact, today the market is flooded with BAD programmers producing dismal code, in many ways just because it is so easy now to become one.

    What prevents you from starting low-level and moving high-level later on? Nothing. In fact, a good programmer can do both well; a bad one, probably neither. Will programming low-level impact your high-level abstractions? Well, if you are dumb, you probably will pool connections and go override the GC without need. But if you are dumb, you will do some stupid things anyway. Assuming you're not, you simply won't do it.

  • That's an apt answer to an interesting question

Comments have been disabled for this content.