while(Coder.LOC < LOC.Max)

"The idea is that it’s not so much a good idea to measure how many lines of code a developer is producing (e.g. 7 per day), but rather how many lines of code they are responsible for. And that there’s an absolute maximum. In other words, once I’ve got 50KLOC, I’m done – all my time is going to be spent in maintenance.

I’m not saying 50K is the number – in fact the number will be different for different developers – but the idea that a single developer has a capacity resonates with me. I know how hard it is to go back to things I haven’t looked at for even a week when I’ve got a lot of stuff going. And forget about supporting code that I wrote two years ago."
[Craig Andera]

An interesting suggestion; however, as the codebase matures, it makes sense that the LOC per dev would grow (it is far easier to maintain 50K than to hand-code, test, and debug 50K). If anything, you would at least see a shift in where your LOC live. For instance, say you have a method called DoSomethingFairlySimple, which does something fairly simple, in fact, simple enough that the 20 lines of code in there can be 99.999% certified bug-free. Since the implementation of such a simple method isn't going to change, should I let it eat into my LOC? Likewise, what happens when my RssZeroDotNineOneParser class matures to the point where it is extremely stable and can parse any feed I throw at it? The RSS 0.91 spec isn't going to change, so can I remove this from my LOC, or do I spend time maintaining a piece of code that doesn't need refactoring or maintenance? It is entirely possible to refactor something too much, just as it is entirely possible to overdesign the solution in the first place. As has been pointed out before, the entire point of refactoring is to make upcoming changes to your codebase easier, so if there are not going to be any modifications, why do the extra refactoring in the first place?

But, we must always keep in mind that if our code is self-documenting where things are clean, and manually documented where things are sticky, then we should be able to take just about any member of our dev team, plop them down, and have them add functionality or perform fixes on old code when the need arises--regardless of whether their max LOC is 100 or 100K.

1 Comment

  • LOC.Max might also be a context-switching thing. A programmer can keep only so much code in "RAM" where it's easy to remember what everything does; after that, things get pushed into the background.



    Suppose I'm developing the frobnicator 1.0 and GUIFrob 1.0, a GUI for the back-end application. I might spend a year developing the frobnicator, then essentially abandon it and spend a year developing the GUIFrob (which cares only about a well-defined interface to the frobnicator). After the GUIFrob has gotten pretty big (and I've spent a couple of months on it without looking at the back end), if I need to go make changes to the frobnicator it'll take me a little time to get back up to speed on its code.



    But if I can code-and-forget--which I do a fair amount over time, though not as much day-to-day--then I can maintain several projects over my LOC.Max (a.k.a. RAM) size. It's only if I have to actively look at all that code at once that I start thrashing. Spending 3 months working on project A and then 3 months on B is a lot easier than trying to fix a bug in A, then 2 days later something deep in B, then back to A for a feature request, etc.



    In other words, once you've got your head full of something, there's a pretty high context switching overhead to working on another task. You can improve your total productivity (bandwidth) close to what it would be on just one project by batching requests to various projects, at the expense of higher latency in responding to a particular request.
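    The batching trade-off can be sketched as a toy model (entirely illustrative; the switch cost and work times here are invented, not measured):

```python
# Toy model of context switching: every switch between projects costs a
# fixed "reload" overhead before any useful work happens. The numbers
# are made up purely for illustration.

SWITCH_COST = 4   # hours to page a codebase back into your head
TASK_WORK = 2     # hours of actual work per request

def total_hours(requests):
    """Sum the work, plus a switch cost each time the project changes."""
    hours = 0
    current = None
    for project in requests:
        if project != current:
            hours += SWITCH_COST
            current = project
        hours += TASK_WORK
    return hours

interleaved = ["A", "B", "A", "B", "A", "B"]  # ping-ponging between projects
batched = ["A", "A", "A", "B", "B", "B"]      # same six requests, batched

print(total_hours(interleaved))  # 36 hours: six switches
print(total_hours(batched))      # 20 hours: only two switches
```

    Same twelve hours of real work either way; batching just pays the reload cost twice instead of six times, at the price of request "B" waiting longer for a response.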



    It doesn't even have to be separated on project boundaries; if I've finished the HTTP transport layer in my web browser, I can use that functionality but ignore the code behind it, and that code gets swapped out of my RAM.



    A corollary is probably that the complexity of the interfaces to the libraries/systems you're using occupies some of this RAM too. But similarities can reduce this (trivially, once you've seen that zlib offers a gzFile, gzopen, gzread, gzwrite, gzprintf, etc., you don't need to expend much thought on how to use it if you already know stdio), which is why designing good interfaces and interfaces that are "intuitive"--especially in the sense of "works like everything else works"--is such an important part of programming.
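    Python's standard library makes the same design choice the commenter praises in zlib: gzip.open deliberately mirrors the built-in open, so knowing one means knowing the other. A minimal demonstration:

```python
import gzip
import os
import tempfile

# gzip.open follows the same idiom as the built-in open(): same mode
# strings, same file-like object with read()/write(), same use as a
# context manager. The interface costs almost nothing to "page in".
path = os.path.join(tempfile.mkdtemp(), "note.txt.gz")

with gzip.open(path, "wt") as f:   # just like open(path, "wt")
    f.write("interfaces that work like everything else are cheap to learn\n")

with gzip.open(path, "rt") as f:   # just like open(path, "rt")
    print(f.read())
```

    The only new fact to remember is the module name; everything else transfers from stdio-style file handling you already know.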



    There's even CPU cache, as every programmer knows: once you get "in the zone" on a given day, the code you're working on is all in short-term memory and you can easily add/change it. Take some time off and it takes you a little while to get back in the zone. At least for me, if I can program in large blocks of time I'm a lot more productive than if I have an hour to code, then a meeting, then another hour, then lunch, etc.



    Sumner
