# Correctness provability should be the goal, not testability

(edited. s/proof/prove/gc and some sentences which were 'out of whack' have been corrected as well)

Phil Haack today posted an article titled *Writing Testable Code Is About Managing Complexity*. He says:

> Of course, when I'm done un-bunching my knickers, I realize that despite all the benefits of unit testable code, the real benefit of testable code is how it helps handle software development's biggest problem since time immemorial: managing complexity.
>
> There are two ways that testable code helps manage complexity.
>
> 1. It directly helps manage complexity, assuming that you not only write testable code, but also write the unit tests to go along with it. With decent code coverage, you now have a nice suite of regression tests, which helps manage complexity by alerting you to potential bugs introduced during code maintenance in a large project long before they become a problem in production.
>
> 2. It indirectly helps manage complexity because in order to write testable code, you have to employ the principle of separation of concerns to really write testable code.

I find that a little backwards. It's not that by focusing on testability you get separation of concerns; it's that by applying the principle of separation of concerns you get code that's easier to test as a bonus. For people who think that's the same thing: it's not. Getting easy testability as a bonus means that testability is not the reason you set up your architecture around the principle of separation of concerns; it's a side-effect. The main goal was separation of concerns, so that you can prove your code is correct.

Scientific research has shown that in the end, separation of concerns is inevitable for building solid software which is correct. (Peri Tarr, Harold Ossher, William Harrison, and Stanley M. Sutton Jr. N degrees of separation: multi-dimensional separation of concerns. In Proceedings of the 21st International Conference on Software Engineering, pages 107-119. IEEE Computer Society Press, 1999. (PDF)) Who said Computer Science research is out of touch with reality, eh?

When focusing on testability, one can fall into the trap of believing that the tests prove that your code is correct. That's not true: the tests only prove that your code is correct within the solution space defined by those tests. Take for example this simple routine:

```csharp
for(int i = 0; i < 100; i++)
{
    if(SomeMethod())
    {
        // code
    }
    else
    {
        // code
    }
}
```

This routine has 2^100 possible code paths. It's practically impossible to cover them all with unit tests in any reasonable amount of time. To say this code is correct, one has to *prove* it correct; a set of tests won't cut it. Code can only be proven correct if it is predictable (deterministic). Dijkstra's famous rant against 'goto' statements is largely based on this: a goto can create unpredictable, and therefore non-provable, code.
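To make the gap between "tests pass" and "code is correct" concrete, here is a small sketch (in Python rather than C#, with deliberately contrived, illustrative names) of a routine whose bug hides on exactly one path out of 2^100, so any practical test suite is overwhelmingly likely to miss it:

```python
def process(flags):
    """Count the True flags. A deliberate bug fires only when all 100
    flags are True -- a single path out of the 2**100 possible ones."""
    total = 0
    for f in flags:
        if f:
            total += 1
    if len(flags) == 100 and all(flags):
        total += 1  # the hidden bug: off-by-one on exactly one path
    return total

# A 'decent' test suite that samples a handful of the 2**100 paths:
assert process([True] * 50 + [False] * 50) == 50
assert process([False] * 100) == 0
# Both sampled paths pass, yet the all-True path silently returns 101.
```

The tests above prove correctness only within the solution space they define; the one path they don't sample is exactly where the defect lives.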

**Proving code correctness**

Proving code to be correct isn't easy, but it should be your main focus when writing solid software. Your first step should be to prove that your *algorithms* are correct. If an algorithm fails to be correct, you can save yourself the trouble of typing out its executable form (the code representing the algorithm), as it will never result in solid, correct software.
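As an illustration of proving the algorithm first, consider Euclid's gcd algorithm: its correctness rests on the invariant gcd(a, b) = gcd(b, a mod b), which you can prove on paper before writing a single line of code. A minimal sketch (in Python; the runtime assert merely re-checks the invariant that was already proven on paper):

```python
import math

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm; correct because gcd(a, b) == gcd(b, a % b)."""
    while b != 0:
        # Invariant, proven on paper before coding: the gcd of the
        # pair never changes across an iteration.
        assert math.gcd(a, b) == math.gcd(b, a % b)
        a, b = b, a % b
    # When b == 0, gcd(a, 0) == a, so a is the answer.
    return a
```

The proof belongs to the algorithm, not to the code; the code merely has to be shown to be a faithful projection of it.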

After you have proven your algorithms correct, your second step should be to implement them in executable form, taking extra care with the projection you apply while doing so: you should write source code which reflects the algorithm(s) you designed and have proven to be correct. Writing source code is making a *projection* of an algorithm onto an executable form: the source code *is* the algorithm, just in another form, the executable form. This is essential knowledge, as you can use it to prove that the code you wrote is indeed the projection of the algorithm into executable form. Typically this is done using pre/post conditions, for example checking that the code indeed traverses from state n to state n+1 if event e occurs. One way to do this is to write other code which tests your code's implementation of the algorithm; however, that effectively creates an infinite regress: to prove that your test is correct, you either have to prove it with conventional tools or with yet another test.
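The pre/post-condition checking of state transitions described above could be sketched like this (in Python, with a hypothetical two-state machine; the transition table stands in for the proven algorithm, and the function is its executable projection):

```python
# The proven algorithm, written down as a transition table: from state n,
# event e must lead to exactly one prescribed next state.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def apply_event(state: str, event: str) -> str:
    # Precondition: the algorithm defines this (state, event) pair.
    assert (state, event) in TRANSITIONS, f"undefined transition {state}/{event}"
    next_state = TRANSITIONS[(state, event)]
    # Postcondition: the code landed in exactly the state the
    # algorithm prescribes for (state, event).
    assert next_state == TRANSITIONS[(state, event)]
    return next_state
```

The conditions don't prove the algorithm; they check that the code's behavior matches the algorithm that was already proven.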

By applying the principle of separation of concerns, one will quickly learn that proving code correctness becomes easier: each algorithm is projected only once, in isolated form, so the end result is deterministic and therefore a candidate for being proven correct.
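For example (a sketch in Python, names illustrative): the same calculation entangled with I/O depends on external state and is hard to reason about, while the version isolated by separation of concerns is a pure, deterministic projection of the algorithm, and thus a candidate for a proof:

```python
def discounted_total_entangled():
    # Concern mixing: parsing, I/O and the algorithm in one routine.
    # Its result depends on external state (user input), so the same
    # call can yield different outcomes -- not provable in isolation.
    prices = [float(p) for p in input("prices: ").split()]
    return sum(prices) * 0.9

def discounted_total(prices, discount):
    # The algorithm in isolation: same input, same output, always.
    # This pure form is the one you can prove correct once.
    return sum(prices) * discount
```

The entangled version can only be sampled by tests; the isolated version can be reasoned about as a whole.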

There's another benefit of taking these steps and sticking to them no matter what: debugging an application becomes easy. As you've already proven that the algorithms are correct, you only have to confirm that the code you wrote is indeed the algorithm in executable form. That doesn't even require a debugger: you can do it in the editor, on a whiteboard, on a sheet of paper or whatever floats your boat. If it gets complex, you can fall back to the debugger, but you know what to look for: you *know* what to expect, as you know the algorithm.

Testing your code can be a way to prove that it obeys a given set of standards and does things as expected. It's a *tool* to be used to reach the *goal* called correct software; it's not a goal on its own.