Contents tagged with English Postings

  • What's in a Book?

    As I read Kevin Kelly's "Fate of the Book" I come to wonder what the debate he's referring to is actually about. Is it about form or content? Is it about texts as opposed to video or audio? Is it about texts of a certain minimum length and/or structure as opposed to text snippets? Or is it about a certain physical container for texts as opposed to digital texts? Or is it about certain types of physical containers?


  • Code instrumentation with TraceSource - My personal vade mecum

    When writing more complex code that you cannot really step through during debugging, it's helpful to stud it with statements tracing the execution flow. The .NET Framework provides the System.Diagnostics namespace for this purpose. But whenever I just quickly wanted to use it, it turned out to be a hassle to get the tracing running properly. That's why I wrote down the following, to make it easier next time.
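    A minimal sketch of such instrumentation with TraceSource (the source name "MyApp" and the messages are placeholders of mine; configuration is done in code for brevity, though it normally lives in the <system.diagnostics> section of App.config):

    ```csharp
    using System;
    using System.Diagnostics;

    public class Program
    {
        // "MyApp" is an arbitrary source name; it is also the key under
        // which this source can be configured in App.config.
        private static readonly TraceSource Trace = new TraceSource("MyApp");

        public static void Main()
        {
            // Without a switch and a listener, nothing is emitted - this
            // is the part that is easy to forget.
            Trace.Switch = new SourceSwitch("MyApp") { Level = SourceLevels.All };
            Trace.Listeners.Add(new ConsoleTraceListener());

            Trace.TraceEvent(TraceEventType.Start, 0, "Processing begins");
            Trace.TraceInformation("Step 1 done");
            Trace.TraceEvent(TraceEventType.Stop, 0, "Processing ends");
            Trace.Flush();
        }
    }
    ```

    The SourceSwitch level acts as the filter; setting it to SourceLevels.All during development and lowering it in production is a common pattern.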


  • Inversion of Control Using Generics - Revisiting the Separation of Use and Implementation

    In his famous article "Inversion of Control Containers and the Dependency Injection pattern", Martin Fowler has compiled a number of ways to dynamically bind a client to a service. I would now like to add two points to the discussion: firstly a distinction regarding what is injected, and secondly a new pattern for injection based on generics. During my discussion I'll use the same sample scenario as Martin to make it easy to see what I'm trying to add. Here's a quick recap using C#.
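    For orientation, Fowler's sample scenario is a MovieLister that depends on a MovieFinder service. The following is my own minimal C# sketch of that separation, plus one conceivable generics-based binder (a static generic holder per contract); it is an illustration of the idea, not the exact code from either article:

    ```csharp
    using System;
    using System.Collections.Generic;

    // The contract the client uses.
    public interface IMovieFinder
    {
        IList<string> FindAll();
    }

    // One hypothetical implementation; Fowler's article reads the movies
    // from a colon-delimited file instead.
    public class InMemoryMovieFinder : IMovieFinder
    {
        public IList<string> FindAll()
        {
            return new List<string> { "Once Upon a Time in the West", "Star Wars" };
        }
    }

    // Generics-based binding: one static holder type per contract.
    // Service<IMovieFinder> is a distinct type from Service<IOtherContract>,
    // so each contract gets its own slot without any lookup dictionary.
    public static class Service<TContract>
    {
        public static TContract Instance;
    }

    // The client depends only on the contract and resolves it via the
    // generic holder instead of newing up a concrete class.
    public class MovieLister
    {
        public int Count()
        {
            return Service<IMovieFinder>.Instance.FindAll().Count;
        }
    }
    ```

    At startup some assembler code sets Service<IMovieFinder>.Instance to a concrete finder; the client never names the implementation type.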


  • Easy .NET architecture analysis from within VS2005 - Visualizing your code dependencies with a Dependency Structure Matrix

    Constant software quality assessment is probably deemed important by almost all developers. But it's a long way from agreeing with something, or preaching it, to actually living it in your projects. Good tools are often needed to make theory easy to put into practice. NUnit, NMock, NAnt and ReSharper are examples of such tools, making testing, building and refining your code easier. And NDepend could be such a tool for quality assessment too - but at least in my view it tries to do too much to be easy to adopt under the pressure of a tight project schedule.


  • .NET naked - More pictures, some clarifications

    I seem to have stumbled upon something here with my look under the hood of the .NET Framework and other tools. Many readers were surprised and fascinated by what you can actually do with quality assessment tools like Lattix LDM, or with a simple concept like the DSM (Dependency Structure Matrix) and its easy to understand, scalable depiction of the dependencies within a piece of software.


  • .NET naked - See these hitherto unpublished pictures of the .NET Framework architecture

    Have you ever thought about the quality of your code? Well, I bet. Have you ever strived for a sound architecture of your software solutions? Well, I hope. Do you have a process in place to constantly monitor the quality of your software's architecture and code? Well, you should. But not just you: every single software team should. Planning for quality and constant quality monitoring should be fundamental activities in any software project.


  • Dynamic component binding made easier - An easy to use Microkernel to help reap Contract-First-Design benefits in .NET programs

    If you like, you can download Ralf's Microkernel here. It's written in C# and comes with a couple of unit tests (NUnit style). Feel free to use the binary in your projects and play around with the source. Although it's not much code, I hope you will be able to gain quite a bit from its essential Microkernel functions. Here is how it works...
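    The download contains the actual implementation. As a rough illustration of the idea only (this is my sketch; the real Microkernel's API will differ), a microkernel maps contract interfaces to implementation types and instantiates implementations on demand, so clients only ever name the contract:

    ```csharp
    using System;
    using System.Collections.Generic;

    public class Microkernel
    {
        // Maps a contract (interface) type to a concrete implementation type.
        private readonly Dictionary<Type, Type> registry = new Dictionary<Type, Type>();

        // The constraints guarantee at compile time that the implementation
        // fulfills the contract and can be default-constructed.
        public void Register<TContract, TImpl>() where TImpl : TContract, new()
        {
            registry[typeof(TContract)] = typeof(TImpl);
        }

        // Clients ask for the contract; the kernel picks the implementation.
        public TContract CreateInstance<TContract>()
        {
            return (TContract)Activator.CreateInstance(registry[typeof(TContract)]);
        }
    }
    ```

    Because binding happens through the registry, swapping an implementation touches only the registration, not the client code - the essence of Contract-First-Design.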


  • Software as an autopoietic entity - or: Survival of the fittest

    What's software anyway? Code, software is code, right? Well, I'd say software is more than that. And realizing that is important for the whole software production process.

    Let's step back and take in the big picture: Software is an entity "sitting" in an environment; you could even say it's living in an environment, because it's struggling to survive. Software comes into existence and from then on tries to be liked by users and customers, to be kept alive by them through usage. As long as it is used, it lives. As soon as users or customers start favoring another software for whatever reason, it runs the risk of being dumped, i.e. of being killed. So I guess we can say software has to be fit to survive.

    But what does "fitness" mean? It's the right balance between efficiency and flexibility. If an organism strives only for efficiency, it can easily be killed when the environment changes. If, on the other hand, it strives only for flexibility, it can be killed because it's not fast enough or its armor is too weak, meaning it's lacking efficiency with regard to a particular survival-relevant aspect.

    During a single lifetime, an organism's efficiency-flexibility balance is pretty much set. However, it's highly variable over the lifetime of a whole species. But how does a species attain this variability, how can it adapt? By constantly re-creating itself as a continuous stream of new "versions". A species thus survives through autopoiesis.

    Back to software: If software is struggling to survive like any organism, can't we then also view it as an organism? What about MS-Word, OpenOffice, WordPerfect and Wordstar as species, with particular versions as instances or organisms? Well, as the extinction of Wordstar and the rivalry between MS-Word and OpenOffice show, it's a jungle out there, and not every software or software species survives indefinitely. The better is always the enemy of the good.

    But how could software be capable of autopoiesis? If software equals code, then there is no self-re-creation. Software does not modify itself; it cannot evolve itself. (Let's leave genetic programming aside for the moment ;-)

    So what's wrong? I'd say our picture of software=code is wrong. Software is not just code. There's more to software than code, otherwise there would be no way for it to evolve.

    I think we need to switch from software=code to software=code+developer. We need to take the developer (or development team) into the picture to arrive at autopoietic systems which can evolve in the face of an ever hostile environment.

    The whole, software, consists of two inseparable parts. And only if the whole finds a balance between efficiency and flexibility will it survive.

    That means fiddling with properties and qualities of either part always affects the whole. That means, whenever we talk about functional or non-functional requirements, the whole has to be taken into account. How does tweaking performance affect the whole's ability to survive? How does neglecting architecture affect the whole's ability to survive? How does testing, documentation, GUI design, code readability etc. affect the whole's ability to survive?

    High performance and scalability are efficiency concerns; good design and coding standards, on the other hand, support flexibility. Tight schedules: efficiency. Collective code ownership: flexibility. O/R mapper: flexibility. Socket programming: efficiency.

    My conclusion from this new view on software: When planning a software project, we need to step back more often and take into account not only the code but the union of code and developer. We need to take the whole of software into account when talking about optimizations, schedules, design, documentation, project organization etc.

    Software is more than code!


  • Freeing Data From the Silos - A Relationistic Approach to Information Processing

    Current data processing is suffering from the bane of the many data silos. Data is locked up in a hierarchy of containers and can hardly be connected in new ways. Once data has been grouped into relational database tables or object oriented classes, it’s difficult to regroup it or set up new relations. Regrouping would either break existing code or mean duplication of data. New relations would entail schema changes and be limited to connections between containers on only a few levels of abstraction. Products like Microsoft’s WinFS, dynamic languages and database refactoring tools are trying to overcome this lamentable state of data processing while having their own perspective on it.


    This, of course, does not mean there is no value in current concepts and technologies any more. Relational databases and object oriented languages are very useful and will continue to be so for a long time. However, the general need to go beyond them to solve the data silo problem should be obvious.

    At the heart of the data silo problem are the basic and mostly unquestioned concepts of data container and container reference.

    For those of you who followed my musings on associative information processing, or on processing data by getting rid of data, here's a new stab at explaining a completely different basic way of dealing with data. The above is an excerpt from a new paper I wrote on a concept called Pile, which I find quite interesting (although it is still in its infancy).
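    To make the contrast with container-based storage concrete, here is a toy C# sketch in that spirit: every item is just a relation between two handles, raw values exist only at the leaves, and a new grouping is simply a new relation over existing handles, so nothing has to be moved between containers. This is my own illustration of the general idea, not Pile's actual engine:

    ```csharp
    using System;
    using System.Collections.Generic;

    public class RelationStore
    {
        // Leaf handles carry the only raw values in the system.
        private readonly Dictionary<int, string> leaves = new Dictionary<int, string>();
        // Every other handle is nothing but a connection of two handles.
        private readonly Dictionary<int, KeyValuePair<int, int>> relations =
            new Dictionary<int, KeyValuePair<int, int>>();
        private int nextHandle;

        public int Leaf(string value)
        {
            leaves[++nextHandle] = value;
            return nextHandle;
        }

        // Relating two handles creates a new handle, which can itself
        // be related further - relations all the way up.
        public int Relate(int from, int to)
        {
            relations[++nextHandle] = new KeyValuePair<int, int>(from, to);
            return nextHandle;
        }

        // "Regrouping" is a query, not a schema change: just follow
        // every relation that involves a given handle.
        public IEnumerable<int> RelationsInvolving(int handle)
        {
            foreach (KeyValuePair<int, KeyValuePair<int, int>> entry in relations)
                if (entry.Value.Key == handle || entry.Value.Value == handle)
                    yield return entry.Key;
        }

        public string ValueOf(int handle)
        {
            return leaves[handle];
        }
    }
    ```

    Since connections are first-class and values are marginal, adding a new relation never breaks an existing grouping and never duplicates data.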