May 2005 - Posts
"Softwareentwicklung geschieht zunehmend in einem Spannungsfeld zwischen Zeitdruck und Technologieflut. Um unter diesen Umständen verlässlich höhere Qualität zu produzieren, müssen sich Menschen und Prozesse verändern."
Den kompletten Artikel finden Sie hier...
Das Thema ist in der Linie meiner Argumentation von "On the Future of Software Development".
Software Cells started out as something concrete and tried to make small to medium scale software architecture more tangible. Software Cells thus were a bottom-up approach.
But as it turns out, I now also find them useful on a more conceptual and general level. They have become a top-down model for describing/building software and have the tendency to become more systematic on several levels. I realized this when I read about Arthur Koestler´s Holons the other day (see here for an introduction: http://www.worldtrans.org/essay/holarchies.html). The picture of a Holarchy - a system of several levels of nested Holons - immediately reminded me of my picture of the Software Universe (see picture at end of posting).
In Koestler´s terminology Software Cells and Solutions, but also Assemblies, are Holons. Each kind of Holon on a certain level consists of Holons of the next lower level and at the same time is part of a Holon on the next higher level. The Software Cells model thus defines its own Holarchy with 7 levels, from the smallest Holons to the largest ones:
(As you might have noticed, I neglected the level of statements, which was part of the Software Universe. I think that´s ok, since statements themselves are predefined by a programming language. Their "shape" and interface are not under our control. But from Methods upwards in the Holarchy we are in control. Methods are the lowest level Holons we can "compose".)
To me, rooting Software Cells in the Holarchy concept is valuable since it brings more systematic thinking to them. And more systematic thinking is always good. It´s a step back from the concrete and brings the context into view. It allows you to reason about things you know less about by carrying forward knowledge. It allows you to find patterns and see developments which are broader than the particular "thing" you usually focus on. A systematic approach also lets you find categories of things which raise the level of abstraction. Reasoning about a few categories is easier than reasoning about a million different things falling into those categories.
Let me try to apply a little more systematic thinking to the Software Cells Holarchy. My first point of view is the structure of Holons.
I´d say each Holon (from Method to Society) can be viewed as a 2-tuple (pair) consisting of (interface, implementation). That´s very easy to see if you look at the lowest levels of the Holarchy, Methods and Types. Methods have a signature as an interface and a body as an implementation. Types have public methods and fields as their interface and the method bodies as their implementation. An Assembly´s interface is its public types, and all types with their implementations make up its implementation.
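Here is a minimal C# sketch of the (interface, implementation) pair on the two lowest levels; the class and its members are made up purely for illustration:

// Method-level Holon: the signature is the interface,
// the body is the implementation.
public class Calculator
{
    public int Add(int a, int b)   // interface: name, parameters, return type
    {
        return a + b;              // implementation
    }

    private void Log(int result)   // private: implementation detail only
    {
    }
}
// Type-level Holon: Calculator's interface is its public members (Add);
// its implementation is all method bodies, including the private Log.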
With this generalization in place, we suddenly see a difference in how concrete interfaces are on the different levels. Implementations are always easy to identify, but the interfaces as the (syntactic) contracts for interacting with the implementation get less tangible the higher you get in the Holarchy. A Method clearly ties the interface to the implementation. Even an Assembly today ties together interface and implementation. In fact, it was an important goal for Microsoft to get rid of the separation of metadata (interface) from implementation. But what about Components, Applications, and Solutions? Component metadata is distributed across several Assemblies. The same is true for Applications, and it´s worse for Solutions. There is no "one location" to find the metadata for a Component, or Application, or Solution.
And I think that´s bad for the same reason it would be bad to separate a Method´s signature from its body. Systematic thinking thus leads us to the proposal to define a means for assembling the interface of a Holon on the levels from Component up to Society. For a Component this seems pretty straightforward: just put all public CTS interfaces (which are implemented by the Types distributed across the Component´s Assemblies) into a single Assembly. But what to do for an Application or Solution? I don´t know. Maybe a single large WSDL/XML Schema document would do? (Which could be retrieved from a single URI.)
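For the Component case, here is a minimal sketch of what such a single contract Assembly could look like; the namespace, interfaces, and build command are hypothetical, chosen just to illustrate the idea:

namespace MyComponent.Contracts
{
    // All public CTS interfaces of the Component, gathered in one place,
    // implemented by Types distributed across the Component's Assemblies.
    public interface IOrderProcessing
    {
        void PlaceOrder(string customerId, string productId, int quantity);
    }

    public interface IOrderQuery
    {
        decimal GetOrderTotal(string orderId);
    }
}

// Compiled separately, e.g.:
//   csc /target:library /out:MyComponent.Contracts.dll Contracts.cs
// Implementation Assemblies and clients both reference only this Assembly,
// so the Component's "metadata" finally has one location.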
This brings me to another view on the Holarchy: Control Flow between Holons. Holons can be split into two categories:
- Holons communicating via the stack.
- Holons communicating via streams.
This view is tightly coupled with another one: Holons can also be split into two categories regarding their "tangibility". There are physical Holons and virtual Holons. Physical Holons are those that have clear physical "boundaries": Method, Type, Assembly, Component, Application. I can point at them and say "These lines of code make up the abc Method." or "This file is the def Assembly Holon." or "This process is the xyz Application Holon."
But where are the physical boundaries of a Solution or Society? I find it hard to point at "something" and say "This is the Solution Holon."
The linchpin of physical vs virtual Holons is the process and thus the Application. From Applications upward, Holons are virtual - and communication switches from stack to stream.
But what does it mean when I say "communication is stack/stream based"? It means a stack/stream is the medium used to communicate between Holons on the same (!) level of the Holarchy:
With this picture we can now understand maybe a little better why there is so much hype around technologies like Web services or Indigo. They are, so to speak, "growing pains" or "symptoms" of software development moving up the Holarchy. Most software developers have until now lived in a world of single-Application software. They thus needed to be concerned only with stack based RPC-style communication. And when they wet their feet in multi-Application software (or simply Solutions), Microsoft tried to make it easy for them with DCOM and MTS/COM+. But nevertheless they moved from the old stack world to the new stream world, where messages need to be exchanged for communication.
However, streaming communication (across memory spaces) is very different from stack communication (within a single memory space). And not all the differences could be glossed over by DCOM et al. Neither can .NET Remoting gloss over all differences, nor can Web services (in their original form back in 2000). So what the Software Cell Holarchy allows us to see is the break or transition in the levels of the hierarchy with respect to communication. (Please understand me correctly, I´m not claiming I´ve discovered something new here. I just claim that the Software Cell Holarchy helps a great deal to make visible what exists and to reason about it.)
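To make the stack/stream distinction concrete, here is a small C# sketch of my own; a MemoryStream stands in for a real network connection, and the message format is invented:

using System;
using System.IO;

public class Adder { public int Add(int a, int b) { return a + b; } }

public class StackVsStream
{
    public static void Main()
    {
        // Stack based: arguments and result travel on the call stack,
        // within a single memory space.
        Adder adder = new Adder();
        int sum = adder.Add(2, 3);
        Console.WriteLine(sum);

        // Stream based: the request is serialized into a message and
        // written to a stream; the reply comes back the same way.
        MemoryStream channel = new MemoryStream();
        StreamWriter writer = new StreamWriter(channel);
        writer.WriteLine("<add><a>2</a><b>3</b></add>");
        writer.Flush();
        // ...on the other side, a reader would parse the message,
        // invoke the logic, and write a response message back.
    }
}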
Finally, let me assume another viewpoint: The history of software development can be seen as a continuous rise in abstraction. A type is more abstract than a set of methods. A component is more abstract than a set of types. Etc. Fundamental for the ever higher levels of abstraction are the interfaces of the Holons on each level. The interfaces hide the details of the implementation. Now, since succinctly describing a service using an interface is so important for abstraction, and Holons are pairs containing an interface while at the same time being "containers" for lower level Holons, the history of software development is also an evolution of aggregation and containers.
A Type is a container for Methods, and OOP brought Types about. So OOP stands at the beginning of the Software Cells Holarchy. Then came DLLs as containers for Types. Strangely, Components lack a container, but Applications (or Hosts) are containers for Components. (Maybe .msi files could at least be the deployment containers for Components?)
With each new container we were able to assemble our software from more coarse grained parts and on a higher level of abstraction. The end of this development so far, though, seems to be Components. 3rd party software like a WinForms data grid or an O/R Mapping engine are Components. Since the days of the old Visual Basic VBX components, a flourishing market has come into existence where prefabricated Components are sold. Building true Components within a project, though, is not yet equally popular.
But what will be the next step in this evolution? Clearly it is to manufacture whole Applications and then Solutions and sell them as prefab parts for new software. However, what makes this more difficult than stepping up from Type to Assembly is that Applications are on the stream based communication level of the Holarchy. That means the whole (or Holon) assembled from different Applications is not contained within one process (memory space) and is not even necessarily running within the same network or on the same platform. The assumptions a prefab Application can make about the whole are far fewer than those a Type can make about its Assembly. A Host as the ultimate container for Holons from Component down to Method is a very stable and straightforward environment. Heterogeneous networks, though, are a completely different environment. There are simply so many more "moving parts" and parameters to account for.
And that leads me to a prediction: Given that true Component based programming has not taken off in most software shops until today, the next level of software modularization, where whole Applications are the components, will take at least the same time to materialize. For me that means: Despite all the SOA hype, we´re another 5 years away from a broad and natural acceptance of Software Solutions. It took from ca. 1992 to 2005 to get developers ready for "ordinary" Components. The technologies are in place; IoC/Microkernel infrastructures allow you to stitch together your Applications from Components/Assemblies very easily. Eclipse is maybe the best example of how successful this approach ultimately can be.
If you see 2000 and Web services as the starting point for the evolution onto the next level, and it will take developers almost the same time to solidly understand and broadly use the new concept of Applications as components, then we won´t see widespread acceptance before 2010.
But sure, this is speculation. We´ll see what happens.
The trend towards more and larger prefab parts is unbroken in the non-IT world, as can be seen here: http://www.modularesbauen.com/berichte/Das_Haus_vom_%20Band.pdf. This document describes a concept for the family home of the future: an affordable house (ca. 50,000 EUR) composed of prefab modules. Likewise, I guess, we´ll see more coarse grained prefab parts in the IT and software world. And the Software Cells Holarchy shows us on which levels that´s going to happen and how they relate to existing concepts. Thus I think moving Software Cells a little more in the direction of a systematic approach is valuable.
"Am Anfang war der Vertrag: Contract First Design und Microkernel-Frameworks", dotnetpro 6/05
"Sandbox: Ein Arbeitsleben lang", dotnetpro 6/05: Über die Notwendigkeit zum lebenslangen Lernen in der Softwareentwicklung
"dotnetpro.tv #6: Mobile Computing", dotnetpro 6/05: Video Interview in der dotnetpro.tv Serie mit Frank Prengel zum Thema .NET Compact Framework
"Spezialisierung tut Not", S. 34 in IT-Freelancer Mai 2005: Zur Notwendigkeit der Spezialisierung der Kenntnisse und Fertigkeiten für Softwareentwickler. Ein Artikel in der Linie meiner Argumentation von "On the Future of Software Development".
A (yet incomplete) list of my publications until April/May 2005. All other publications will be listet as individual posting.
dotnetpro.tv #5: "BizTalk", expert: Christof Sprenger, dotnetpro 5/05
BizTalk in dotnetpro? What´s that about? You want to read about programming in dotnetpro, not about application integration. After all, the abbreviation HST (Hooking Stuff Together) was coined for BizTalk Server a while ago. Maybe you´ve also heard that BizTalk Server can process XML. Fine, the .NET Framework can do that, too. On the other hand, there are voices presenting BizTalk Server as an application server. So how does all that fit into dotnetpro?
"O/R-Mapping in verteilten Anwendungen", dotnetpro 5/05
O/R mapping promises a strong reduction in code for many typical scenarios of accessing relational databases. But productivity and performance are not the only criteria for choosing an O/R mapper. It should also support the architecture of your application. Using Versant Open Access as an example, dotnetpro shows how O/R mapping can work even in distributed applications.
"Sandbox: Panic Now!", dotnetpro 5/05
„Eine mehrschichtige Anwendung entwickeln“, dotnetpro 4/05
dotnetpro explains how to find the right architecture for a piece of software. In the first part of this article, in the previous issue, you learned how to work out the basic structure of a sample application. Now it´s time for the implementation.
dotnetpro.tv #4: „Flash und ASP.NET“, expert: Christian Wenz, dotnetpro 4/05
„Sandbox: Schlag nach bei Knigge“, dotnetpro 4/05
[to be updated]
2003
- Softwareentwicklung für den Applikationsserver des .NET Frameworks, Teil 1; OBJEKTspektrum, 2003
- VS.NET 2003 Legacy Platform Extensions (April Fools' article); MSDN Online, 2003
- Seitenvariablen automatisch im Viewstate speichern; ASP.NET Professional, 2003
- Deklarative Programmierung durch erweiterbare Metadaten; OBJEKTspektrum, 2003
- MSDN Chat - Objektorientierte Programmierung mit Visual Basic .NET; MSDN Online, 2003
- Datenbankprogrammierung mit dem .NET Framework, Teil 2; OBJEKTspektrum, 2003

2002
- Datenbankprogrammierung mit dem .NET Framework, Teil 1; OBJEKTspektrum, 2002
- Kürbisgesichter mit ASP.NET schnitzen; MSDN Online, 2002
- ADO.NET Datenbankprogrammierung (sample chapter); Addison-Wesley, 2002
- Das .NET Framework als nahtlose Komponentenplattform, Teil 2; OBJEKTspektrum, 2002
- Das .NET Framework als nahtlose Komponentenplattform, Teil 1; OBJEKTspektrum, 2002
- Geschachtelte Daten darstellen - Teil 2; MSDN Online Expertenforum, 2002
- Geschachtelte Daten darstellen - Teil 1; MSDN Online Expertenforum, 2002
- Exzellenter Start von Visual Studio .NET in Deutschland; MSDN Online, 2002
- Vom Segen der Regelmäßigkeit; MSDN Online Expertenforum, 2002

2001
- Und VB .NET hat doch einen Sinn; MSDN Online Kolumne, issue 12/2001
- Implementing XmlReader Classes for Non-XML Data Structures and Formats; MSDN Online (Corp.), 2001
- Stabile .NET-Programme durch automatisches Speichermanagement; MSDN Online Kolumne, issue 10/2001
- Code organisieren in .NET-Programmen; MSDN Online Kolumne, issue 9/2001
- Fast wie im richtigen Leben - Teil 7 / Entwicklung einer Beispielanwendung Schritt für Schritt; BasicPro, issue 4/2001, pp. 57-63
- Intermediate Language? Wofür?; MSDN Online Kolumne, issue 7/2001
- Paradigmenwechsel mit ADO.NET; MSDN Online Kolumne, issue 6/2001
- Die Qual der Wahl der Sprache; MSDN Online Kolumne, issue 4/2001
- Bist du es, Visual Basic? - Teil 4 / Ein erster Blick auf Veränderungen an VB, die VB.NET bringen wird; BasicPro, issue 3/2001, pp. 6-15
- Fast wie im richtigen Leben - Teil 6 / Entwicklung einer Beispielanwendung Schritt für Schritt; BasicPro, issue 3/2001, pp. 73-81
- Einsteigen in die Entwicklung mit dem .NET Framework; MSDN Online Kolumne, issue 3/2001
- Microsoft .NET ein Überblick; MSDN Online Kolumne, issue 2/2001
- Bist du es, Visual Basic? - Teil 3 / Ein erster Blick auf Veränderungen an VB, die VB.NET bringen wird; BasicPro, issue 2/2001, pp. 22-27
- Fast wie im richtigen Leben - Teil 5 / Entwicklung einer Beispielanwendung Schritt für Schritt; BasicPro, issue 2/2001, pp. 81-91
- VB.NET: Ein großer Schritt, nicht nur in Richtung OOP; OBJEKTspektrum, issue 1/2001, pp. 80-85
- Bist du es, Visual Basic? - Teil 2 / Ein erster Blick auf Veränderungen an VB, die VB.NET bringen wird; BasicPro, issue 1/2001, pp. 6-16
- WebServices jetzt! / Existierende Dienstleistungen im Internet als Komponente kapseln; BasicPro, issue 1/2001, pp. 46-56
- Fast wie im richtigen Leben - Teil 4 / Entwicklung einer Beispielanwendung Schritt für Schritt; BasicPro, issue 1/2001, pp. 68-75

2000
- Bist du es, Visual Basic? - Teil 1 / Ein erster Blick auf Veränderungen an VB, die VB.NET bringen wird; BasicPro, issue 6/2000, pp. 6-12
- Fast wie im richtigen Leben - Teil 3 / Entwicklung einer Beispielanwendung Schritt für Schritt; BasicPro, issue 6/2000, pp. 54-64
- Fast wie im richtigen Leben - Teil 2 / Entwicklung einer Beispielanwendung Schritt für Schritt; BasicPro, issue 5/2000, pp. 84-92
- XML lesen und erzeugen; BasicPro, issue 4/2000, pp. 24-29
- 'Strong Tagging' als Ausweg aus der Interface-Versionshölle; OBJEKTspektrum, issue 4/2000, pp. 58-64
- Fast wie im richtigen Leben - Teil 1 / Entwicklung einer Beispielanwendung Schritt für Schritt; BasicPro, issue 4/2000
- Alles wird gut! / Microsofts Vision für die Softwareentwicklung der nächsten Jahre; BasicPro, issue 4/2000, pp. 6-10
- Ein Spiel programmieren / Tagebuch eines Profi-Programmierers; PC Magazin Spezial 18, issue "Visual Basic - Kreativ programmieren" 2000, pp. 84-87
- Asynchrone Methodenaufrufe / Multithreading einfach gemacht mit COM-Apartments; BasicPro, issue 3/2000, pp. 56-72
- Automatische Skriptausführung unter Outlook / Wie ILOVEYOU auch Gutes bewirken kann; BasicPro, issue 3/2000, pp. 84-94
- Blackbox hilft bei Rekonstruktion von Software-Abstürzen / Eine Annäherung an das "Debugging" mit BugTrapper; BasicPro, issue 2/2000, pp. 72-81
- Vorsicht vor Spaß am Testen! / Schon die Arbeit mit einem einfachen Test-Framework kann Sie süchtig nach Tests machen; BasicPro, issue 2/2000, pp. 23-34
- Visual Studio 7.0 - ein erster Blick / BasicPro im Gespräch mit Tony Goodhew und Chris Hargarten; BasicPro, issue 1/2000, pp. 6-9
- Im Strom der Ereignisse / Event-Bubbling in VB-Objektmodellen; BasicPro, issue 1/2000, pp. 24-30
- SQL XML Technology Preview / XML direkt aus SQL Server-Datenbanken erzeugen; BasicPro, issue 1/2000, pp. 59-70

1999
- Software automatisch aktualisieren / Laden Sie neue Versionen Ihrer Anwendungen oder Daten einfach aus dem Internet herunter; BasicPro, issue 6/1999 (WildSite), pp. 39-44
- Set... = Nothing oder nicht? / Die Zerstörung von Objekten unter die Lupe genommen; BasicPro, issue 6/1999, pp. 21-29
- COM-Interface Casting / Zugriff auf Interfaces von COM-Objekten aus Scriptsprachen und ohne spezielle Objektvariablen; BasicPro, issue 6/1999, pp. 60-69
- Sprachentwirrung mit VB / Internationale Software mit Hilfe von Ressourcendateien entwickeln; BasicPro, issue 5/1999, pp. 64-73
- Typelib-Konstanten auf der Spur / Die Werte von Enum-Konstanten in ihre Namen rückübersetzen; BasicPro, issue 5/1999, pp. 48-52
- Batchprogrammierung mit VBScript - Die nächste Runde / Überblick über die Version 2.0 des Windows Script Host; BasicPro, issue 5/1999, pp. 20-26
- Using XML for Object Persistence; xml.com, September 8, 1999
- Abenteuer WebClass-Projekt / Wie eine IIS-Applikation ziemlich daneben ging, dann aber doch noch alles gut wurde; BasicPro, issue 4/1999 (WildSite), pp. 10-21
- Wer sollte mehrschichtige Anwendungen entwickeln? / Nicht nur für Enterprise-Entwickler; BasicPro, issue 4/1999, pp. 6-15
- Building a Better Metasearch Engine; xml.com, June 8, 1999
- Programme leichter internationalisieren mit parametrierbaren Zeichenketten / Wie die C-Funktion sprintf() beim Übersetzen von Software hilft; BasicPro, issue 3/1999, pp. 72-76
- MSDE: Der SQL-Server für jedermann / Ein Schnellkurs für den Einstieg in die MSDE als Alternative zu MS Access-Datenbanken; BasicPro, issue 3/1999, pp. 54-62
- Das 1x1 der Rekursivität / Rekursive Probleme erkennen und mit rekursiven Algorithmen lösen; BasicPro, issue 3/1999, pp. 25-44
- Eindrücke von der TechEd 99-Konferenz / Was hat die größte je von Microsoft in Europa veranstaltete Konferenz gebracht?; BasicPro, issue 3/1999, pp. 6-7
- Objektorientierte Datenbanksysteme / Eine Einführung; BasicPro, issue 2/1999, pp. 36-44
- Objektassoziationen / Beziehungen in Objektnetzwerken verstehen und realisieren; BasicPro, issue 2/1999, pp. 6-21
- Persistente Objektnetzwerke / Ein simples ODBMS im Selbstbau; BasicPro, issue 2/1999, pp. 45-56
- Visual Basic 6.0 WebClasses / Internet-Applikationen mit VB entwickeln; BasicPro, issue 1/1999 (WildSite), pp. 16-26
- XML-Datenaustausch leicht gemacht / Vom Nutzen der IE5-XMLHttpRequest-Komponente; BasicPro, issue 1/1999 (WildSite), pp. 34-46
- COM im Internet / Mit den Remote Data Services (RDS) Recordsets und andere COM-Objekte über HTTP aufrufen; BasicPro, issue 1/1999 (WildSite), pp. 3-8

1998
- Recordsets in der Middletier-Schnittstelle einsetzen / Datenbankinhalte zwischen Frontend und Businesslogik in verteilten Anwendungen austauschen; BasicPro, issue 6/1998, pp. 34-40
- Batch-optimistische Recordsets benutzen; BasicPro, issue 6/1998, pp. 44-54
- RDBMS-Daten nach XML umsetzen; BasicPro, issue 5/1998 (WildSite), pp. 2-13
- DDE-Schnittstellen mit COM-Komponenten kapseln; BasicPro, issue 5/1998, pp. 49-56
- Die PDC 98 in Denver; BasicPro, issue 5/1998, pp. 7-9
- Proprietäre Datenquellen mit VB6 ansprechen; BasicPro, issue 5/1998, pp. 12-18
- Eingabevalidierung leicht gemacht mit Visual Basic 6.0; BasicPro, issue 4/1998, pp. 66-78
- Persistente Objekte mit Visual Basic 6.0 realisieren; BasicPro, issue 4/1998, pp. 12-20
- Flexible Zugriffszahlenbestimmung für HTML-Seiten; BasicPro, issue 3/1998 (WildSite), pp. 18-22
- Internet-Diskussionsforen mit VB und ASP realisieren; BasicPro, issue 3/1998 (WildSite), pp. 3-15
- VB6 - Die Evolution; BasicPro, issue 3/1998, pp. 3-10
- Objektidentität in relationalen Datenbanken; BasicPro, issue 3/1998, pp. 11-18
- Realisierung eines Script-Sprachen-Interpreters in VB - Teil 3; BasicPro, issue 3/1998, pp. 24-38
- Server Scriptlets / COM-Komponenten in VBScript entwickeln; BasicPro, issue 2/1998, pp. 60-65
- Der Windows Scripting Host / Batchprogrammierung mit VBScript; BasicPro, issue 2/1998, pp. 18-27
- Revolution auf leisen Sohlen: XML; BasicPro, issue 1/1998 (WildSite), pp. 3-8
- Globale Sammelleidenschaft / Spider und Suchmaschine im VB-Selbstbau; BasicPro, issue 1/1998 (WildSite), pp. 14-15
- Realisierung eines Script-Sprachen-Interpreters in VB - Teil 2; BasicPro, issue 1/1998, pp. 33-38

1997
- Das Microsoft Script Control / Scripting für VB-Programme; BasicPro, issue 6/1997, pp. 18-23
- Realisierung eines Script-Sprachen-Interpreters in VB - Teil 1; BasicPro, issue 6/1997, pp. 56-61
Finally I find the time to blog about a recent workshop I did with 12 developers in Austria, where I applied Microkernels in a training.
I was hired to bring the development team up to speed with the .NET Framework. The group was heterogeneous: some VB6 developers, some C++ programmers, and all of them had only some experience using .NET.
Instead of going thru a prefab curriculum with them, I ran a so-called "Developer LAN Party" (DLP). That´s a training format I developed with Microsoft Germany´s Christof Sprenger and now also offer together with my .NET twin brother Christian Weyer.
In a DLP the attendees are not bombarded with features of a certain technology, but rather learn how to use many technologies in a very pragmatic and hands-on way: by developing an application from start to finish. First a problem is described. Then the whole group defines the requirements and architects/designs the solution together. The picture shows a snapshot of one of the "design documents" on a whiteboard while I´m helping the attendees structure their ideas.
Once the solution is designed, the attendees split up into groups to each implement one aspect of the solution. And that´s where the fun really starts :-) For the trainer it´s always a challenge to get the groups coordinated in their efforts; and for the attendees it´s a challenge to use technologies they are not very familiar with.
But as the results of all Developer LAN Parties show, in the end every attendee is impressed by the group dynamics that develop, the fun - and the knowledge he/she gains by researching the technologies him/herself. Of course, if someone has a question, the trainer is there to help. But the focus is on learning as you go. It´s "project oriented learning", where you get familiar with a technology by applying it right away.
In addition - and that´s very important - the developers (maybe for the first time) see the development process in its entirety, from requirements analysis to integration and deployment. An experience most of them find fascinating and enlightening, since it gives them a new perspective on what they do every day. Also, not to be neglected, a DLP requires the attendees to cooperate with other individuals (whom they sometimes don´t know beforehand). A DLP thus is also about team playing and group spirit. (The following picture shows some of the attendees during a final wrap-up stand-up meeting.)
Now, what about Microkernels? The problem to solve during this DLP was to steer a blind mouse thru a maze, i.e. build an application that can show and generate a maze and allows an experimenter to steer a robot mouse through the labyrinth. What made it difficult was that the experimenter had no knowledge of the layout of the maze, and the mouse only gave simple feedback like "hit wall" or "cheese found". I was inspired to pose this as a problem to the group by a blog posting by Mat Warren.
A Microkernel came into play when we split the solution into components. We had 5 components that needed to be developed - but in what order? The data access component (for loading a maze) of course was the most basic one, so should one group start developing it and the others wait for it to finish? Clearly not! All groups should be able to start their work at the same time. So what we did was Contract First Design (see here for a discussion of Contract First in the context of the Software Cell architecture model).
In the design phase we not only split up the solution into modules/components, but also defined precise interfaces (contracts) for those components. I coded them up front, right before the eyes of the attendees, and once we were satisfied with the contracts, compiled them into an assembly.
This contract assembly was handed to each group. It was the only assembly they were allowed to reference in their component projects (besides general infrastructure).
So what they needed to do was come up with mock-up implementations of the components they needed during their development. E.g. the group responsible for the frontend component needed a mock-up implementation of a maze, because they needed to visualize it. So they coded a very, very simple maze component which implemented the maze interface. At the same time the real maze component was developed by some other group, which did not need to be concerned with the frontend group´s progress - and vice versa. Each group was able to move forward at its own pace without hindering any other group.
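To give you an idea, here is a minimal C# sketch of what such a contract and mock-up could have looked like; the interface and its members are invented for this posting, not the actual workshop code:

// Contract assembly: the only assembly all groups may reference.
public interface IMaze
{
    int Width { get; }
    int Height { get; }
    bool IsWall(int x, int y);   // true if the cell at (x, y) is a wall
}

// Mock-up by the frontend group: a trivial hardcoded maze,
// just good enough to drive the visualization.
public class MockMaze : IMaze
{
    public int Width  { get { return 3; } }
    public int Height { get { return 3; } }

    public bool IsWall(int x, int y)
    {
        return x == 1 && y == 1;   // a single wall in the center
    }
}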
All attendees were amazed at how little they needed to communicate with the other groups. The only questions revolved around clarifying the semantics of the interfaces defined. But all in all there was little traffic between groups in the workshop room. They all simply ploughed thru their assigned problems, looking up technologies, coding mostly in pairs, sometimes consulting with me about implementation details. Clearly the most difficult component was the one for the algorithm to move the mouse thru the maze. There were two, sometimes three groups working on competing implementations.
Now, in order to be able to easily exchange a mock-up implementation for a real implementation of a component, and to later on integrate everything to form a running application, a Microkernel was used. Every group used it as part of the infrastructure and never instantiated types from another component directly. Instead they requested an instance of another component´s type from the Microkernel. This was a fundamental necessity to actually implement Contract First and move beyond mere lip service.
In the end, lo and behold, after there was at least one implementation available for each component, we stitched together the whole application by writing up one "mapping file" (which contained the mappings between interfaces and the assemblies/types implementing them) - and it worked. It was so amazingly easy, all attendees were stunned. Of course, the problem/solution was comparatively simple, but still... To strictly develop an application according to the Contract First principle and live by it using a Microkernel was new to them and dispelled their scepticism.
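For readers wondering what the Microkernel actually has to do, here is a deliberately tiny sketch of the core mechanism, assuming mappings read from such a mapping file; real Microkernel/IoC frameworks of course offer much more:

using System;
using System.Collections;

// A minimal Microkernel: resolves a contract (interface) to a configured
// implementation type and instantiates it via reflection.
public class Microkernel
{
    // Filled from the mapping file, e.g.
    //   IMaze -> "MazeComponent.Maze, MazeComponent"
    private Hashtable mappings = new Hashtable();

    public void Register(Type contract, string assemblyQualifiedTypeName)
    {
        mappings[contract] = assemblyQualifiedTypeName;
    }

    public object GetInstance(Type contract)
    {
        string typeName = (string)mappings[contract];
        Type implementation = Type.GetType(typeName);
        return Activator.CreateInstance(implementation);
    }
}

// Usage: no component ever instantiates another component's types directly.
//   IMaze maze = (IMaze)kernel.GetInstance(typeof(IMaze));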
For me it was another proof that software development can only move forward if component orientation moves from the literature to the reality of projects. The technologies are there. But the developers must learn how to actually apply the theory. Again a Developer LAN Party has shown its value in doing exactly this: bridging the gap between theory and practice.
PS: Thanks to Gerald for the photographs!
After I put Software Cells into perspective with other architectural models, several people approached me with the question "Now, then, what are Software Cells for?", followed by some problem scenario. And I can tell you, those scenarios made me sweat for a while ;-) But I want to spare you the details and focus on my findings while trying to put Software Cells to good use on those problems.
First of all, to answer the question: Software Cells currently are just about helping you model a software solution to a problem. That´s why at the center of Software Cells are physical Software Solutions as sets of Software Cells aka Applications. One problem = one Software Solution. If the problem is, for example, "Build an online shop", then a Software Cells model would contain several Applications distributed across several computers, e.g.
Software Cells want to help you model the distribution of your logic. Since Hosts and technologies for communicating between Hosts (Adapters/Portals) are explicit entities in the Software Cell world, they force you to make conscious decisions:
- Across how many Hosts do I distribute my logic?
- Which Hosts do I choose?
- How do I do I/O in and between my Software Cells?
But what if I find my solution needs some multi-protocol load balancing server? Or what if I find that the interplay between several Cells in my application becomes very complex? Is there only one level of abstraction in a Software Cell diagram? Is there only one problem domain dimension in my model?
For simple to moderately complex problems I´d say yes. That´s where Software Cells came from. And that´s the problem space most of my customers usually live in. Their Solutions consist of maybe 1 to 4 Applications on average.
But if the problems become larger, you need to be able to structure them more. Remember the motto of the ancient Romans: divide et impera ;-) You not only need to modularize one Solution but also the whole large problem. So as a first step I´d say: chop your problem up into sub-problems. For each sub-problem develop a Software Solution. What you end up doing is developing a Software Society. The modules of very complex software are not assemblies or Software Components (consisting of several assemblies), but Software Solutions.
You have a small/medium problem, you develop one Solution:
You have a much, much bigger problem, you split it up into sub-problems and solve it with several Solutions:
But still, each Solution focusses on solving a customer problem - albeit this can happen on different levels of abstraction.
The first two questions above (concerning a multi-protocol load balancing server and the complex interplay of Cells), though, are not concerned with solving a customer problem. They are concerned with infrastructure. For most problems all the tools you need probably already exist. There is an array of powerful Hosts and a large number of communication technologies to choose from. Choose the right Host and write Adapters/Portals to make the I/O APIs palatable to the logic Cores of your Applications. Depict Hosts in your Software Cells diagram, label the connecting edges between your Hosts with the communication technologies used - but otherwise don´t show any infrastructure. The MSMQ server is necessary for using the System.Messaging API - but there is no place for it in a Software Cells diagram. It´s transparent to your Applications. The same goes for a load balancing server. In this respect, Software Cells are a logical model and don´t show deployment configurations. Whether an IIS/ASP.NET based Software Cell later runs on multiple hardware servers, with a load balancer distributing requests among the instances, is of no concern to the Software Cells model.
However, large problems can hit the limits of existing technologies. You could find an existing load balancing product too limiting (or not even know of one ;-) Or you could find repeating patterns in your Solutions with respect to the communication between the Cells that beg for a generalization.
So what can happen is that you try to fill a gap in the infrastructure. And although infrastructure programming usually should be avoided, it sometimes might be necessary. So what can you do? How does the Software Cell model help you?
I´d say, Software Cells can help you in two ways:
1. Software Cells give you a clear framework to categorize the infrastructure you´re thinking about. Do you need a better communication infrastructure? Or do you need a better Host? Very often the missing infrastructure belongs to one or the other category. The missing multi-protocol load balancer above falls into the category of communication infrastructure, because it connects Software Cells. On the other hand, a relief for the complex interplay of Applications could fall into the category of Hosts and could be called a "Business Process Server" (something like BizTalk).
2. Software Cells diagrams let you depict your overall solution along several dimensions: customer problem dimension, communication infrastructure dimension, and Host infrastructure dimension. (And I guess, there could be even more dimensions.)
For each infrastructural entity you decide to develop you have to ask yourself:
- Is it just an API I need to write, e.g. a general abstraction on top of System.IO maybe?
- Or do I need to accompany the API with some kind of server, e.g. for load balancing? (A new server without its own API, on the other hand, does not make much sense, I´d say.)
- Or do I even want to have my own server run problem domain logic, i.e. do I need my own Host?
System.IO is an example of just an API. System.Messaging is an example of an API that also needed a server (MSMQ). And .NET Remoting is an example of an API and a server which also is a Host.
Once you have identified the infrastructural pieces you need to solve the original customer problem, I´d say, set up separate projects for them and solve them independently of the customer problem. What you then get is some kind of Solution hierarchy along the above dimensions. Here´s a picture of a 2-dimensional Solution:
Along one dimension you see the "regular" Solution to the customer problem, consisting of two Software Cells which communicate using some concrete I/O API. However, since you developed that API and provided an infrastructure for it, you know it´s only a logical connection.
The physical connection runs along the infrastructural dimension or - so to speak - on a different level which is not directly seen by the problem domain Software Cells. For them the new infrastructure is transparent and hidden behind the new API.
The bottom line: Software Cells provide a model for any software. But beware of putting too much into a single Solution diagram. If Solutions become too complex, split them up and start developing a Society. If you´re really, really missing infrastructure, then separate these concerns from the customer problem domain and set up distinct projects with their own Solutions - which you later fit into the "main" project as infrastructure "components".
In my previous posting I added the missing pieces to the Software Cell puzzle: communication technologies like .NET Remoting, MSMQ, but also ADO.NET. Now you know all the basic structures of any software:
- At the heart of your solution always is processing logic, the Core.
- Processing logic always needs to communicate with the outside, because software still is about processing input and producing output. I/O is done via Adapters and Portals.
- Core as well as Adapters and Portals run inside and under control of a Host. Logic + Host make up an Application or Software Cell.
- If non-functional requirements require it, you can split up your Core logic into several pieces that run in different Hosts. By distributing logic across several Hosts you produce a distributed Software Solution consisting of several Applications.
- Hosts can be chosen from a readily available set that includes COM+, IIS/ASP.NET, BizTalk, and SQL Server, but also MS Word, MS Outlook, and even simple Console or WinForms EXE assemblies. Hosts differ in the infrastructure they offer.
- The communication technology for each Adapter and Portal can also be chosen from a set of technologies like Web Services, MSMQ, .NET Remoting, System.Net, System.IO, ADO.NET and many others. Which technology to choose depends again on non-functional requirements, but also on whatever the communication partner on the other end supports/requests.
So much for a wrap-up of the details of Software Cells. But what about the big picture? How Software Cells close a gap in the software universe, I tried to explain in a previous posting. But that still leaves the question: how do Software Cells compare to the traditional Software Layers architecture and others?
First of all, Software Layers are a conceptual model of software, whereas Software Cells are very concrete. Software Layers distinguish between three fundamental parts within a piece of software: frontend, business logic, data access.
This is all well and good, and it helps in structuring your thinking. But it does not help you much when you start implementing. Layers can´t really help you decide, for example, where to put business logic. Also, the distinction between business logic and data access sometimes is not very clear. (Hence the ever recurring questions from developers: "Should I put business logic in the frontend? Is it ok to have business logic in my database as stored procs?") Or to say it more generally: Software Layers don´t provide much help in deciding how many tiers your software needs and on which tier to put your code.
Also, Software Layers lack a sense of orientation. Where are they in terms of the software universe? Are Software Layers about a single application? Or are they about multiple applications?
According to the Software Cell terminology, though, Software Layers suddenly fall into place: Software Layers are a conceptual model for Software Solutions. If anything, they help you to structure your thinking about a whole solution - which possibly may later on consist of several Applications.
However, in my opinion, the importance of Software Layers is largely exaggerated. They are a concept from the mid 1990s, when there was only single- or two-tier software and we needed a model to describe software with a third tier: an application server like MTS. But since then the world has moved on; we have a lot more choices for hosting logic, and a lot more connections between parts within the same Software Solution as well as between different Solutions.
I´d even say Software Layers stand in the way of flexible software architecture, because they suggest a Solution consists mainly of just three tiers (or Applications in Software Cell speak) and that those tiers strictly build upon each other (plus some orthogonal functionality like security, caching etc.). But this view is too limited. Why shouldn´t I distribute (business) logic across 5 Software Cells if this leads to a good solution? Why shouldn´t I encapsulate security functionality in a separate Application? Software Layers either don´t provide answers to these questions or they suggest this kind of thinking is inappropriate.
Now for my last blow against Software Layers: I think it is absolutely counterproductive that Software Layers stack the frontend on top of the other layers. This has led to countless misunderstandings and bad designs, because it lures developers into giving the frontend a higher priority than other parts of the software. Again and again it seems to suggest that starting development by thinking about the frontend is ok. But it isn´t! "Frontend first" is maybe the most widespread cause of bad architectures/designs.
I see only one way to save the spirit of Software Layers, and that´s by giving up the layering...
...and instead arranging the logical parts of an application in a way so that none has a higher priority than the others. This way we´d retain whatever is helpful about Software Layers and get rid of their limitations and invalid suggestions.
To face the new challenges brought about by Web services, fellow Microsoft Regional Director Clemens Vasters and Microsoft´s Steve Swartz have come up with a variation and evolution of Software Layers: the Onion.
They wrapped the layers around a center which led to concentric circles, and renamed the layers.
And this is indeed a step forward, I´d say.
But first, let´s see what Onions are about: They are about services, that means, they are about software that´s accessible via SOAP/WS-*. From the point of view of Software Cells that means they are about Software Solutions and collections of Solutions aka Software Societies. How a service is structured (whether it´s a single Application or several) is not important to the Onion model. Onions thus are on the next higher level of abstraction, above Software Layers. And what they are trying to say is: Several cooperating solutions are structured like a single solution; there is some frontend, some inner logic, and some data/resource access:
This really makes for nice drawings, I´d say. The Software Universe almost looks fractal: You can drill down and down and down and on each level you find more details. (However, as opposed to the usual fractal structures, each level of detail looks the same. It´s always public interface, implementation, resource access.)
Theoretically this has some appeal, but in the end I don´t find it very enlightening. What does this notation add to Software Layers? Why use circles instead of layers? The only benefit I see is that a circle has a clear surface. This suggests all access to the implementation has to go thru the outer Onion layer. Hm... ok, that´s not bad. But other than that... the fractality of the model could have been depicted with real layers, too.
And I´d even say, the circular nature of Onions is not helpful in depicting larger systems.
Can you imagine drawing a Society consisting of several Solutions with their Applications using these Onions? I can´t. So where I see the Onions is at the level of very coarse grained architectural thinking. Thinking of a Solution or Society as a shape with a clear surface is beneficial.
However, I think the Onion, too, suffers from a wrong suggestion of priorities: At its center is the resource access, at its rim the frontend/public interface. The logic, which really is the heart of any Solution, is just a thin layer wedged between input and output.
From Clemens´ point of view this might be understandable. He´s concerned with SOAP, WSDL, WS-*, Indigo and the like, which are all communication technologies. So the outer interface service layer and the resource access at the Onion´s core are on his mind all the time. But in the end they are of comparatively little importance. What is most important is the processing logic of any Application or Solution or Society. It´s what needs to be done that counts, not where the data comes from or goes to.
Please get me right: Without I/O there is no processing. But the whole purpose of software is processing, not I/O. So I´d say we should put processing at the heart of our thinking, models, and designs. Just because communication technologies need to mature for real-world cross-platform interop does not make them more important than the processing logic.
Also, I don´t see why Onions list COM on the public interface and not as a resource access technology. Or why isn´t "WS" (for Web service) a public interface option? Why do Onions put communication technologies into two categories at all? As Clemens says, "On the next higher level of abstraction, presentation services may very well play the role of data services to other services. And so it all repeats." But why doesn´t this insight show up in the model? It means one service´s output is another service´s input, and that means both ends need to use the same I/O technology. If one service outputs data to a database (resource access and "SQL" in the Onion picture), then another service is going to read it from there thru its public interface.
Or, hm, do Onions want to limit us to communicating between services via the technologies listed on the public interface? I don´t want to believe that.
So, what do I think about Onions? I´d say, I like them for trying to provide a model for describing cooperating Solutions. They step Software Layers up from the Solution-only view to a Software Society view. But I don´t like them for being inconsistent in their statements about communication technologies and limiting in their visualization capabilities.
Microsoft´s Pat Helland has introduced the term "Fiefdom" into the discussion about software architecture. Fiefdom stands for an autonomous piece of software that performs a task, guards its resources and allows access from the outside to its service in a controlled way. Hence Fiefdom is a metaphorical description of a service as in SOA (Service Oriented Architecture).
Fiefdoms thus are a purely conceptual construct and are orthogonal to the notion of Software Cells etc. A Fiefdom can be a single Application, or a Solution, or a Society. It all depends on your point of view, the level of abstraction you´re on in the Software Universe.
As long as Fiefdoms are not misunderstood as implementation guidelines, I think they really help to "think in services." Or to say it the other way ´round: Fiefdoms are not concerned with implementation. They don´t want to guide you in choosing how to distribute logic across Hosts like COM+ or IIS. They don´t guide you in choosing between SOAP over HTTP vs MSMQ. They just say: Think clearly about your boundaries! And view whatever is within the boundaries as an autonomous entity.
And that´s a very helpful message - right in line with Software Cells´ Contract First thinking.
Side note: Interestingly, Pat Helland has had some difficulties with the term "application", like I had before Software Cells. And he has come to almost the same conclusion: "I would propose that we use the word application exactly as it has meant in the past. An application is a collection of functionality that is tightly interrelated. For the most part, applications have historically stood alone." Yes, let an Application be something that stands alone and can be clearly identified. But we differ fundamentally in our final conclusion! Pat says "An application may be implemented out of services.", whereas Software Cells map services to a group of Applications. Since the world´s software mostly still is not a service, and Applications as the Software Cells model defines them fill a gap in the Software Universe, I´m sorry to say I don´t see any value in Pat´s use of the term "application" and his discouragement of using it. "Application" is a valuable term if clearly defined.
Now, at last, a word about SOA. What is SOA? Or is there SOA at all? Clemens Vasters in the meantime has come to the conclusion that there is no such thing as SOA - and I agree with him. The SOA hype was/is around communication technologies. But communication does not equal architecture (which is also about how to arrange structures).
Nevertheless, let me describe what SOA could have meant in terms of Software Cells: In Software Cell terminology a service is a Software Solution, or: a service is a special case of a Solution. Not every Solution is a service, but every service is a Solution. And a Solution consisting of only a single Application is again just a special case. So in general a SO architecture looks like this:
Several Solutions communicate with each other to fulfill an overall purpose thereby forming a Software Society.
Since there is no physical container for a Solution, each Application in a Society could belong to several Solutions at the same time (or even several Societies). Solutions and Societies are just logical concepts, whereas Applications are tangible entities.
What Clemens is saying is: SOA never made a statement about how code is hosted or arranged; it mostly talks about how Solutions are connected (e.g. by using SOAP/HTTP or MSMQ). SOA thus talks about communication technologies and maybe Adapters/Portals, but nothing more. From a SOA perspective a service (Solution) has its own contract(s). To translate this into the Software Cell world, I could maybe depict a Society like this:
Communication always flows thru Adapters and Portals, so Solutions should have Adapters and Portals, too. Or shouldn´t they? Logically they sure define contracts. But physically they don´t implement them, since Solutions themselves are no physical entities. The contract of a Solution (service) is the sum of the contracts published by its Applications for inter-Solution communication. So I´d rather say the Adapters and Portals of a Solution are logical or even virtual, whereas the Applications´ Adapters/Portals are real and tangible:
Any Application within a Solution can contribute a Portal to the Solution´s contract. And any Application within a Solution can access another´s published Portals via an Adapter that matches the contract. I don´t think it is necessary to introduce the logical notion of special "surface applications" which are the only ones allowed to publish a contract to other Solutions. And I don´t think it´s beneficial to categorically assign Applications to one of the Onion´s layers.
However, I think communication between (!) Solutions should be limited to cross-platform technologies, which today means to contracts defined by XML Schema/WSDL and WS-*. Why? To grant Solutions full autonomy in terms of implementation. Which means they are free to choose the number of Applications needed to provide their service(s). And they are free to choose the Application Hosts. And they are free to choose how they keep state. (This is along the lines of Fiefdoms.)
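To give a flavor of what publishing such a cross-platform contract looked like with the .NET tools of the day, here is a hypothetical ASP.NET Web service (ASMX) acting as a Portal of a Solution; ASP.NET generates the WSDL contract for it automatically, and the service and method names are invented:

using System.Web.Services;

// A Portal of a Solution, published as a Web service. Other Solutions
// bind to the generated WSDL, regardless of their platform.
[WebService(Namespace = "http://example.org/ordering")]
public class OrderingPortal : WebService
{
    [WebMethod]
    public string PlaceOrder(string productId, int quantity)
    {
        // ...delegate to the Core logic; return an order id.
        return "order-42";
    }
}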
So, my bottom line for SOA is: It´s true, the SOA discourse does not really revolve around architecture but around communication. But if you like, you can perfectly well and easily map SOA´s concepts to the Software Cells model. And I´d even go further and say Software Cells make it clear that SO lacks a clear definition of what the contract of a service (consisting of several Applications all publishing contracts) is. There is no physical place where the potentially many pieces of a service´s contract come together or are defined. A service´s contract is just a set of several Application contracts logically tied together. That´s even worse than the set of public types in an assembly, which are at least bound together by the assembly manifest.
A general word about architectural patterns
Besides popular patterns like Software Layers or Onions there are many others, e.g. Pipes and Filters or Enterprise Bus. What do Software Cells mean for them? Or do those patterns stand in the way of Software Cells and are incompatible with them?
No, quite the contrary! Software Cells embrace all those patterns and provide a vocabulary to guide their implementation. Take Pipes and Filters, for example:
The pipes&filters picture in the middle is from the Microsoft document. And as you can see, Software Cells allow for a variety of implementations. (You could even think about pipes as Applications/Solutions if you like.)
Software Cells are a very general model for software. You can use them to depict any architecture. Just be clear about:
- Where/what are my Applications? How do I map conceptual entities to physical entities?
- What´s the best Host for an Application?
- Where are Solution boundaries? Think about Contracts!
- Which communication technologies do I choose?
In my last posting I tried to put Software Cells into perspective. And as it turned out, they perfectly fill a gap that existed between COP/OOP and SOA. With Software Cells we can seamlessly stack levels of abstraction onto each other, from single statements to groups of Solutions, which I call Software Societies.
However, one piece of the Software Cell puzzle is still missing: How do Applications communicate with each other? So far I´ve depicted Software Cells as bordering on one another when they use each other´s services.
There was no gap between the Applications, because I concentrated on intra-Application structures and Hosts. But talking about Core logic and SQL Server as an Application Host is not enough. Technologies like System.Net or .NET Remoting need to find a place in the Software Cell model. So let´s step back and see what we´re talking about:
I think it´s very beneficial to go back to the roots of programming and state once more:
All software is about I/O.
A single IL Add op-code is about I/O just like a service in a SOA is about I/O. On all levels of abstraction of the software universe as depicted in my last posting, the structures described take some input and produce some output. Without input and output, software simply makes no sense.
An IL op-code takes its input from the stack and produces output on the stack. An Application might get its input from the user thru the frontend and send its output to a database. Or a service might read input from a database and send output via SMTP. Input and output of course can be as simple or as complex as needed. The simplest case of input is the "signal" to start producing output, i.e. an empty input. But output is never empty, otherwise a software structure would have no purpose (besides heating the environment by exercising the CPU :-).
But software of course also is about processing information. Something has to be done with the input to produce the output. So in the middle between input and output there is some logic:
input ---> processing logic ---> output
Sure, this is by no means some grand new insight. But sometimes it´s important to remind ourselves of the very basics to understand the brand new.
Fortunately, translating this old truth to the Software Cells world is easy:
As you can see, Software Cells map the general input-processing-output scheme to concrete structures in an Application: Processing is done in the Core of an Application. Adapters and Portals are responsible for collecting input and distributing output. (I find this very clear separation of concerns a real advantage of Software Cells over the layering model. Whereas for many developers it is difficult to decide what code belongs in a data access layer, it is easy to know what´s logic and what´s pure I/O.)
I´d say it´s very obvious that Console.WriteLine(...) sends output to the console. The same is true if WriteLine() is called on some TextWriter connected to a stream over TCP/IP. Input, on the other hand, can be collected using ReadLine() on some stream. But what about a WinForms frontend? It´s the same. mytextbox.Text = ... sends output to a GUI frontend, and calling mylogic.Process(mytextbox.Text) sends input to the processing logic. Even though modern UIs are event driven, they nevertheless are I/O "devices".
But what about System.Net or .NET Remoting or even ADO.NET? To put it very bluntly and a little provocatively: These technologies, too, are only just I/O technologies. They just differ in the level of abstraction, the transport technology, data format and the programming model. But nevertheless they are all just about I/O. However, where the input data exactly comes from - whether from a hardware data sampling device or from a database - and where output exactly goes to - on a TCP/IP connection or a database or a file - is not important for the processing logic. The respective APIs of the I/O Adapters/Portals are responsible for managing all this.
This brings me back to the purpose of the Adapter and Portal structures: Their task is just to connect the logic to an I/O "device" and bridge the gap between the data models of Core logic and I/O "device". If the Core´s domain is insurance contract management, then it wants to "think" on that level and does not want to concern itself with whether those contracts later on are stored in an RDBMS or a file or are sent to some other service for further processing. The Core logic just wants to use an Adapter/Portal on its level of abstraction and e.g. call myAdapter.StoreContract(mycontract). The purpose (interface) of the call is clear, the implementation is opaque, it´s a black box.
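In code such an Adapter contract could look like this minimal sketch (the interface and type names are hypothetical; only myAdapter.StoreContract() is taken from above):

public class Contract { /* insurance contract data */ }

// the Adapter contract on the Core´s level of abstraction
public interface IContractAdapter
{
    // where the contract goes - RDBMS, file, service - is opaque to the Core
    void StoreContract(Contract contract);
}

public class ContractManagementCore
{
    private IContractAdapter myAdapter;

    public ContractManagementCore(IContractAdapter adapter)
    {
        myAdapter = adapter;
    }

    public void Process(Contract mycontract)
    {
        // ...domain logic on the level of insurance contracts...
        myAdapter.StoreContract(mycontract);  // purpose clear, implementation a black box
    }
}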
To make this more tangible let´s look at a set of Applications and how they communicate with each other.
Software Cells - or to be more precise: their Core logic - always communicate with the outside world via a stack of I/O layers. If you think of the 7 layers of OSI you´re right on. It´s the same here with Adapters and Portals:
- Each layer in the I/O stack has the same purpose: to send/receive data.
- Each layer is on a different level of abstraction, e.g. the top level is concerned with objects, the lowest level with bytes on a stream.
However, the number of layers within an Adapter/Portal is not fixed. A minimum of 2, though, makes sense: The bottom layer is always the raw API, the top layer is on the abstraction level of the Core. (Finding an API that already is on the level of the Core will probably be rare.)
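Here´s what such a minimal 2-layer Adapter could look like, continuing the hypothetical contract example from above (the table schema and the ADO.NET details are just assumptions for the sketch):

using System.Data;
using System.Data.SqlClient;

public class SqlContractAdapter : IContractAdapter
{
    // top layer: speaks the Core´s language (domain objects)
    public void StoreContract(Contract contract)
    {
        StoreRow(SerializeContract(contract));
    }

    // bottom layer: speaks the raw API´s language (ADO.NET, rows on a connection)
    private void StoreRow(string data)
    {
        using (SqlConnection cn = new SqlConnection("..."))
        {
            cn.Open();
            SqlCommand cmd = new SqlCommand("INSERT INTO Contracts (Data) VALUES (@data)", cn);
            cmd.Parameters.Add("@data", SqlDbType.NText).Value = data;
            cmd.ExecuteNonQuery();
        }
    }

    private string SerializeContract(Contract contract)
    {
        /* translate the domain object into the I/O "device´s" data model */
        return "...";
    }
}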
Also, please don´t limit your thinking to the typical network I/O scenarios when you think about the communication stacks of Adapters/Portals. I´m serious when I say that .NET Remoting and ADO.NET are to be considered the same from the point of view of the Core logic. For me there is no essential difference between calling remote logic via .NET Remoting or via an ADO.NET DbCommand. And there is no essential difference between returning results from such a call as a parameter on a stack or as a resultset via a SQL Server pipe from within a C# stored procedure. It´s all the same.
Communication technologies like System.Net, .NET Remoting, Web services, MSMQ, and ADO.NET of course differ in several respects, e.g. level of abstraction (System.Net: low, .NET Remoting: high) or data model (.NET Remoting: objects, ADO.NET: simple data types or resultsets) or programming model (.NET Remoting: RPC, MSMQ: message based) or synchronicity (ADO.NET: (primarily) sync, MSMQ: async) or persistence (ADO.NET: persistent, System.Net: transient). But that does not make those technologies fundamentally different. Again: it´s all just about I/O or communication.
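To see how little difference the Core perceives, compare two hypothetical ways of calling remote logic (the interface, URL, and stored procedure name are made up; only the APIs are real):

public interface ICalculator { decimal CalculateTotal(decimal net); }

// via .NET Remoting: object data model, RPC programming model
ICalculator calc = (ICalculator)System.Activator.GetObject(
    typeof(ICalculator), "tcp://someserver:8080/Calculator");
decimal result1 = calc.CalculateTotal(100m);

// via ADO.NET: resultset data model, command programming model
// (cn is an open SqlConnection; CalculateTotal could even be a C# stored procedure)
System.Data.SqlClient.SqlCommand cmd =
    new System.Data.SqlClient.SqlCommand("CalculateTotal", cn);
cmd.CommandType = System.Data.CommandType.StoredProcedure;
cmd.Parameters.Add("@net", System.Data.SqlDbType.Money).Value = 100m;
decimal result2 = (decimal)cmd.ExecuteScalar();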
The implication for software development, though, is important: Don´t try to figure out complicated things like "What is business logic?" or "Is it ok to put business logic in the frontend?". Just clearly separate logic from I/O. Separate Core from Adapters/Portals.
And don´t get confused by all those technologies/standards like Web services, System.IO, System.Net, .NET Remoting, MSMQ, Serviced Components, Indigo, ADO.NET, SQL XML - or even SMTP, POP3. Which to choose from this ever-growing set of options is determined by several factors, which should be pretty clear if you know your problem domain and have chosen the Hosts for your Software Cells. E.g. if an Application needs to call logic within another whose Host is SQL Server, then ADO.NET is the likely choice - but you also could choose SQL XML or SOAP. But if you need to connect two Console Hosts then .NET Remoting probably would come to your mind first.
Equally possible, though, would be MSMQ or - and this might surprise you - ADO.NET. MSMQ is an established communication technology for message-based async communication. But what about ADO.NET? Isn´t that a database API, not a communication API? You´re right, that´s how it´s usually positioned. But what is a database? To put it provocatively: A database is like a message queue (which is often implemented using a database engine), which is like a file (which surely is the container for all database data), which is like a message (which is just a serialized form of data) on a wire. So a database can be viewed as a container for data to be "exchanged" between different applications in space and time. The main difference between a message in a queue containing customer information and the same customer information in a database is that the message has a target, whereas the data in the database "just sits there and waits to be picked up." But still, the data an Application outputs to a database sooner or later will be the input for some other (or the same) Application. The database is "just" an intermediate store between processing steps.
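A quick sketch of this view (the table and column names are made up; again, cn is an open SqlConnection and customerXml some serialized customer data): one Application "sends" by inserting, another "receives" later by selecting.

// Application A "sends" a message by writing it to a table...
SqlCommand send = new SqlCommand("INSERT INTO CustomerInbox (Data) VALUES (@d)", cn);
send.Parameters.Add("@d", SqlDbType.NText).Value = customerXml;
send.ExecuteNonQuery();

// ...and - minutes or years later - Application B "receives" it by reading the table
SqlCommand receive = new SqlCommand("SELECT TOP 1 Data FROM CustomerInbox", cn);
string message = (string)receive.ExecuteScalar();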
Or to put it differently: Today there is much talk about interfaces/contracts between software on different platforms and versioning of those interfaces. But what usually is not seen is that the most long-lasting interfaces between solutions are RDBMS databases. Their lifetime is often measured in decades, whereas Web service interfaces are comparatively much more volatile. (By the way, this emphasizes the importance of sound information modelling in databases and hiding their internal structures behind a strict contract.) Communication thus is not only in space, but also in time.
But now back to Software Cells. What does this all mean for designing Solutions consisting of Applications consisting of Core and Adapters/Portals?
Applications can be connected to resources and other Applications by a large number of different I/O or communication technologies. That means drawing Software Cells like I did so far will pretty quickly become unwieldy. Thus, despite the compact look of a Software Cell "tissue", you´ll sooner or later want to move them apart and connect them with lines:
This notation is more flexible than the original one, I´d say. It easily lets you connect Applications over larger distances within a drawing, allows several connections to/from an Application, and gives you space to add technology descriptions to the connections between Adapter and Portal. Separate concerns are clearly separated: domain knowledge and functions are located within the Core, infrastructure for I/O is located at the Applications´ boundaries and in the intercellular space. And the "ecosystem" (infrastructure) in which the Core works is represented by the Hosts.
I´d say that pretty much wraps up Software Cells. The technological puzzle pieces fall clearly into their places. No more wondering about what an NT Service is compared to COM+. No more wondering about how the ADO.NET and Web service APIs for SQL Server differ. Fundamentally they are in the same categories: Hosts and communication technologies. And you choose between them by comparing their features like programming model, data model, communication model, infrastructure, deployment etc.
And you never need to wonder again whether SQL Server 2005 is an application server now that you can write C# stored procedures. The simple answer is: Yes, it is an application server, or better: it is an Application Host, just like COM+, IIS, and a console EXE are.
So you can choose freely among all those options to find the best layout for your Core logic. Don´t fear distributing "logic" across several Applications. That´s perfectly ok. Put it in the "frontend", put it in a database server. Just draw a clear line between logic and I/O, and set up contracts between your Applications and components.
This is the essential message of Software Cells.
Lately there has been some discussion on whether and how Google might be a threat to Microsoft. Bill Gates even said "[they] kicked our butts". Right now I´m not interested in the details of where exactly Microsoft feels hurt and whether Google can do substantial damage to Microsoft´s position. Rather, what surprises me now is the surprise everybody felt when Google came out with desktop search functionality and Google Maps etc. I´m surprised about this because I just met with Craig Silverstein, Google´s director of technology and first employee - or Google´s man behind the curtain, as some call him.
Craig (foreground) with his two employers, the Google founders Larry Page and Sergey Brin.
I found Craig to be very straightforward, modest, credible, and honest. And thus I fully believe him when he chants his mantra "don´t be evil" and explains Google´s mission "to organize the world's information and make it universally accessible and useful." He is equally open and serious about everything else. That´s why I can´t believe nobody had thought of Google entering the realms of Microsoft sooner or later: the desktop and organizing information.
Google currently is not into actually producing information or helping to produce it, like Microsoft is with Office, its development tools, and data servers (e.g. SQL Server, Exchange, Sharepoint). But once the information has been produced, Google is very, very, very serious about making it available to all interested parties. So of course Google enters the enterprise and sells "Google for the Intranet". And of course Google is interested in what´s stored on your desktop. And of course Google is interested in letting you search for a local pizza place and not only providing a link to its website (if it exists at all), but showing you its location on a map.
Microsoft started on the desktop, is trying to get into the enterprise market, and is extending to the Internet.
Google started on the Internet, is selling a very focused family of products to the enterprise market - and now is slowly extending to the desktop.
So the location where Microsoft and Google inevitably meet is the (enterprise) desktop. But, as I now think after meeting Craig, this should have been clear all along.
And I´d even say Google´s mission is very clearly defined and they are very focused on it, so it´s somewhat predictable where we´ll see products coming out from them in the future. Microsoft, on the other hand, has less of a focused mission. They started out with Basic and MS-DOS; now they are selling not only development tools and an operating system, but Office, SQL Server, BizTalk, games, and even the Xbox. So if those two worlds of Microsoft and Google should collide in the future and anybody claims surprise, I´d say it can only be because Microsoft has extended again into another area.
PS: Even though I´m a firm believer in the benefits of electronic communication, today´s meeting was proof to me of how important personal meetings are. It´s one thing to read about a person. It´s a completely different thing to actually meet the person. Seeing Craig speak conveyed so much more information - not about Google´s technologies and goals, but about what´s driving them. Because in the end, companies are still the sum of the people working and living for them.