Building a System-of-Systems

Building a System-of-Systems infrastructure in a .NET environment

Authors: Natty Gur, Gabriel (Gabi) Levi

Suggested Table of Contents:

1.      Vision

1.1.   The path toward system of systems.

1.1.1.      Information is power only if it is processed.

1.1.2.      From apps to systems.

1.1.3.      Information is power only when it is gathered together.

1.1.4.      Information overflow and SOS motivation.

1.2.   You are not alone.

1.2.1.      Other technologies in your organization

1.2.2.      Other operating systems (Mainframe, AS/400, Linux).

1.2.3.      Data storage options: relational databases, object-oriented databases, directories.

1.2.4.      And you never know what will come tomorrow.

1.3.   Enterprise architecture overview.

1.3.1.      Adopt an available industry architecture.

1.3.2.      Create Enterprise Architecture.

1.3.3.      Create System architecture.

1.3.4.      Design System.

1.4.   Exploring the maze of standards for building distributed applications.

1.5.   Open Distributed Processing - ODP

1.5.1.      The Enterprise Viewpoint

1.5.2.      The Information Viewpoint

1.5.3.      The Computational Viewpoint

1.5.4.      The Engineering Viewpoint

1.5.5.      The Technological Viewpoint

1.6.   Enterprise Viewpoint – our organization needs

1.6.1.      Organization needs

1.6.2.      IT department needs and constraints

1.7.   Information Viewpoint

1.7.1.      Types of data and storage

1.7.2.      Treat all data types as one.

1.7.3.      Methods to Access Data.

1.7.4.      Data and Transaction.

1.7.5.      Data and Security.

1.7.6.      Data availability (storage)

1.7.7.      Audit


2.      .Net and current windows

2.1.   Architecture and design

2.1.1.      Computational Viewpoint: Remoting; mapping the system to entities; blueprint; Data, Logic, Enterprise Logic, View and other design patterns; servers (MTS and others); the system as a base over chatty classes; handling exceptions

2.1.2.      Engineering Viewpoint: the application as a platform; distributing tiers over physical hardware to gain performance (visualization and business; business and data; data tier and data source); treating all tier components as physically independent; taking clustering and isolation levels into account; load balancing of components and systems; batch processing and other components that are not part of the main flow

2.2.   Building Infrastructure

2.2.1.      Tiers: ORM, DataSet, DataReader creators

2.2.2.      Technological viewpoint


3.      .Net and Longhorn

3.1.   Architecture and design

3.1.1.      Computational Viewpoint and Longhorn: choosing the right Indigo service; mapping the system to entities; blueprint.


3.2.   Building Infrastructure



Microsoft .NET is edging toward an architectural revolution; its new standards, methods and provisions raise the need to discuss the adaptability of our current architectural designs and to prefigure our new application infrastructures. In the globalization era, our systems need to be built with generalization in mind, which will save many future alterations and significantly improve any designated application. This article should help every application developer participating in team development, regardless of experience or degree.




There are two major objectives for this article: first, a broad discussion of the architectural outline for systems planned to be developed in the .NET environment; second, to lay practical foundations for engaging .NET architecture in developing an enterprise application infrastructure. By an architectural outline we mean a compilation of architectural rules, needs and standards which regulate the infrastructure software assemblies used to develop the different user applications that make up the organizational information system. One of the most significant standards for this outline is Open Distributed Processing (ODP), which determines the communication architecture for the OSI application layer. The ODP standard has become a modern reference when formulating the infrastructure of complex applications designed to distribute significant amounts of information over different computer systems.


Understanding the five ODP viewpoints:


The OSI-ODP standard recommends five major viewpoints from which to examine the application layer and design an enhanced, robust application:


  1. The Enterprise Viewpoint – guides us to investigate deeply the goal of the application from the organization's perspective, e.g. whom the system will serve and what its contributions will be.


  2. The Information Viewpoint – asks what kinds of information and data will be transformed and handled within the designated system, and how the different kinds of data will be retained and transferred through the system's information pipes.


  3. The Computational Viewpoint – inquires into the system's interfaces and their implementations, in order to create a mechanism in which all the components of the system interoperate to realize the system's full potential and carry its load both at launch and through its later growth.


  4. The Engineering Viewpoint – asks how the different components of the system will be dispersed and distributed, and which infrastructure will be set up to achieve maximum productivity from each system.


  5. The Technological Viewpoint – refers to the kinds of technology that will participate in the application's structure: does the system include purchased applications and third-party technologies; do we already have ready-to-use technological components that can be integrated for the benefit of the designated system?



* To synergize our inquiry, it is recommended to examine the system according to each viewpoint separately, and then to combine the results into an inspection of the system from a much wider view.


The ODP Distribution Transparencies:


The ODP standard defines eight transparency regulations which obligate every open system:


  1. Access – Transparency of the data and information required to interoperate with other system objects within the parent system, or with objects and components in interacting systems.


  2. Failure – Transparency of failures and exceptions within the system, and transparency of the way to track them and recover from them.


  3. Location – Transparency of pointers, aliases and physical locations of the system components and interfaces used to connect inner and outer components.


  4. Migration – Transparency of platform changes and migrations of system objects, enabling fluent access to specific objects in their new platform or location.


  5. Relocation – Transparency of the regulated host relocations (e.g. storage clusters) of system entities, to enable load balancing of the application system.


  6. Replication – Transparency of duplication procedures of data and information within the system. (Databases, state controls)


  7. Persistence – Transparency of persistence, enabling redundancy of objects and entities in cases of inaccessibility.


  8. Transaction – Transparency of the intermediations and transactions between the system's data components. (e.g. XML schemas)



Wearing our brand new .NET outfit


The Enterprise Viewpoint


To realize the Enterprise Viewpoint we will use an illustrative organization which represents most of the issues and basic needs that arise in developing a system-of-systems. This organization is preparing to build a new designated application which will enable its users to browse through different kinds of data and information (tables, articles, geographical maps, charts, diagrams, etc.). This data is compounded from different sources and formats and serves the organization's users in their day-to-day tasks, mostly in reaching business conclusions (a knowledge system).


The initial demands have already been presented to the senior management of the IT department, the budget has been arranged, and the "green light" has been given.





Our IT department is divided into different development units, each responsible for the development, distribution and maintenance of its assigned projects. The computer and application infrastructure in the organization allows users to interface and wander from one application to another (ERP, CRM, business knowledge) and at the same time to retain functionality, history and data that are common to the application sessions (gained by the system-of-systems). Note that in this illustrative organization the computer systems are mainly based on Mainframe and AS/400 (i.e. centralized systems).

As befits a progressive organization, and thanks to Microsoft's efficient sales force, our illustrative organization acquired new workstations running Microsoft operating systems (open systems). These systems decentralized the applications and caused disengagement of the former business systems. The result was hard to bear: no one in the IT department was experienced enough to prefigure this migration, the organization's systems became slower and unstable, and the so-called upgrade caused a lot of unplanned expenses for outsourcing staff who worked day and night to stabilize the systems.

A proactive, thorough architectural analysis is essential; engaging open systems with a centralized computing environment must be done carefully, in a way that makes this hybridization maximally cost-effective. Even in organizations already running pure open systems, blessed decisions (e.g. migrating to .NET) can be disastrous if not properly prefigured.


Orientation, compliance, control of data and information, commonality and uniformity are the major standards an organization expects from its IT department. These outcomes can only be achieved by exteriorizing interfaces and functionalities that are transparent to other authorized systems within the business system; these interfaces must allow data and information interchange, functionality retention and uniformity of the user interfaces across the different systems.


In addition to these major outcomes there are more standards expected from the IT department; these standards vary from the efficiency and speed of the applications to their tolerance in handling radical load conditions. A robust system-of-systems synergizes the performance of all systems within the parent system and should be ready for failures within daughter systems, to prevent regression in total organizational productivity (the domino principle). Reaction times and a No Single Point of Failure (NSPOF) design are two of the most important factors for the durability of the system. The ability to promptly generate an informative report can be a critical factor on many occasions. Sometimes, in critical situations (e.g. the September 11 events), the load on the systems can create an extreme increase in reaction time, which can cause the organization and its missions severe damage.


Another aspect that should be well prefigured is permissions and compartmentalization. Permissions define which entities (commands and controls) of the system a user can operate and which he cannot (add, copy, update, etc.). Compartmentalization organizes clusters of data and information which a defined group of users can access while other users cannot. Compartmentalization and permission plans are mostly determined according to the organizational hierarchy and its security policy. Every application should support permissions, and most should be enabled with compartmentalization features.




Beyond the demands of the organization, the IT department has its own inner demands and regulations, some derived from the organization's standards and some from the IT department's inner policy. The existence of development outlines for predictable problems can greatly curtail the time it takes to cope with development issues.


Another common R&D setback is the development of the same application components in a few separate branches of the IT department; this setback is most noticeable in organizations that suffer from task overload and lack of coordination.

This can be mitigated by regulating communication and coordination between programming teams and accompanying the code with crystal-clear remarks and observations.


Another aspect that should be given serious thought is the deployment of organizational applications over affiliate communication lines (e.g. T1, Frame Relay, etc.). Most of the organization's affiliates connect to the headquarters through relatively slow, dedicated communication lines or VPNs; these connections are usually encrypted to protect the interchanged data (making them even slower). A development unit building a server-based application that does not take this limitation into consideration may find itself in big trouble at the application launch.


The last, but not least, difficulty is the multi-version distribution of application components over the organization's servers. Distributing different versions of components (e.g. DLLs) can cause many version collisions. To solve this we need to prevent, or at least reduce, the use of client distributions, especially when the organization has affiliates in different geographical locations.


Major Objectives of the Enterprise systems

§   One organizational system standardizing and mechanizing the different systems and applications.

§   Different systems addressing the organizational needs.

§   Efficient and relatively prompt systems.

§   "Smooth and silent" wandering of users between systems.

§   Ability for users to retrieve data from other systems.

§   Visual uniformity between the different environments. (GUI standardization)

§   Exteriorization of visual interfaces, data and functionalities.

§   Stabilized systems.

§   High tolerance in critical conditions.

§   Compartmentalized systems and permissions support. (e.g. single sign on)

§   Server based applications that support affiliated communication lines.

§   Proper correlation and version-control.

§   Controlling the distribution of application components. (e.g. multi-version DLLs)





The Information Viewpoint


Getting back to our illustrative organization, let's assume it is compounded from different information systems engaging a few major databases: a major part of the organizational data is stored on a Mainframe (MF) system and an AS/400™ server with Adabas™ and DB2™; the open systems in the organization use relational databases from Oracle™ and MS-SQL™; and the geographical data (GIS) is stored in a designated ESRI™ system. Our organization is also considering buying audio and video database technologies.


An ideal system enables its applications to share and interchange information with one another. Each application can retain its own data differently from another, but the interchange should be transparent and subject to the appropriate security policy. For instance, our organization might demand data interchange between Oracle and Adabas. ODP emphasizes one good solution for regulated data interchange: keep a consistent interface which receives and transfers data between the different application layers; this way data can be shared between applications within the system. To supply a consistent interface, it is important that all classes which transact data implement a consistent data-interchange function (i.e. an interface).
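The consistent-interface rule can be sketched in a language-neutral way. The article targets .NET, but the pattern is the same in any language; the sketch below uses Python, and all names (DataInterchange, OracleAdapter, AdabasAdapter) are illustrative assumptions rather than any real product API:

```python
from abc import ABC, abstractmethod

class DataInterchange(ABC):
    """Hypothetical consistent interface every data-transacting class implements."""
    @abstractmethod
    def export_data(self, entity: str) -> dict: ...
    @abstractmethod
    def import_data(self, entity: str, payload: dict) -> None: ...

class OracleAdapter(DataInterchange):        # stand-in for an Oracle-backed store
    def __init__(self):
        self._store = {}
    def export_data(self, entity):
        return self._store.get(entity, {})
    def import_data(self, entity, payload):
        self._store[entity] = payload

class AdabasAdapter(DataInterchange):        # stand-in for an Adabas-backed store
    def __init__(self):
        self._store = {}
    def export_data(self, entity):
        return self._store.get(entity, {})
    def import_data(self, entity, payload):
        self._store[entity] = payload

def transfer(src: DataInterchange, dst: DataInterchange, entity: str) -> None:
    # Because both sides honor the same interface, the transfer code is generic
    # and never needs to know which database sits behind each adapter.
    dst.import_data(entity, src.export_data(entity))
```

The point is that `transfer` works for any pair of stores, which is exactly the transparency ODP asks for.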


When a process accesses a database and initiates a transaction, two rubrics must be followed. First, keep transactions as light as possible: heavy transactions tend to lock the accessed data against update commands from other applications, or from other users of the same application, and this limitation is a major performance reducer (an undesired access queue). Second, keep the transaction inside the database for applications that handle data from one designated database (single-DB applications). If an application needs to handle a transaction spanning several different databases, an XA protocol should be used to manage the transaction.


In terms of performance, accessing data in relational, open-system databases is preferred over accessing data in legacy databases. Moving fractions of data that change randomly, or at set intervals, into a relational database reduces the time needed to retrieve that data.


When accumulating information, permissions and compartmentalization should be regulated and implemented. Users may be permitted to view, insert, update and delete application data, but not every user is allowed to do all of these. Furthermore, not all users are allowed to see all of the data and information; some users should be compartmentalized from parts of the information in the database, based on ethics, organizational considerations, etc. To fulfill permissions and compartmentalization, every application must implement a mechanism that enables management to set and update permissions for the different organizational users, in addition to compartmentalization enforcement.
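A minimal sketch of such a mechanism, combining per-user action permissions with compartment-based row filtering; the class and field names (AccessControl, "compartment") are assumptions for illustration, not a prescribed design:

```python
class AccessControl:
    """Hypothetical permission + compartmentalization mechanism."""
    def __init__(self):
        self._perms = {}        # user -> set of allowed actions (view, update, ...)
        self._clearance = {}    # user -> set of compartments the user may see
    def grant(self, user, actions, compartments):
        self._perms[user] = set(actions)
        self._clearance[user] = set(compartments)
    def can(self, user, action):
        # Permission check: which commands may this user operate?
        return action in self._perms.get(user, set())
    def visible(self, user, rows):
        # Compartmentalization: filter out rows the user is not cleared for.
        seen = self._clearance.get(user, set())
        return [r for r in rows if r["compartment"] in seen]
```

Management updates `grant` as the organizational hierarchy or security policy changes, while every application funnels its checks through `can` and `visible`.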





The Computational Viewpoint


To supply the desired requirement of integrative systems, development must be based on component orientation; that is, the application's building blocks should be components, not classes. Although every component is built from different classes designed under Object Oriented Programming (OOP), it is the components which later become the application and, together with other components, realize the entire system.






As mentioned, every system is composed of components, and every component is responsible for maintaining one of the application's entities. Each application entity should be composed of three different assemblies which form the exteriorized interface and functionality of the application entity:



  1. Operational Interface: This interface enables functional integration with other systems that are part of the SOS and are written in the same development environment as the interfacing application. It should contain all the common functionalities that need to be implemented by any system in order to enable other systems to interact with the system's data and visualization. Usually this interface includes abstract methods like IsEntityExist, GetEntity, GetEntries, ShowEntry and so forth. The interface should be based on the development tool's mechanism for connecting assemblies across processes.


  2. External Interface: This interface serves as the functional integration point for applications that are not developed with the same development tool or are not part of the SOS (e.g. a J2EE application interfacing to a .NET application). It should be based on SOAP and in fact calls the Operational Interface to actually engage the desired function (i.e. an interface gateway).


  3. The Application Assemblies: contain the assemblies that handle the application's data, logic and GUI.
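The Operational Interface idea can be sketched as an abstract contract that every SOS member implements. The method names (IsEntityExist, GetEntity, GetEntries) come from the article; the Python rendering and the CustomerSystem example are illustrative assumptions:

```python
from abc import ABC, abstractmethod

class OperationalInterface(ABC):
    """Common contract every SOS member exposes to sibling systems."""
    @abstractmethod
    def IsEntityExist(self, entity_id: str) -> bool: ...
    @abstractmethod
    def GetEntity(self, entity_id: str) -> dict: ...
    @abstractmethod
    def GetEntries(self) -> list: ...

class CustomerSystem(OperationalInterface):
    """Hypothetical member system; its storage details stay private."""
    def __init__(self, records: dict):
        self._records = records
    def IsEntityExist(self, entity_id):
        return entity_id in self._records
    def GetEntity(self, entity_id):
        return self._records[entity_id]
    def GetEntries(self):
        return list(self._records)
```

An External Interface would then be a thin SOAP-facing gateway that forwards each call to this contract, so foreign platforms never touch the internals either.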



Our approach will always be to support component-oriented development and to enable more comfortable maintenance of dispersed applications; to achieve this goal we need to work according to the N-Tier model.


The Data Tier:

This tier is built from all the classes which handle the Database Management System (DBMS). It is the only tier with a direct connection between the database and the application. The classes within this tier include DB connection methods and SQL queries (in case we handle a SQL-based DB) which establish the SQL insert, query and update procedures; these classes also return a data replication of the desired database range to be used by the application. A database replication can be structured arrays, collections or XML with the query results.
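A sketch of such a data-tier class, assuming SQLite as a stand-in DBMS and a hypothetical `customers` table; the key point is that callers receive a detached replication (plain structures), never a live database handle:

```python
import sqlite3  # stand-in for whatever DBMS the organization runs

class CustomerDataTier:
    """Only this tier talks to the database; it hands back plain replications."""
    def __init__(self, conn):
        self._conn = conn
    def insert(self, cid, name):
        self._conn.execute("INSERT INTO customers VALUES (?, ?)", (cid, name))
    def fetch_all(self):
        # Return a detached replication (list of dicts), not a live cursor,
        # so upper tiers never hold database resources.
        rows = self._conn.execute("SELECT id, name FROM customers").fetchall()
        return [{"id": r[0], "name": r[1]} for r in rows]
```

The business and presentation tiers consume the returned list and stay entirely ignorant of the SQL underneath.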





The Business Logic Tier:

The Business Logic Tier includes the logical functions and procedures of the application. Complicated calculations, validity checks, engaging different data sources for a result, and more, sit in the Business Logic Tier. Some refer to this tier as the brain of every application.






The Enterprise Logic Tier:

This is the system's commonality tier; logical procedures and functionalities relevant to the organization's logic are engaged here. This tier needs thorough design when a system-of-systems is desired. For example, this tier will bear the user-permission classes.


The Presentation Tier:

This tier will include the HTML pages, ASPs, Windows Forms and all visualization components relevant to the GUI. Notice that when designing a web application we occasionally need client-side scripts to gain performance (avoiding server calls and events); in these cases the client-side code is considered an integral part of the presentation tier.



Components Connectivity:

An interface assembly with a minimized number of calls, many parameters and no state (stateless) is preferred over an interface assembly with many calls, fewer parameters and state management.

This is for two major reasons:


  1. An interface assembly can be distributed over different computers or registered in COM+ as a server application; in this case every call to an object forces an out-of-process access (network, DCOM), which is why a minimized number of calls is preferred.
  2. Managing an interface assembly with state is less efficient than managing stateless components. State-managed components commit us to implement data preservation between calls and events, and this preservation hurts performance.


The preference for "thick and quiet" over "thin and chatty" refers to assembly interfaces only. Assemblies are built from a number of classes, and classes can, and even should, keep state together with functionality, as conservative Object Oriented Programming (OOP) defines.
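The chatty-versus-chunky distinction can be made concrete. Both classes below are hypothetical illustrations: the chatty one needs three round trips and server-side state per client, while the chunky one carries everything in a single stateless call:

```python
class ChattyOrder:
    """Anti-pattern sketch: state held between calls, each setter a round trip."""
    def __init__(self):
        self.customer = None
        self.item = None
    def set_customer(self, customer):   # round trip 1
        self.customer = customer
    def set_item(self, item):           # round trip 2
        self.item = item
    def submit(self):                   # round trip 3
        return (self.customer, self.item)

class ChunkyOrderService:
    """Preferred sketch: stateless, all parameters in one call."""
    def submit_order(self, customer, item):
        return (customer, item)
```

When each call crosses a process or network boundary (DCOM, Remoting), the chunky form pays the marshalling cost once instead of three times and forces the server to remember nothing.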

Component-oriented programming helps a lot in building distributed applications.

The organizational interfaces are implemented by several classes in the applications that hold the application entities. This implementation is based on several programming approaches which do not correspond to the pure OO approach. These approaches provide faster, more reliable systems built from collections of assemblies whose physical location isn't known in advance, or can change during the application's lifetime.


Relationships and connections between assemblies and classes should be based on interfaces implemented within the component classes. Interface implementation has two main benefits for the development process and application architecture. First, relying on interfaces enables programmers to develop code that interoperates with other programmers' code even before that code has been written. Second, interfaces ensure that programmers won't change their class interface as they program; changing a class interface during development causes many integration problems. Besides those reasons, interfaces are an excellent tool for designing and building applications: they provide the same polymorphism benefits as OO, and let us declare the application's behavior and then decide which class or classes will implement which interface or interfaces.


At the end of the development stage all assemblies will be compiled into DLLs, which will be distributed over the organization's machines. .NET brings two methods of assembling classes into DLLs. The first method binds the classes according to a process or an entity within the system (this method loads the DLL which handles the process and uses it); the second assembles classes by layers, so a call to a class within the DLL loads the DLL into memory, and every subsequent call is directed to the DLL's memory address. There is no clear preference for either method, but whichever is taken, it is important to avoid creating heavy DLLs (keep them under roughly 200KB); heavy DLLs will cause memory overloading and harm performance.

Every class should handle runtime exceptions. Part of the application blocks should be a collection of application exceptions for the applicative errors that might occur at runtime. The application should handle every exception first by writing it to a log, and then by applying application-specific behavior (raising the exception to a higher level, or handling it and continuing the application flow).
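The log-first-then-decide rule can be sketched as follows; InventoryError and reserve_stock are hypothetical examples of an applicative exception and the code that raises it:

```python
import logging

log = logging.getLogger("app")

class InventoryError(Exception):
    """Applicative exception from the shared exception collection."""

def reserve_stock(qty, available):
    try:
        if qty > available:
            raise InventoryError(f"requested {qty}, only {available} left")
        return available - qty
    except InventoryError:
        log.error("reservation failed", exc_info=True)  # log first...
        raise                                           # ...then escalate to the caller
```

A caller that can recover (e.g. by offering a smaller quantity) catches the exception and continues the flow; one that cannot lets it propagate, already logged.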


Developing a system-of-systems requires a set of infrastructures built prior to the systems-development phase. These infrastructures supply a set of common abilities that application programmers should use to shorten the development phase and eliminate duplicate code. The infrastructure is a hierarchical inheritance tree of classes. The base class serves as the root for every class in the enterprise and supplies base methods and properties. As we climb the inheritance tree, the classes become more and more specific (classes for the data layer, business logic, visualization and so forth). Insisting that every class inherit from the infrastructure classes not only supplies all the OO benefits but also supplies a single point of infrastructure intervention.
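A minimal sketch of such an inheritance tree; all class names are assumptions. Because every enterprise class descends from the root, a cross-cutting concern like auditing can be changed in exactly one place:

```python
class EnterpriseBase:
    """Root of the infrastructure inheritance tree."""
    def audit(self, action):
        # Single interception point: change auditing here, affect every class.
        return f"{type(self).__name__}: {action}"

class DataLayerBase(EnterpriseBase):
    """More specific infrastructure level for the data tier."""
    def trace_sql(self, sql):
        return self.audit(f"SQL {sql}")

class CustomerRepository(DataLayerBase):
    """Application-level class; inherits infrastructure for free."""
    def load(self, cid):
        return self.trace_sql(f"SELECT * FROM customers WHERE id={cid}")
```

Swapping the `audit` implementation (say, to write to a central log server) requires touching only `EnterpriseBase`, which is the "one point of infrastructure intervention" the article describes.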


The Engineering Viewpoint

The common and optimal configuration for an enterprise system is a web application based on tiers. The Presentation Tier will be based on HTML pages that include client-side scripts; the rest of the tiers (logic and data) will sit on a server with a connection to the database server.


Keeping the visualization layer with the business and data layers on the same machine yields the best application performance. But as the application consumes more and more resources (CPU, memory) and the applications or assemblies start competing for resources, it is better to separate the application across physical layers. This kind of change usually takes place when users are already running against the application; therefore the application should be built so that moving logical layers between physical layers won't cause any changes in the application code. To be more specific, every assembly should support local or remote activation without recompiling the assembly or any of its dependent assemblies.
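One way to sketch configuration-driven activation: the deployment configuration, not the code, decides whether a component is created locally or reached through a remote proxy. Everything here (LocalCatalog, RemoteCatalogProxy, the "catalog" config key) is a hypothetical illustration of the idea, not any .NET Remoting API:

```python
class LocalCatalog:
    """In-process implementation."""
    def lookup(self, key):
        return f"local:{key}"

class RemoteCatalogProxy:
    """Stand-in for a remoting proxy; same interface, different transport."""
    def lookup(self, key):
        return f"remote:{key}"

def activate_catalog(config):
    # Deployment configuration decides local vs. remote activation;
    # callers depend only on the shared lookup() interface.
    if config.get("catalog") == "remote":
        return RemoteCatalogProxy()
    return LocalCatalog()
```

Moving the catalog to another machine then means editing one configuration entry, with no recompilation of the callers.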


To make the application more scalable, available and adaptable, the application components need to be built in a way that enables them to work in clusters. Such clusters should contain a dynamic number of components to which incoming requests can be routed. Access to a cluster should go through a façade that handles requests for the cluster and activates components within it.


An SOS is actually a collection of systems, created by different development teams, formed into a single application. The availability of the global application is crucial to enterprise operations, so the robustness of the SOS is one of our concerns. To keep the SOS robust, every application must run in its own memory space, which can be shut down if the application or a component fails and starts to impact other applications. Bearing this in mind, applications should take into account that a call to another application can return an error due to that application's temporary unavailability, and handle this situation.
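Handling a sibling system's temporary unavailability can be sketched as retry-then-degrade; the exception type and helper below are illustrative assumptions, not part of any framework:

```python
import time

class TemporarilyUnavailable(Exception):
    """Raised when a sibling SOS application cannot be reached right now."""

def call_with_retry(remote_call, attempts=3, fallback=None, delay=0.0):
    # Retry a few times, then degrade gracefully instead of letting one
    # unavailable daughter system drag the whole SOS down (the domino principle).
    for _ in range(attempts):
        try:
            return remote_call()
        except TemporarilyUnavailable:
            time.sleep(delay)
    return fallback
```

The `fallback` might be cached data or a "service unavailable" marker the caller knows how to render; the essential point is that the calling application keeps running.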


System stability is one of the important goals of a robust system; most organizations will disperse the application components over several servers and implement a load-balancing component which handles and routes the system's requests.

Application development must consider this dispersal over different servers; the application should transparently handle requests arriving at different servers.


System components that aren't part of the application flow, usually batch processes, should be activated from a server other than the application servers. These components should also be activated in an asynchronous manner, to reduce the hit on application performance and robustness.





The Technological Viewpoint


To implement connectivity between systems with minimum effort, and to enable interaction with other applications whether based on the web or on the operating system, SOS systems will be developed as intranet applications using Microsoft tools. Using the web as the application environment removes the need to deploy SOS applications to each user machine, regardless of the user's geographic location. Using web pages, the application easily combines text, video, audio and other data on one page. Moreover, HTTP enables using existing HTTP applications in the SOS and combining the SOS into existing applications. The SOS applications will be developed as intranet applications based on IIS 5 or higher as the web server and Internet Explorer 5.0 or above as the browser.


The latest Microsoft development tool for intranet applications is .NET. The .NET development environment and runtime services provide a technological jump compared to previous MS development tools. New systems that are part of the SOS will be developed using .NET.


If and when a connection should be established between two .NET processes, in order to share data or use functionality from one of them, Remoting should be used. Remoting over a TCP channel yields the best performance, but if the connection crosses a firewall or needs security, Remoting over an HTTP channel is preferred. Creating remoting types using interfaces is good practice that avoids the need to deploy the remoting type to the client machine. If the process needs to connect to a process developed with another development tool, a web service is the right choice. In addition to web services there are other methods for calling non-.NET processes. The first uses Remoting with a third-party product, for example to call Java beans; this solution provides better performance than web services. The last option is to use one of the queue products, usually MQSeries since it supports more operating systems, to communicate between processes. This method is best practice when the two applications aren't on the same timeline.


As stated before, interfaces are the glue between software components, whether they are written in .NET, Java or any other language. Thus every application entity must implement a standard interface, set by the organization, for every external connection, whether it relies on Remoting or on a web service. This interface gives every programmer one known entry point into the system.

Web pages are one aspect of SOS connectivity. A system can call another system's page to show relevant data on the calling system's page, or transfer the SOS path to another application's page. Pages that are part of the connectivity mechanism should be built so that a user can navigate to a page without going through the page's regular call sequence, and can continue working in both the navigated-to application and the original one at the same time. Visualization might be needed by another application as a fully functional page or as a visual portion to be embedded in a page. If an application needs to provide visualization as a component, the visualization component should be built as a custom control.


Calls to existing enterprise software packages will be based on COM. Due to the bad impact of COM on performance, especially STA COM components (VB), applications should minimize COM usage as much as possible. Calling a COM object should be done through a proxy assembly that can be created using TlbImp.exe. To minimize the impact on the system when the COM component is eventually rewritten in .NET, the application should wrap the access to the proxy, so that any needed adjustments can be made in a single place. To enable using a .NET component from COM components, the component should expose an interface (C# only), and Regasm.exe should be used to create a TLB for the .NET component and register it.


When dealing with different data types from different sources there is an occasional need to reformat input data into the application's format, and to reformat it again when the data leaves the application. This task can be done easily with MS BizTalk Server. BizTalk can automatically locate files in a specific directory, reformat them using .NET/COM objects and apply operations on the reformatted data. The classic scenario for BizTalk is an application that receives files in different formats (EDI, XML with different schemas), reformats them into the application's XML schema and inserts the data into the database. In addition to its reformatting abilities, BizTalk can be used to create the business-rules layer of the application. BizTalk defines and runs business rules and organizational workflows that can be activated from the application. BizTalk enables agile development, since end users can constantly improve the application's business logic without changing existing code.


Queue servers are needed mainly to let two applications transfer data between them when they don't "live" on the same timeline. Microsoft's queue server is called MSMQ, and applications should use it for any of the reasons specified here. If an application needs to ensure that a task will be performed, or data delivered, regardless of software or hardware failures, MSMQ is a good candidate: it stores messages and guarantees they stay in the queue until an application removes them. MSMQ also provides the asynchronous behavior that certain applications need. An application that receives a request requiring long processing time can use MSMQ to store the request and return an immediate response to the client; another application component retrieves the data from MSMQ and processes it while updating the application about the processing status. MSMQ can also be used to achieve parallel computing: an application writes messages to a queue, and other processes on a dynamic number of machines read the messages and process them simultaneously. This parallel-work paradigm drastically improves application performance and throughput.
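The store-and-respond pattern described above can be sketched with the System.Messaging API. The queue path, message body and class names are illustrative assumptions.

```csharp
using System;
using System.Messaging;

// Sketch: the web tier enqueues a long-running request and returns
// immediately; a worker process drains the queue later.
public class RequestQueue
{
    private const string Path = @".\private$\requests";  // illustrative queue

    public static void Enqueue(string requestXml)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);

        using (MessageQueue queue = new MessageQueue(Path))
        {
            queue.Send(requestXml, "pending request");
        }
    }

    public static string Dequeue()
    {
        using (MessageQueue queue = new MessageQueue(Path))
        {
            // The formatter tells MSMQ how to deserialize the body.
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            Message message = queue.Receive();
            return (string)message.Body;
        }
    }
}
```

The same pattern, with many worker machines reading one queue, gives the parallel-computing scenario mentioned above.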

MSMQ supplies a good solution, but only for applications running on MS operating systems. MQSeries, on the contrary, can be used from almost any operating system, and is therefore widespread. If an application needs queuing abilities across operating systems, MQSeries should be the choice. MS is aware of MQSeries' popularity, and one of the Host Integration Server components is a bridge that transfers messages from MSMQ to MQSeries and vice versa.


Every application should run in a separate memory area, both so that the system can shut it down if it hurts other applications (by consuming CPU or memory) and to eliminate unavailability of other applications. ASP.NET creates an AppDomain for every web application, but MS doesn't let administrators or programmers stop one AppDomain and restart it; the only way is to restart the aspnet_wp process, which restarts all the web applications. The easiest way to achieve this isolation level is to register the application's interface assemblies as a COM+ server application. When a component runs under COM+ as a server application, the administrator can restart the process that hosts it without harming other components, and can watch COM+ component statistics to track possible problems. The major drawback of a COM+ server application is a performance hit of about 50%, due to the use of Remoting and DCOM. Note that not every assembly needs to be registered as a COM+ application, but there should never be a situation in which an ASPX page calls an assembly running inside the web process. If the web server is IIS 6, an application pool can be assigned to each web application and used to restart the application instead of COM+; note, however, that IIS 6 won't show any statistics about the application's components (as COM+ does).
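Registering an interface assembly as a COM+ server application is mostly declarative. A minimal sketch, with illustrative names (the key-file and application names are assumptions):

```csharp
using System.EnterpriseServices;
using System.Reflection;

// Server activation hosts the components in their own dllhost.exe
// process, which an administrator can recycle independently.
[assembly: ApplicationName("Orders.Interface")]
[assembly: ApplicationActivation(ActivationOption.Server)]
[assembly: AssemblyKeyFile("orders.snk")]  // COM+ assemblies must be strong-named

public class OrderFacade : ServicedComponent
{
    public string GetStatus(int orderId)
    {
        return "open";  // placeholder logic
    }
}
```

Registration happens automatically on first use, or explicitly with RegSvcs.exe.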


COM+ can also be utilized to supply transactions over several different databases. An application that works against several databases might need an atomic operation across all of them. COM+ meets this demand by using the XA protocol against databases that support XA. Using just this COM+ ability (without the administrative features) can be done by registering the assembly as a library application. Library applications have a negligible effect on application performance, since they need neither Remoting nor DCOM. Note that maintaining a transaction with XA is slower than maintaining it inside the database, so use COM+ transactions only for transactions that span several databases.
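A library-activated, transactional component can be sketched as follows; the class and method names are illustrative assumptions.

```csharp
using System.EnterpriseServices;

// Library activation: the component runs in the caller's process,
// so no Remoting/DCOM cost is paid.
[assembly: ApplicationActivation(ActivationOption.Library)]

[Transaction(TransactionOption.Required)]
public class TransferService : ServicedComponent
{
    [AutoComplete]  // commit on normal return, abort if an exception escapes
    public void Transfer(decimal amount)
    {
        // Write to database A, then to database B; both connections
        // enlist in the same distributed transaction, so either both
        // writes commit or both roll back.
    }
}
```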


An application that uses a product that enforces a limit on the number of licenses can use COM+ to enforce licensing, to prevent errors caused by exhausting the available licenses, and to serve more users than there are licenses. COM+ object pooling lets the administrator set the minimum and maximum number of objects in the pool; if a creation request exceeds the pool size, the request is queued until an object in the pool is freed, which prevents license-exhaustion errors. To serve more users than there are licenses, COM+ Just-In-Time Activation (JITA) can be used. JITA connects an object to its caller only when the caller actually invokes one of the object's methods, and frees the object to serve other requests when the method finishes. This architecture lets several clients share a single object that holds one license. To use this option efficiently, the COM+ application should be registered as a server application, so the limitation applies to all the applications that call the component and not just to one AppDomain (as would be the case if the COM+ application were a library).
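Pooling and JITA are both expressed as attributes on the serviced component. A sketch, with illustrative pool sizes and names:

```csharp
using System.EnterpriseServices;

// MaxPoolSize would be set to the number of purchased licenses;
// JITA detaches the instance from its caller between method calls,
// so one licensed object can serve many clients.
[ObjectPooling(MinPoolSize = 2, MaxPoolSize = 10, CreationTimeout = 30000)]
[JustInTimeActivation(true)]
public class LicensedGateway : ServicedComponent
{
    [AutoComplete]  // returns the instance to the pool when the call ends
    public void Execute(string request)
    {
        // call into the license-limited product here
    }
}
```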

One of the problematic issues that open distributed systems need to deal with is advertising and delivering data and/or notifications, when a certain situation is met in a given object, to one or more objects whose identity isn't known in advance. COM+ events supply such a mechanism with minimal programming effort. COM+ lets a class register as an event class, and COM+ components can subscribe to that event. A publisher, which is an object at run time, can publish (raise) the event; as a result, COM+ calls all the components that registered as subscribers. This paradigm fits situations in which an application entity or batch process needs to notify other application entities, no matter which application they belong to. Remember that applications will be added from time to time, so we can't know in advance who the notification subscribers are.
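The loosely coupled events mechanism can be sketched as below; the event interface and class names are illustrative assumptions, and subscriber registration itself is done administratively in the COM+ catalog.

```csharp
using System;
using System.EnterpriseServices;

// The event contract shared by publisher and subscribers.
public interface IStockEvents
{
    void PriceChanged(string symbol, decimal price);
}

// The event class only declares the contract; COM+ supplies the
// implementation that fans the call out to every subscriber.
[EventClass]
public class StockEvents : ServicedComponent, IStockEvents
{
    public void PriceChanged(string symbol, decimal price)
    {
        throw new NotImplementedException();  // never called directly
    }
}

// A subscriber implements the same interface and is registered as a
// subscription on the event class in the COM+ catalog.
public class PriceLogger : ServicedComponent, IStockEvents
{
    public void PriceChanged(string symbol, decimal price) { /* log it */ }
}
```

A publisher simply creates `StockEvents` and calls `PriceChanged`; COM+ invokes every registered subscriber.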


COM+ is also equipped with a feature called Queued Components. Queued Components enable programmers to send a message holding a component name, a method and its parameters to MSMQ; COM+ then activates the desired method of the component. Queued Components can be used in asynchronous, disconnected scenarios.
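A queued component is declared as follows; names are illustrative assumptions. Calls on the recorded proxy are serialized to MSMQ and replayed by COM+ when the server application runs.

```csharp
using System.EnterpriseServices;

[assembly: ApplicationActivation(ActivationOption.Server)]
[assembly: ApplicationQueuing(Enabled = true, QueueListenerEnabled = true)]

public interface IReportRequests
{
    void Generate(int reportId);  // must return void: the call is one-way
}

[InterfaceQueuing(Interface = "IReportRequests")]
public class ReportRequests : ServicedComponent, IReportRequests
{
    public void Generate(int reportId) { /* long-running work */ }
}
```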


COM+ can be used as a library application to handle system permissions. COM+ lets you define roles of users and then grant roles access to a component, or to each component method. Role-based security can also be applied to ASPX pages. Storing and managing the users and roles should be done with Active Directory; using Active Directory is also the optimal way to apply a Single Sign-On mechanism in the organization.
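Role-based checks are partly declarative and partly programmatic. A sketch with illustrative role and class names:

```csharp
using System;
using System.EnterpriseServices;

// Enable access checks down to the component level.
[assembly: ApplicationAccessControl(
    AccessChecksLevel = AccessChecksLevelOption.ApplicationComponent)]

[SecurityRole("Clerk")]
[SecurityRole("Manager")]
public class PayrollService : ServicedComponent
{
    public void ApproveSalary(int employeeId)
    {
        // Restrict this particular method to the Manager role.
        if (!SecurityCallContext.CurrentCall.IsCallerInRole("Manager"))
            throw new UnauthorizedAccessException();
        // ...
    }
}
```

Role membership itself is administered in the COM+ catalog, where roles map to Active Directory users and groups.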


When working with COM+ server applications it is better to minimize calls to, and between, COM+ server application objects, because of the performance cost of those calls. Calls between two COM+ server applications can be reduced by gathering assemblies with numerous calls between them into the same COM+ application. For the same reason, calls between COM+ server applications and ASPX pages should be minimized as well; the "chunky over chatty" principle should be applied to calls between pages and COM+ server applications.


To implement SOS visualization so that the user feels he is working with one application, the organization should set rules that define the visual interface (button locations by functionality, colors, fonts, etc.). Every page must implement those rules, and enterprise templates should be developed to help programmers. In addition to enterprise templates, a base page class that implements the visual aspects should be created and used by all pages as their base class. Generally speaking, it is good practice for every project to create its own base page that inherits from the enterprise base page. Note that if the base class holds no state, it is better to create a class with static functions and use them without inheritance.
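The base-page hierarchy described above can be sketched as follows; the class names and the styling hook are hypothetical.

```csharp
using System;
using System.Web.UI;

// Enterprise-wide base page: every page inherits from it (directly or
// through a project-level base) so the corporate look and feel is
// applied in one place.
public class EnterpriseBasePage : Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        ApplyCorporateStyle();
    }

    protected virtual void ApplyCorporateStyle()
    {
        // set fonts, colors and standard button placement according to
        // the organization's visual rules
    }
}

// A project refines the enterprise defaults for its own needs.
public class OrdersBasePage : EnterpriseBasePage
{
    protected override void ApplyCorporateStyle()
    {
        base.ApplyCorporateStyle();
        // project-specific adjustments
    }
}
```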


When there is a requirement to create sites that share data between an organizational unit's members, SharePoint can be the perfect solution. SharePoint can host visualization components created with ASP.NET, and supplies sharing of files, web pages and people, news, searching, indexing and personalization.


Visualization components that might be used in more than one page should be created once and reused. ASP.NET doesn't supply visual inheritance yet, but there are two ways to meet this requirement. The easiest is user controls: user controls are like regular pages that can be embedded in regular pages, but their content isn't visible at design time and they can't be shared among web applications. Custom controls are more time-consuming to build, but their content is visible at design time and they can be shared between applications.


As a rule, applications will be based on web pages and HTML. In irregular cases an application will use a COM component that is downloaded to the client; deployment of such COM components will be done only through the OBJECT tag. If an application needs visualization abilities that can't be achieved with HTML or VML, it might use a rich client. Rich clients are based on Windows user controls and/or Windows custom controls embedded in the web page to supply a superior GUI. Note that rich clients demand the CLR on the client machines. In most cases VML supplies enough drawing features for use inside regular HTML pages, and it is always better not to download assemblies to the client.


To reduce network load, applications should minimize the use of ViewState as much as possible. ViewState keeps the state of the page's controls before the page is rendered, and is used to recreate the controls with their state when the page posts back; letting too many controls support ViewState can end in a heavy page. IE supports unzipping zipped content dynamically. That feature can be used to zip the page content on the server side and send it to the browser, which unzips it and renders the HTML. The process may look complicated, but it has an enormous influence on network load. Applications should use gzip compression on the page output (this can be achieved easily with an HTTP compression filter or with the internal IIS settings).


Since all the browsers will be IE, applications can take advantage of this fact to optimize performance and network load. ASP.NET ships with a mechanism called server events, which calls back to the server every time certain client-side events happen. The server recreates the page and runs the programmer's server-side code to handle the event; the output is sent back to the client, and an MS-built mechanism called SmartNavigation is responsible for making it look to the user as if nothing really happened, just new data appearing on the page. The problems with this mechanism are: 1) the page is recreated and sent to the client even if only one text box changed; 2) SmartNavigation is buggy; 3) server events use ViewState, which loads the network. By using XmlHttpRequest, the application can retrieve from the server just the data it needs and, with JavaScript and DHTML, update the right control on the page.


The regular session stores session data in the application process, which can cause memory problems, or data loss in a load-balancing environment. In addition to the regular session, .NET supplies other ways to handle state retention, based on a dedicated host or a database. The main problem with these session and state mechanisms is performance, and the problem grows when an application serves fifty requests simultaneously. Another way to cope is to send the session data to the client (as a form field or a cookie) and read it on every server request; this performs better, but with heavy requests and data it can overload the server and the network. It is recommended that data entities of 10KB or less be sent to the client, and that entities greater than 10KB be retained with the state and session mechanisms of the CLR. In any case, it is recommended to reduce the amount of state data in a web application.
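The out-of-process session options are configured in web.config. An illustrative fragment (host name and timeout are assumptions):

```xml
<!-- Keep session state out of the worker process so a web-farm node
     can be restarted without losing user state. -->
<configuration>
  <system.web>
    <!-- mode="StateServer" uses the dedicated state host;
         mode="SQLServer" persists sessions to a database at a
         further performance cost. -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=stateserver:42424"
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>
```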


An application can use several ways to access legacy data, usually stored on mainframe or AS/400 machines. The first way uses MQSeries, already discussed: an MF transaction can read from or write to a queue, and the SOS application can do the same; to enable pseudo-synchronous behavior, IBM supplies triggers that fire when a message enters the queue. The second option is a part of Microsoft Host Integration Server called COMTI. This option simply exposes COBOL, DB2, Natural or Adabas transactions as COM+ packages. The third option demands the installation of WebSphere on the MF; WebSphere enables the creation of web services that clients can call, which activate transactions to get data and return it to the calling client.


Access to relational databases will be done with ADO.NET. A native driver for the database is always a better option than the OLE-DB driver. When connecting to a database, the connection string should always be kept outside the application, to enable changing the target DB without recompiling the application.
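Keeping the connection string outside the application can be sketched as follows; the key name and connection string are illustrative assumptions.

```csharp
using System.Configuration;
using System.Data.SqlClient;

// web.config / app.config holds the string, e.g. (illustrative):
//   <appSettings>
//     <add key="MainDB" value="server=dbsrv;database=Orders;..." />
//   </appSettings>
public class Db
{
    public static SqlConnection Open()
    {
        // Read the string at run time; changing the target database
        // requires no recompilation.
        string connStr = ConfigurationSettings.AppSettings["MainDB"];
        SqlConnection conn = new SqlConnection(connStr);
        conn.Open();
        return conn;
    }
}
```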


ADO.NET supplies two ways to transfer data from the database to the client. The first is based on the DataSet. A DataSet is filled with data and metadata from the database and then disconnects from it, letting the programmer manipulate the DataSet's data and, optionally, update the database with the changes made. The DataSet is easy for programmers to use, but has some limitations. It is slower than the other option, because it stores both the data and the metadata it needs in order to manipulate the data and update the database. It also forces other layers that use the DataSet to know the data structure and types, which creates interdependency between the application layers, and it does not adapt well to changes in the selection and updating of data to and from the database.

The other way of retrieving data is the DataReader. A DataReader is a fast, forward-only cursor on the database that sends data to the client on demand; in fact, the DataSet uses a DataReader to retrieve its data. The DataReader yields better performance, but the programmer has a lot of work to do on the update side, since he needs to create the update mechanism himself. Usually this is achieved by creating classes that are responsible for getting the data, transferring it to other layers in data structures, and updating the underlying database. These classes are called the DAL, and writing them is usually time-consuming. To reduce programmer time, Microsoft shipped (in beta) ObjectSpaces, which creates the DAL classes for the programmer; there are also other commercial tools that create DAL classes. Because of the flexibility and performance advantages that the DataReader provides, applications should use it where possible.
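The two retrieval styles can be put side by side as follows. The connection string, table and column names are illustrative assumptions.

```csharp
using System.Collections;
using System.Data;
using System.Data.SqlClient;

public class CustomerData
{
    // DataSet: disconnected, easy to manipulate and bind, but heavier.
    public static DataSet LoadAsDataSet(string connStr)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT CustomerId, Name FROM Customers", conn);
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");  // opens and closes the connection
            return ds;
        }
    }

    // DataReader: forward-only and fast; the DAL class copies the rows
    // into its own structures, so other layers never see ADO.NET types.
    public static string[] LoadNames(string connStr)
    {
        ArrayList names = new ArrayList();
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT Name FROM Customers", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    names.Add(reader.GetString(0));
            }
        }
        return (string[])names.ToArray(typeof(string));
    }
}
```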


When developing classes within the data access tier it is recommended to combine several calls through one DataReader. The DataReader enables sending a group of SQL queries and receiving their data in one round trip; this feature reduces the number of requests to the database server and improves the object's performance. Normally, when working with a database, it is acceptable to perform the database queries as stored procedures. Stored procedures are compiled in the database server and executed optimally there, which improves total performance. It should be mentioned that stored procedures are meant for data queries, not for business-logic implementation.
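Combining several result sets in one round trip looks like this; the stored procedure (assumed to contain two SELECTs) and parameter names are illustrative.

```csharp
using System.Data;
using System.Data.SqlClient;

public class OrderReport
{
    public static void Load(SqlConnection conn)
    {
        // One call to a stored procedure that returns two result sets.
        SqlCommand cmd = new SqlCommand("GetOrderHeaderAndLines", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@OrderId", SqlDbType.Int).Value = 42;

        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { /* consume header rows */ }

            if (reader.NextResult())        // second SELECT in the procedure
                while (reader.Read()) { /* consume line rows */ }
        }
    }
}
```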


In the case of a class retrieving numerous identical sets of data, the data should be cached in order to save multiple retrievals. The organizational infrastructure should supply such a caching mechanism, so that every application does not have to implement its own.
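A read-through cache over the ASP.NET Cache object can be sketched as follows; the cache key, expiration and loader are illustrative assumptions.

```csharp
using System;
using System.Web;
using System.Web.Caching;

public class CountryList
{
    public static string[] Get()
    {
        Cache cache = HttpRuntime.Cache;
        string[] countries = (string[])cache["countries"];
        if (countries == null)
        {
            countries = LoadFromDatabase();  // hypothetical DB call
            // Cache for 30 minutes; subsequent calls skip the database.
            cache.Insert("countries", countries, null,
                         DateTime.Now.AddMinutes(30),
                         Cache.NoSlidingExpiration);
        }
        return countries;
    }

    private static string[] LoadFromDatabase()
    {
        return new string[] { "placeholder" };
    }
}
```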




