June 2011 - Posts

A year ago, we released SO-Aware as our first product at Tellago Studios. SO-Aware represented a new way to manage web services and all the related artifacts, like configuration, tests or monitoring data, in the Microsoft stack. It was based on the idea of using a lightweight SOA governance approach with a central repository exposed through RESTful services.

At that point, we thought the same idea could be extended to enterprise applications in general by providing a generic repository for many of the runtime or design-time artifacts generated during development, like configuration, application description or topology (a high-level view of the components that make up a system), logging information or binaries. It took us several months to give form to that idea and implement it as a product, but it is finally here, and I am very proud to announce its release today under the name "TeleSharp".

In a nutshell, TeleSharp provides the following features:

1. Configure your application topology in a central repository. Application topology in this context means that you can decompose your application and describe it in terms of components and how they interact with each other. For example, you can specify that the CRM system is made up of a couple of WCF services and an ASP.NET MVC front end.

2. Centralize configuration for your applications and components. You can import existing .NET configuration sections into the repository and associate them with the different components. In addition, environment overrides are supported for the configuration sections. We provide tooling and extensions in Visual Studio for managing all the configuration, and a set of PowerShell commands for automating the configuration deployment.

3. Remotely browse all the assemblies and types on your application servers from a web browser, using an interface similar to any of the existing .NET reflection tools. This way you can easily determine whether a server is running the correct version of your applications.

4. Centralize logging and exception management in the repository. You get different reports and a Pivot Viewer experience for browsing all the logging information generated by your applications. In addition, TeleSharp includes providers for pushing the logging information to the central repository using well-known frameworks like ELMAH, Log4Net, EntLib or even Windows ETW.


The central repository itself is implemented as a set of OData services that any application can easily consume using plain HTTP. You can read more details in this introductory post.
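Because the repository speaks OData, a client only needs to compose the standard OData query options into a URL and issue a GET. The sketch below shows the idea in JavaScript; the endpoint address and the "LogEntries" entity set are illustrative placeholders, not the actual TeleSharp names:

```javascript
// Sketch: composing an OData query URL for a hypothetical TeleSharp repository
// endpoint. The base address and the "LogEntries" entity set are illustrative.
function buildODataQuery(baseUrl, entitySet, options) {
  // OData system query options ($filter, $top, ...) are plain query-string parameters
  var pairs = [];
  for (var name in options) {
    pairs.push(name + '=' + encodeURIComponent(options[name]));
  }
  return baseUrl + '/' + entitySet + (pairs.length ? '?' + pairs.join('&') : '');
}

var url = buildODataQuery(
  'http://myserver/telesharp/repository.svc', // hypothetical endpoint
  'LogEntries',
  { '$filter': "Severity eq 'Error'", '$top': '10' });
// Any HTTP stack can then issue a GET against "url" and consume the Atom/JSON feed.
```

Any platform with an HTTP client can run the same query, which is the point of exposing the repository through OData rather than a proprietary API.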

If you think this product can be a good fit for your organization, you can request a trial version on the Tellago Studios website.

Posted by cibrax

I am happy to announce that one of the projects on which Microsoft and Tellago have been collaborating during the last few months has been released as part of wcf.codeplex.com. The primary aim of this "WCF Interop Bindings" project is to simplify the work needed to make WCF services interoperable with other service stacks like Oracle WebLogic, IBM WebSphere, Metro or Apache Axis.

The main goal behind web services has always been to enable communication between heterogeneous systems. However, we all know that achieving that level of interoperability in the context of web services and WS-* is really complicated, and requires good knowledge of the different message stacks to generate compatible messages at the wire level.

For that reason, the WCF team came up with this interesting idea of providing a set of pre-defined bindings with a limited set of settings that you can use out of the box for supporting the most common interop scenarios with the mentioned stacks using protocols like WS-Security, WS-SecureConversation or WS-ReliableMessaging.

Bindings provided out of the box in the current bits

The current bits provide a set of custom binding implementations for Oracle WebLogic, IBM WebSphere, Metro and Apache Axis. The name of each binding basically matches the message stack you want to interoperate with. For example, the following configuration illustrates how a service can be configured to be compatible at the wire level with Oracle WebLogic, using username over certificates as the security scenario.

<system.serviceModel>
  <services>
    <service name="Microsoft.ServiceModel.Interop.Samples.HelloWorldService" behaviorConfiguration="helloWorld">
      <!-- the contract name below is illustrative; use the contract defined in your service -->
      <endpoint address="Username_WebLogic" binding="webLogicBinding"
                bindingConfiguration="helloWorld_UsernameOverCertificate"
                contract="Microsoft.ServiceModel.Interop.Samples.IHelloWorld"/>
      <endpoint address="Mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
    </service>
  </services>
  <bindings>
    <webLogicBinding>
      <binding name="helloWorld_UsernameOverCertificate">
        <security mode="UserNameOverCertificate"/>
      </binding>
    </webLogicBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior name="helloWorld">
        <serviceMetadata httpGetEnabled="true"/>
        <serviceDebug httpHelpPageEnabled="true"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <extensions>
    <bindingExtensions>
      <add name="webLogicBinding" type="Microsoft.ServiceModel.Interop.WebLogic.Configuration.WebLogicBindingCollectionElement, Microsoft.ServiceModel.Interop, Version=, Culture=neutral, PublicKeyToken=4fc38efee625237e"/>
    </bindingExtensions>
  </extensions>
</system.serviceModel>

Visual Studio 2010 Integration

The binding implementations are not the only thing you get out of the box with the current bits. The WCF team also wanted to provide developers with a rich experience for configuring and creating services for one of the pre-defined interop scenarios. With that in mind, this project also integrates into Visual Studio 2010 by providing a set of project templates for creating WCF services for a specific scenario, and a configuration wizard for generating the binding configuration.


As you can see in the image above, two new project templates are automatically registered under the "WCF" category that you can use to create a new service configured with one of the provided interop bindings. You can use the "WCF Interoperable Service Library" template to create a new WCF service as part of a service library, or "WCF Interoperable Service Application" to create a new WCF service as part of a web application. The templates will automatically launch a configuration wizard for selecting the desired interop scenario, as shown in the images below.


In the wizard you can pick one of the existing interop scenarios, as well as specific settings for those scenarios, such as security or reliable messaging. At the end of the wizard, the configuration file associated with the WCF project is automatically updated with the chosen binding and settings.


Posted by cibrax

Web Sockets is a relatively new specification introduced as part of HTML 5 to support a full-duplex communication channel over HTTP in web browsers. This represents a great advance toward real-time and event-driven web applications. Before Web Sockets came onto the scene, the only available solutions for emulating real-time notifications in web applications were different variants of HTTP long polling. Real-time notifications in this context became particularly important for specific scenarios, such as reporting stock price updates, online gaming or news reports, to name a few.

All the HTTP polling variants are pretty similar in nature. They all try to emulate a bidirectional connection over HTTP by keeping client connections open for a period of time, until a notification becomes available and can be sent as part of the response, or the connection times out. However, these techniques usually require the use of two connections for streaming data to and from the client. Another common issue with this approach is that developers need to implement the server side carefully to make efficient use of server resources.
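The long-polling variant can be sketched as a loop that keeps exactly one request pending and re-issues it as soon as a response (a notification or an empty timeout response) arrives. In the simplified sketch below the actual HTTP call is abstracted behind a request callback, so the loop logic stands on its own; in a browser that callback would wrap XMLHttpRequest:

```javascript
// Sketch of the long-polling loop: the client keeps one request pending at all
// times; when the server answers (with data, or with nothing on timeout), the
// client processes the response and immediately opens the next request.
// "request" is any function taking a callback that is eventually invoked
// with (error, notification) -- notification is null when the poll timed out.
function longPoll(request, onNotification, shouldStop) {
  request(function (err, notification) {
    if (!err && notification !== null) {
      onNotification(notification);
    }
    if (!shouldStop()) {
      longPoll(request, onNotification, shouldStop); // re-issue immediately
    }
  });
}
```

The sketch also makes the cost visible: every notification (and every timeout) consumes a full HTTP request/response round trip, which is exactly the overhead Web Sockets removes.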

There is, however, an area where HTTP polling has proved to be very effective over the years, and that is pub/sub. In this area we can find simple uses of pub/sub over HTTP, like syndication feeds, as well as more complex solutions for business-to-business integration, but as you can see, they are not related to real-time notifications in a web browser at all.

Web Sockets, on the other hand, is going to be supported natively by any web browser compliant with HTML 5, removing the need to rely on different workarounds with Ajax and timers for emulating real-time notifications in the browser. As I said before, the specification is relatively new, so the support you find today in the major browsers is partial or incompatible in some cases. However, I am pretty sure Web Sockets will become the standard technology for pushing real-time data to web browsers in the upcoming years.

Web Sockets in Detail

The Web Socket interface definition according to the spec looks as follows,

[Constructor(in DOMString url)]
interface WebSocket {
readonly attribute DOMString URL;
// ready state
const unsigned short CONNECTING = 0;
const unsigned short OPEN = 1;
const unsigned short CLOSED = 2;
readonly attribute int readyState;
// networking
attribute EventListener onopen;
attribute EventListener onmessage;
attribute EventListener onclosed;
void postMessage(in DOMString data);
void disconnect();
};

As you can see, this API is very straightforward and simple to use. There is a constructor you can use to create a new Web Socket instance or connection, a callback for receiving messages (onmessage), and two additional methods for sending new messages (postMessage) and closing the socket instance (disconnect) respectively.

A Web Socket connection is established by upgrading from the HTTP protocol to the Web Socket protocol during an initial handshake between the client and the server, over the same underlying TCP/IP connection. Once the connection is established, the data can be sent back and forth between the client and the server in full-duplex mode. Here is how you create a new Web Socket connection in the browser,

var mySocket = new WebSocket("ws://weblogs.asp.net/cibrax");

The “ws” prefix indicates a Web Socket connection. There is also a “wss” prefix for secure connections. Once the connection has been opened, you can associate a handler with the “onmessage” event to start receiving messages,

mySocket.onmessage = function(evt) { alert( "Message Received:  "  +  evt.data); };

Or send messages with the “postMessage” method,

mySocket.postMessage("Hello World!!!!");

There are already a couple of .NET implementations of the server-side part required for pushing notifications, such as Nugget (not really a good name for an open source project, given the existing NuGet project from Microsoft) and Fleck. The WCF team is also working on an implementation as part of the WCF Web APIs framework, and you can see some announcements here.

Web Sockets in the Cloud

There is a particular implementation in the cloud that caught my attention in the last few months, and that is “Pusher”. “Pusher” is a service hosted in the cloud that provides all the server-side infrastructure for pushing notifications to the different clients (browsers running on all kinds of devices or personal computers), using Web Sockets when available, or Flash-based socket communication otherwise.


Your server code publishes notifications to a specific “Pusher” channel using a simple REST API, and they take care of broadcasting the notifications to all the subscribers of that channel. They provide a JavaScript API that you can use in a web page to subscribe to a channel, and the REST API for publishing messages on the server side. There are also different open source implementations that wrap the REST API in a simple object model; “PusherDotNet”, for example, is a .NET implementation in C#. The pricing model is also very easy to understand: you pay a flat rate that gives you access to a specific number of messages and connections per day.
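To give a feeling of the model, the sketch below shows the shape of the two sides: a server publishing an event to a channel through the REST API, and a page subscribing with the JavaScript API. Everything here is illustrative — the app id, key, channel and event names are placeholders, and the HMAC authentication parameters that Pusher requires on REST calls are omitted for brevity:

```javascript
// Illustrative sketch only: real Pusher REST calls must also carry HMAC
// authentication query parameters, which are omitted here.
function buildPublishUrl(appId, channel) {
  return 'http://api.pusherapp.com/apps/' + appId +
         '/channels/' + channel + '/events';
}

// Server side (conceptually): POST the event payload to the channel URL.
var publishUrl = buildPublishUrl('12345', 'order-notifications');

// Browser side, with the Pusher JavaScript library loaded (names are placeholders):
//   var pusher = new Pusher('my-app-key');
//   var channel = pusher.subscribe('order-notifications');
//   channel.bind('order-created', function (data) {
//     alert('New order: ' + data.orderId);
//   });
```

The interesting part of the design is that the publisher only ever talks plain HTTP to one URL, while Pusher handles the fan-out to however many browsers are bound to the channel.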

Posted by cibrax

Publish/subscribe in the cloud has become relatively important lately as an integration pattern for business-to-business scenarios between organizations. The major benefit of using a service hosted in the cloud as an intermediary is that publishers and subscribers don’t need to be publicly addressable, on the same network, or able to talk to each other directly. The cloud infrastructure allows this intermediary service to scale correctly as the number of publishers or subscribers increases, and also to act as a firewall brokering the communication (publishers and subscribers need explicit permissions to connect to, send to, or receive messages from the intermediary service).

This pattern can be used in workflow systems to relay events among distributed applications, to update data in business systems, or as a way to move data between data stores. For example, in an order processing application, notifications must be sent whenever a transaction occurs: an order is placed in a system, the order details are forwarded as a message to a payment processor service for approval, and finally, an order confirmation message is sent back to the system where the order was originally created.


This infrastructure typically supports the idea of “topics” or named logical channels. Subscribers will receive all the messages published to the topics to which they subscribe, and all subscribers to a topic will receive the same messages.
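The topic semantics described above — every subscriber to a topic receives its own copy of each message published to that topic — can be captured in a few lines. Here is a toy in-memory sketch, purely to make the delivery rules concrete (real brokers obviously add durability, security and scale on top of this):

```javascript
// Toy in-memory pub/sub broker illustrating topic semantics: every subscriber
// of a topic receives its own copy of every message published to that topic,
// and topics with no subscribers simply drop the message.
function Broker() {
  this.topics = {}; // topic name -> list of subscriber callbacks
}
Broker.prototype.subscribe = function (topic, callback) {
  (this.topics[topic] = this.topics[topic] || []).push(callback);
};
Broker.prototype.publish = function (topic, message) {
  (this.topics[topic] || []).forEach(function (cb) { cb(message); });
};
```

This is also the key contrast with a queue, where each message would be handed to exactly one consumer instead of being copied to all of them.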

I am going to discuss today two available solutions in the cloud: the “AppFabric Service Bus”, which is part of the Microsoft PaaS cloud strategy known as Azure, and “PubNub”, a relatively new implementation hosted on the Amazon EC2 cloud infrastructure.

AppFabric Service Bus

The AppFabric Service Bus is a service running in Microsoft data centers. This service acts as a broker for relaying messages through the cloud to services running on-premises behind network obstacles of any kind, such as firewalls or NAT devices. The Service Bus secures all its endpoints by using the claims-based security model provided by the Access Control service (another service available as part of Azure AppFabric). You can find a lot of interesting features as part of the Service Bus, such as federated authentication for listening or sending to the cloud, a naming mechanism for the endpoints in the cloud, a common messaging fabric with a great variety of communication options, and a discoverable service registry that any application trying to integrate with it can use.
In the first release, the Service Bus provided a relay service for integrating on-premises applications with services running in the cloud. At that time, the integration with the relay service could be done in two ways: through a message buffer in the cloud accessible via a REST API, or by using the traditional WCF programming model with special channels talking to the relay service in the cloud. With the WCF programming model, the interaction with the relay binding was almost transparent to applications, as all the communication details were handled at the channel level by WCF. The message buffer was a temporary store for the messages, so they disappeared after being consumed or when they expired.

The AppFabric team recently announced the availability of a new feature supporting durable messaging at the Service Bus level. Durable messaging in this context comes in two flavors: reliable message queuing and durable publish/subscribe messaging. The main difference between them is the number of parties that can consume a message published to the Service Bus. While a message is consumed by a single party when a queue is used, the publish/subscribe model relies on topics, which allow multiple parties to subscribe to the messages received on a specific topic (every party basically receives its own copy of the message).

The pricing model for the Service Bus is currently based on the number of used connections. Every message sent to the Service Bus usually involves two connections: one connection for sending or publishing the message, and another connection for receiving it (this might change for the model where you have multiple subscribers for a message). This thread in the MSDN forums discusses the model in detail, and I have to admit it takes some time to digest.


Pros

  • The service bus supports a good isolation level based on service namespaces. A service namespace represents a level of isolation for a particular set of endpoints, and you can associate multiple service namespaces with an Azure account. For example, you can have two different applications associated with your Azure account, each one listening on a different service namespace address.
  • The great number of communication options you can find as part of the service bus.
  • The REST API, the .NET APIs and the WCF bindings make the service bus really easy to use from any application.

Cons

  • The pricing model is too complex to understand, and it is hard to predict. Microsoft does not currently offer a good monitoring option for determining the number of used connections or predicting costs before receiving the monthly bill.
  • The number of service namespaces that you can create for a specific Azure account is limited (I believe the number is 50 namespaces, and that number can be increased if you make an explicit request). This is still a big problem if you want to use the service bus to route messages to several machines listening on different namespaces, or to support a multitenant schema in which a different namespace is assigned per tenant.
  • There is no API for managing the service namespaces, which is inconvenient if you want to allocate service namespaces dynamically.


PubNub

PubNub is a relatively new push service hosted in the cloud. It is currently hosted on the Amazon EC2 infrastructure, and provides a set of APIs for pushing or receiving messages in almost all the languages and platforms you can imagine. All those APIs are also available as open source on GitHub. While the main purpose of this service is to serve as a mechanism for pushing data to different devices (mobile devices, web browsers, etc.) via HTTP, I can also find a good use case for this service for pub/sub in the enterprise.

PubNub pushes data to the different subscribers using BOSH comet technology. The idea behind BOSH is to define a transport protocol that emulates the semantics of a long-lived, bidirectional TCP connection between a client and a server by efficiently using multiple synchronous HTTP request/response pairs, without requiring frequent polling or chunked responses.

Subscribers must issue an API call to begin listening for messages on a specific channel (similar to a topic), which automatically keeps the connection open until the application is closed. Every message sent by a client application to a specific channel will be forwarded to all the subscribers listening on that channel.
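PubNub's REST interface follows a very simple URL scheme in which the keys, channel and message are all segments of the request path. The sketch below composes a publish URL in that style; treat the exact layout as an assumption based on the 3.x-era API, and the keys and channel as placeholders:

```javascript
// Rough sketch of PubNub's 3.x-era publish URL scheme. The keys and channel
// are placeholders, and "0" stands in for the optional signature and callback
// segments. The exact layout is an assumption, not a reference.
function buildPubNubPublishUrl(publishKey, subscribeKey, channel, message) {
  return 'http://pubsub.pubnub.com/publish/' + publishKey + '/' +
         subscribeKey + '/0/' + channel + '/0/' +
         encodeURIComponent(JSON.stringify(message)); // message travels as JSON in the path
}

var url = buildPubNubPublishUrl('pub-key', 'sub-key', 'hello_world',
                                { text: 'Hello World!' });
```

Since the message itself rides in the URL, this scheme also hints at why the payload size limit discussed below exists.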

One of the main disadvantages is probably the maximum size of the message payload that you can send or receive, which is 1.8 Kb (this limit might be increased, or otherwise you might need to implement a chunking channel on your end).


Pros

  • Extremely fast and easy to use.
  • The pricing model is very easy to understand: you basically pay for every message you send. This model scales well for a great number of clients and servers, as the price you pay per message is relatively cheap.
  • They offer an API for managing accounts, which is the mechanism they use for billing.
  • Client APIs are available in a great number of technologies and languages.

Cons

  • The supported message payload size, which is 1.8 Kb by default.
  • They don’t have an exclusive isolation level like the service bus does. The only isolation level here is the account.
Posted by cibrax