June 2009 - Posts

A common misconception is to think that a WCF proxy can be used and disposed like any other regular class that implements IDisposable. However, the IDisposable implementation in the WCF client channel is not like any other: it can throw exceptions. A phrase I got from this thread in the WCF forum gives better context for this scenario,

“I think a key thing to notice is that Close() often implies doing "real work" that may fail, including network communication handshakes to shutdown sessions, committing transactions, etc.”

As a consequence, we should add some code to handle any possible network exception when a proxy is closed or disposed.

The common pattern for cleaning up a proxy is the following,

try
{
    // ...
    client.Close();
}
catch (CommunicationException)
{
    // ...
    client.Abort();
}
catch (TimeoutException)
{
    // ...
    client.Abort();
}
catch (Exception)
{
    // ...
    client.Abort();
    throw;
}

The problem is that I do not want that code everywhere in my application. A good solution is to use a helper class like this one.
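As a sketch of what such a helper might look like (the names ProxyExtensions and Use are my own, not taken from any particular project), an extension method over ICommunicationObject can wrap the whole Close/Abort pattern:

```csharp
using System;
using System.ServiceModel;

// A hypothetical helper that centralizes the Close/Abort pattern so the
// calling code does not need to repeat the try/catch blocks everywhere.
public static class ProxyExtensions
{
    public static void Use<TProxy>(this TProxy proxy, Action<TProxy> action)
        where TProxy : ICommunicationObject
    {
        try
        {
            action(proxy);
            proxy.Close();
        }
        catch (CommunicationException)
        {
            proxy.Abort();
        }
        catch (TimeoutException)
        {
            proxy.Abort();
        }
        catch (Exception)
        {
            proxy.Abort();
            throw;
        }
    }
}
```

Client code then reduces to something like `new MyServiceClient().Use(client => client.DoWork());`, where MyServiceClient and DoWork are placeholders for your generated proxy and its operation.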

Michelle recently announced a new project that automatically generates a proxy helper class like that in Visual Studio. http://wcfproxygenerator.codeplex.com/

That is a feature I would like to see out of the box in the next version of WCF :).

Posted by cibrax | 2 comment(s)

Do you want to see something very ugly? Try sending a complete EF entity directly as a data contract over the wire with WCF,

<Order xmlns:i="http://www.w3.org/2001/XMLSchema-instance" z:Id="i1" xmlns:z="http://schemas.microsoft.com/2003/10/Serialization/" xmlns="http://schemas.datacontract.org/2004/07/ClassLibrary2">
  <EntityKey xmlns:d2p1="http://schemas.datacontract.org/2004/07/System.Data" i:nil="true" xmlns="http://schemas.datacontract.org/2004/07/System.Data.Objects.DataClasses" />
  <Description>My new order</Description>
  <OrderId>1</OrderId>
  <OrderItem>
    <OrderItem z:Id="i2">
      <EntityKey xmlns:d4p1="http://schemas.datacontract.org/2004/07/System.Data" i:nil="true" xmlns="http://schemas.datacontract.org/2004/07/System.Data.Objects.DataClasses" />
      <Currency>USD</Currency>
      <Description>item 1</Description>
      <ItemId>1</ItemId>
      <Order z:Ref="i1" />
      <OrderReference xmlns:d4p1="http://schemas.datacontract.org/2004/07/System.Data.Objects.DataClasses">
        <d4p1:EntityKey xmlns:d5p1="http://schemas.datacontract.org/2004/07/System.Data" i:nil="true" />
      </OrderReference>
      <UnitPrice>10</UnitPrice>
    </OrderItem>
    <OrderItem z:Id="i3">
      <EntityKey xmlns:d4p1="http://schemas.datacontract.org/2004/07/System.Data" i:nil="true" xmlns="http://schemas.datacontract.org/2004/07/System.Data.Objects.DataClasses" />
      <Currency>USD</Currency>
      <Description>item 2</Description>
      <ItemId>2</ItemId>
      <Order z:Ref="i1" />
      <OrderReference xmlns:d4p1="http://schemas.datacontract.org/2004/07/System.Data.Objects.DataClasses">
        <d4p1:EntityKey xmlns:d5p1="http://schemas.datacontract.org/2004/07/System.Data" i:nil="true" />
      </OrderReference>
      <UnitPrice>145</UnitPrice>
    </OrderItem>
  </OrderItem>
</Order>

That is the result of serializing a complete object graph. I do not know what the folks at Microsoft were thinking when they decided to enable a feature like this. They did a good job teaching us how evil DataSets were for interoperability with other platforms, and now they come up with a solution like this. No way.

I have always liked the idea of making boundaries explicit; that is the approach the WCF team took with the first version of the framework (every data contract should be decorated with DataContract and DataMember attributes). If you break that rule, you end up with things like this.
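For the Order example above, an explicit contract might look like the following sketch (the DTO shape and names are mine); only the decorated members go on the wire, and none of the EF plumbing (EntityKey, back-references) leaks into the contract:

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;

// Explicit data transfer objects: the wire format is exactly what you see here.
[DataContract]
public class OrderDto
{
    [DataMember] public int OrderId { get; set; }
    [DataMember] public string Description { get; set; }
    [DataMember] public List<OrderItemDto> Items { get; set; }
}

[DataContract]
public class OrderItemDto
{
    [DataMember] public int ItemId { get; set; }
    [DataMember] public string Description { get; set; }
    [DataMember] public string Currency { get; set; }
    [DataMember] public decimal UnitPrice { get; set; }
}
```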

I know this is a solution for people who want to develop things quickly without creating DTOs and doing all the mapping work, but there are good tools and frameworks like AutoMapper to help with that. Or you can use ADO.NET Data Services today, which is also prepared to handle this kind of scenario: all the transformations between layers are made by the framework itself, and you still get a very nice RESTful API. ADO.NET Data Services still needs some more features to be a complete framework; there is no good support today for injecting business logic (it can be done, but it is complicated), and it looks like the next version will address that aspect better.

As Jesus mentioned in his post “We are still hiring..”, at Tellago we are looking for people who want to join our development team. At the moment we have some openings for Architects and Developers with good knowledge of BizTalk and the English language.

If you want to be part of a young company, play with the latest technologies, or get involved in projects with different teams at Microsoft, send me your resume at pablo [dot] cibraro [at] tellago [dot] com

The positions can be either in Buenos Aires or in the United States.

Posted by cibrax

ADO.NET Data Services is today one of the most powerful alternatives for building RESTful services on the .NET platform. The framework basically creates a RESTful API on top of any IQueryable data source. Most of the steps required to publish a set of resources over HTTP and make them available to any client are implemented automatically.

Only Entity Framework data models are supported out of the box as read/write data sources; any other IQueryable data source is considered read-only by default. If you want to perform write operations on a data source other than Entity Framework, you have to implement an additional interface, IUpdatable, as shown in this post. (Update: the new CTP 1.5 seems to have a new class for implementing custom data providers; I have not had a chance to play with it yet.)

Usually, you will need to inject custom logic into the service to perform validations or implement business rules.

ADO.NET Data Services comes with two built-in extensibility points for adding custom logic or code to a service: Service Operations and Interceptors.

Service operations represent a simple way to define custom read/write views on top of the data source exposed by the service. They are intended for scenarios where you need complex queries or simple data aggregations that are not supported out of the box in the framework. If you want to expose those queries directly as another resource in the service, you have to use service operations.

The classic example given to show this feature in action is a query that returns all the customers in a specific city. That query would be translated into a service operation as follows,

public class MyService : DataService<MyDataSource>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(IDataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
        config.SetServiceOperationAccessRule("CustomersInCity", ServiceOperationRights.All);
    }

    [WebGet]
    public IQueryable<Customers> CustomersInCity(string city)
    {
        return from c in this.CurrentDataSource.Customers
               where c.City == city
               select c;
    } 

}

As you can see in the code above, the service operation for this example is a simple method decorated with a WebGet attribute that returns an IQueryable implementation (with a pre-defined query in this example). The WebGet attribute specifies that it is a read-only view. As soon as the service is running, you can navigate to that service operation with a URI like this,

http://localhost/MyService.svc/CustomersInCity?city=BuenosAires

Since you are returning an IQueryable implementation, you are also free to apply additional transformations on top of this base query, such as projections, filtering, or ordering.
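For instance, the base query exposed by the operation can be refined directly in the URI with the standard data service query options (the property names below are illustrative, not part of the example model):

```
http://localhost/MyService.svc/CustomersInCity?city=BuenosAires&$top=10
http://localhost/MyService.svc/CustomersInCity?city=BuenosAires&$orderby=ContactName
http://localhost/MyService.svc/CustomersInCity?city=BuenosAires&$filter=Country eq 'Argentina'
```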

WebInvoke can also be used in case you want to implement a write operation. Since a service operation works as a black box, only the POST verb is supported for WebInvoke, which means you cannot use other verbs like DELETE or PUT. Regarding this limitation, this is the answer you can find in the forums,

“We currently support only POST and GET verbs on the service operations (even in the recently released CTP). Is there a reason why you want to support  DELETE verb on service operation? The main reason for not supporting all the verbs is that from the client side, discovery becomes a issue - there is no way to know what verb to send for a service operation. Also service operations are black box to us - so we try and categorized them into side-effecting and non-side effecting ones. We recommend to use GET for non-side effection service opertions and use POST for side-effecting ones.

Please let us know if you think there is a good reason why DELETE verb should be supported.

Thanks
Pratik”

Another problem with service operations is that only scalar values are supported as arguments. Therefore, you cannot pass entities or complex data types as arguments to a service operation.
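Within those constraints, a write-style service operation is just a method on the data service decorated with WebInvoke that takes scalar arguments. A minimal sketch (the entity set, property names, and the operation itself are my own assumptions about an underlying Entity Framework model, and the method also needs a SetServiceOperationAccessRule entry in InitializeService):

```csharp
// This method lives in the same DataService<MyDataSource> class shown earlier.
[WebInvoke] // invoked with POST; GET, PUT, and DELETE are not supported here
public void RenameCustomer(string customerId, string newName)
{
    // Scalar arguments only: we receive the key and the new value,
    // not the entity itself.
    var customer = this.CurrentDataSource.Customers
        .First(c => c.CustomerId == customerId);
    customer.ContactName = newName;
    this.CurrentDataSource.SaveChanges();
}
```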

These limitations make it hard to use service operations as a way to inject custom logic like validations or business rules into a data service.

Fortunately, there is another extensibility point that comes in handy when implementing this kind of logic: interceptors. There are two kinds of built-in interceptors, query interceptors and change interceptors.

A query interceptor represents a mechanism to change or transform the initial projection over an entity set. For example, if a user is only allowed to see a subset of all the available data in an entity set, a query interceptor is the right place to filter the data for that user.

[QueryInterceptor("Orders")]
public Expression<Func<Orders, bool>> QueryOrders()
{
    return o => o.Number > 100;
}

The syntax of a query interceptor is quite simple: you basically return an expression that describes a function (the Func), which takes an input value (the order being evaluated) and returns a bool (whether the order passes the filter or not). Since an expression is used there rather than a piece of compiled code, it can be passed directly to the query provider, which ultimately generates the appropriate SQL on the database server.
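For the per-user scenario mentioned earlier, the filter can be built from the ambient request identity. A sketch, assuming the service is hosted in ASP.NET (System.Web) and that orders carry a hypothetical OwnerName property:

```csharp
[QueryInterceptor("Orders")]
public Expression<Func<Orders, bool>> FilterOrdersByUser()
{
    // Capture the caller's identity in a local variable so the expression
    // contains a plain constant the provider can translate to SQL.
    string userName = HttpContext.Current.User.Identity.Name;
    return o => o.OwnerName == userName;
}
```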

A change interceptor, on the other hand, intercepts all the calls that generate changes in the data source exposed by the service. Therefore, this interceptor is the right mechanism for performing validations before the data is changed in the data source. This is the place where we can enforce business rules or validate the data for a specific entity set in our application (an entity set in this scenario also represents a collection of resources, where each resource is the entity itself).

[ChangeInterceptor("Orders")]
public void OnChangeOrders(Order o, UpdateOperations operation)
{
    if (operation == UpdateOperations.Add || operation == UpdateOperations.Change)
    {
        if (o.Number > 100)
        {
            throw new DataServiceException(400,
                "You are not allowed to create or change an order with number greater than 100");
        }
    }
}

A change interceptor method receives two arguments: the entity that will be persisted, and an UpdateOperations enumeration value that specifies the operation currently being performed on the entity. The possible values for this enumeration are Add, Change, and Delete.

This interceptor is not just for performing validations; you are still free to call additional services or change some data in the entity before it is persisted. However, this method call is synchronous, so you should be careful about executing long-running code there.
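For example, an interceptor can stamp audit data on the entity before it is persisted. A sketch, where LastModified is a hypothetical property on the Order entity:

```csharp
[ChangeInterceptor("Orders")]
public void StampOrders(Order o, UpdateOperations operation)
{
    if (operation == UpdateOperations.Add || operation == UpdateOperations.Change)
    {
        // The entity can still be modified here; the new value is
        // persisted together with the rest of the change.
        o.LastModified = DateTime.UtcNow;
    }
}
```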

Posted by cibrax | 4 comment(s)