August 2010 - Posts

WIF is an excellent framework that allows you to develop an STS in just a few minutes, if you know exactly what you are doing of course :). In my role as consultant and architect at Tellago, I went through several projects in which some customization was required at the wire level to achieve interoperability between an STS built with WIF and existing federation solutions like ADFS 1.x and OpenSSO.

The idea of this post is to show some of the extensibility points you will find in WIF for customizing the WS-Trust messages and the issued tokens.

1. Making WIF speak WS-Trust Feb 2005

WIF uses WS-Trust 1.3 by default, which means all the generated WS-Trust messages will follow that spec unless you specify a different one. ADFS 1.x and OpenSSO both use WS-Trust Feb 2005 (http://schemas.xmlsoap.org/ws/2005/02/trust) for the passive profile to support single sign-on over the web. Therefore, if you want your WIF passive STS to generate WS-Trust messages that follow that spec version, you need to modify the code you use for processing the RST messages a little bit.

class FederatedPassiveSecurityTokenServiceOperations
{
    public static void ProcessRequest(HttpRequest request, IPrincipal principal,
        SecurityTokenService sts, HttpResponse response,
        WSFederationSerializer federationSerializer);

    public static SignInResponseMessage ProcessSignInRequest(SignInRequestMessage requestMessage,
        IPrincipal principal, SecurityTokenService sts,
        WSFederationSerializer federationSerializer);
}

The ProcessRequest and ProcessSignInRequest methods in the FederatedPassiveSecurityTokenServiceOperations class both support an additional overload that takes the WS-Trust version (a WSFederationSerializer instance).

You can force another WS-Trust version by passing the right serializer instance. For example, the following code uses the WS-Trust Feb 2005 specification.

SignInResponseMessage responseMessage = FederatedPassiveSecurityTokenServiceOperations.ProcessSignInRequest(
    requestMessage, User, sts,
    new WSFederationSerializer(
        new WSTrustFeb2005RequestSerializer(),
        new WSTrustFeb2005ResponseSerializer()));

2. Changing the SAML token’s signature algorithms

WIF uses a combination of RSA and SHA-256 by default for generating the SAML signature. You can see this in the generated SAML token,

<saml:Attribute AttributeName="name" AttributeNamespace="http://schemas.xmlsoap.org/ws/2005/05/identity/claims">
    <saml:AttributeValue>MyName</saml:AttributeValue>
  </saml:Attribute>
</saml:AttributeStatement>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <ds:SignedInfo>
    <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
    <ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256" />
    <ds:Reference URI="#_cecf3c23-824e-4064-846c-b90c03d29700">
      <ds:Transforms>
        <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
        <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
      </ds:Transforms>
      <ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256" />
      <ds:DigestValue>1flG08Axm71C0isY2wLR0C9jqgfIebNoG2nlIO+jO+s=</ds:DigestValue>
    </ds:Reference>
  </ds:SignedInfo>

ADFS 1.x and OpenSSO use a combination of RSA and SHA-1, so that’s something else you need to customize. The “X509SigningCredentials” instance that you pass in the constructor of the security token service configuration also has a constructor overload for changing the signature algorithms. The following code creates a new instance of “X509SigningCredentials” that uses SHA-1 rather than SHA-256.
 
new X509SigningCredentials(
    CertificateUtil.GetCertificate(StoreName.TrustedPeople,
        StoreLocation.LocalMachine,
        "CN=Test"),
    "http://www.w3.org/2000/09/xmldsig#rsa-sha1",
    "http://www.w3.org/2000/09/xmldsig#sha1")

3. Adding an authentication statement to the issued SAML token
 
This part is very tricky, as WIF does not add an authentication statement to the SAML token by default unless you use a specific claim type (NameIdentifier) with some custom properties.
 
Claim nameIdentifier = new Claim(System.IdentityModel.Claims.ClaimTypes.NameIdentifier,
    "foo@test.com");
nameIdentifier.Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"]
    = "http://schemas.xmlsoap.org/claims/UPN";

outputIdentity.Claims.Add(nameIdentifier);
outputIdentity.Claims.Add(new Claim(ClaimTypes.AuthenticationMethod, "http://microsoft/geneva"));
outputIdentity.Claims.Add(new Claim(ClaimTypes.AuthenticationInstant,
    XmlConvert.ToString(DateTime.Now, XmlDateTimeSerializationMode.Utc)));

The code above will generate an authentication statement like the following in the SAML token, which is equivalent to what ADFS 1.x or OpenSSO would generate.

<saml:AttributeStatement>
  <saml:Subject>
    <saml:NameIdentifier Format="http://schemas.xmlsoap.org/claims/UPN">
      foo@test.com
    </saml:NameIdentifier>
    <saml:SubjectConfirmation>
      <saml:ConfirmationMethod>
        urn:oasis:names:tc:SAML:1.0:cm:bearer
      </saml:ConfirmationMethod>
    </saml:SubjectConfirmation>
  </saml:Subject>
</saml:AttributeStatement>

 
Posted by cibrax | 6 comment(s)

Configuring a WCF service to use federated authentication in an organization is not trivial, as it requires good knowledge of the available security settings and, more precisely, of how to talk to the existing security token services with the right WCF bindings.

This is something that usually only a few people in the organization know how to do right, so having a way to centralize all this configuration in a single location that the rest of the developers can use becomes really important.

SO-Aware plays an important role in that sense, allowing the security experts to configure and store the bindings and behaviors that the organization will use to secure the services in the service repository.

Developers can later reference, reuse, and configure their services and client applications with those bindings from the repository, using either a simple OData API or the WCF-specific classes that SO-Aware also provides for configuring services and proxies.

A WCF binding for configuring a service with federated authentication usually looks as follows,

<customBinding>
  <binding name="echoClaimsBinding">
    <security authenticationMode="IssuedToken"
              messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10"
              requireSecurityContextCancellation="false">
      <issuedTokenParameters tokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
        <claimTypeRequirements>
          <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name" isOptional="false"/>
          <add claimType="http://SOAwareSamples/2008/05/AgeClaim" isOptional="false"/>
        </claimTypeRequirements>
        <issuer address="http://localhost:6000/SOAwareSTS"
                bindingConfiguration="stsBinding"
                binding="ws2007HttpBinding">
          <identity>
            <dns value="WCFSTS"/>
          </identity>
        </issuer>
        <issuerMetadata address="http://localhost:6000/mex"></issuerMetadata>
      </issuedTokenParameters>
    </security>
    <httpTransport/>
  </binding>
</customBinding>

You basically have there the information required by WCF to connect to the STS (or token issuer) and the claims the service expects. This binding also references another existing binding, “stsBinding”, that the client will use to connect to and secure the communication with the STS. If you want to store the same thing in SO-Aware, you need a way to configure a binding so that it can reference existing bindings. That can be done using the “Parent” property, as you can see in the image below.

[Image: the “Parent” property of a binding in the SO-Aware portal]
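If you prefer to script that configuration rather than use the portal, a rough sketch with the OData client API might look like the following. The Binding resource shape and the AddToBindings method name are assumptions based on the client conventions shown later in this post, not a verbatim SO-Aware API.

// Hypothetical sketch: registering a binding that references a parent binding.
// Resource and method names (Binding, AddToBindings, the "Parent" link) are assumed.
var stsBinding = repository.Bindings.Where(b => b.Name == "stsBinding").First();

var echoClaimsBinding = new Binding { Name = "echoClaimsBinding" };
repository.AddToBindings(echoClaimsBinding);
repository.SetLink(echoClaimsBinding, "Parent", stsBinding);
repository.SaveChanges();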

Once you have the binding stored and correctly configured in the repository, it’s a matter of using the SO-Aware service host for configuring existing services with that binding.

[ServiceContract()]
public interface IEchoClaims
{
    [OperationContract]
    List<string> Echo();
}

public class EchoClaims : IEchoClaims
{
    public List<string> Echo()
    {
        List<string> claims = new List<string>();

        IClaimsPrincipal principal = Thread.CurrentPrincipal as IClaimsPrincipal;

        foreach (IClaimsIdentity identity in principal.Identities)
        {
            foreach (Claim claim in identity.Claims)
            {
                claims.Add(string.Format("{0} - {1}",
                    claim.ClaimType, claim.Value));
            }
        }

        return claims;
    }
}

<serviceRepository url="http://localhost/SoAware/ServiceRepository.svc">
  <services>
    <service name="ref:EchoClaims(1.0)@dev" type="SOAware.Samples.EchoClaims, Service"/>
  </services>
</serviceRepository>

As you can see, the configuration is very straightforward. The developer configuring the service does not need to know anything about how to configure WCF bindings or federated security; he only needs to reference an existing service configuration in the repository. This assumes the service was already configured in the portal or through the OData API.
 
[Image: the service configuration in the SO-Aware portal]
 
The same thing happens on the client side; no configuration is needed at all. The developer can use the “ConfigurableProxyFactory” and “ConfigurationResolver” classes that SO-Aware provides to automatically discover and resolve all the service configuration (service address, bindings, and behaviors). In fact, the developer does not need to know anything about where the STS is, which binding it uses, or which certificates are used to secure the communication. All that is stored in the repository and automatically resolved by the SO-Aware configuration classes.
 
static void ExecuteServiceWithMetadataResolution()
{
    ConfigurableProxyFactory<IEchoClaims> factory = new ConfigurableProxyFactory<IEchoClaims>(
        ServiceUri,
        "EchoClaims(1.0)",
        "dev");

    // The resolver was not declared in the original snippet; constructor arguments assumed.
    ConfigurationResolver resolver = new ConfigurationResolver(ServiceUri);

    var endpointBehaviors = resolver.ResolveEndpointBehavior("echoClaimsEndpointBehavior");
    foreach (var endpointBehavior in endpointBehaviors.Behaviors)
    {
        if (factory.Endpoint.Behaviors.Contains(endpointBehavior.GetType()))
        {
            factory.Endpoint.Behaviors.Remove(endpointBehavior.GetType());
        }

        factory.Endpoint.Behaviors.Add(endpointBehavior);
    }

    factory.Credentials.UserName.UserName = "joe";
    factory.Credentials.UserName.Password = "bar";

    IEchoClaims client = factory.CreateProxy();

    try
    {
        List<string> claims = client.Echo();

        foreach (string claim in claims)
        {
            Console.WriteLine(claim);
        }

        // Close on success; the catch blocks abort the channel instead.
        ((IContextChannel)client).Close();
    }
    catch (TimeoutException exception)
    {
        Console.WriteLine("Got {0}", exception.ToString());
        ((IContextChannel)client).Abort();
    }
    catch (CommunicationException exception)
    {
        Console.WriteLine("Got {0}", exception.ToString());
        ((IContextChannel)client).Abort();
    }
}

In addition, as the security token service could also be implemented with WCF and WIF, you can resolve the configuration for that service from the repository as well, reusing the “stsBinding” from the example above (WSTrustServiceContract is one of the service contracts that WIF provides for implementing an STS).
 
<serviceRepository url="http://localhost/SoAware/ServiceRepository.svc">
  <services>
    <service name="ref:STS(1.0)@dev"
             type="Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract,
                   Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral,
                   PublicKeyToken=31bf3856ad364e35" />
  </services>
</serviceRepository>

Posted by cibrax

WCF Data Services ships with two built-in query providers: an Entity Framework provider that uses the CSDL model to infer all the service metadata for the exposed entities and their associations, and a Reflection provider that uses .NET reflection over the exposed object model to infer the same metadata.

The Entity Framework provider is usually the one most people use, for its simplicity. The simplicity of this provider resides in the fact that you don’t actually need to do much to get a data service up and running. You only need to define an Entity Framework model and expose it in the data service using the DataService<T> class.
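For instance, a minimal data service over an EF model can be as small as the following sketch, where NorthwindEntities stands in for your generated ObjectContext (an assumed name for illustration).

// Minimal sketch of a data service over an Entity Framework model.
public class NorthwindDataService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose every entity set in the model as read-only; tighten the rights as needed.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}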

The Reflection provider, on the other hand, requires some more work, as you also need to implement the IUpdatable interface if you want to make your model read-write (otherwise, it’s read-only by default).

While the Entity Framework provider is simple to use, the resources you want to expose in the data service get tied to all the limitations of an Entity Framework model (for instance, you might have entities or properties you don’t really want to persist in a database). This provider also implements IUpdatable, and that implementation cannot be customized, extended, or replaced to provide additional business logic in the data service. And although you can use interceptors, I find that technique very limited, as they represent aspects that are applied to a single entity; there is no way to inject cross-cutting aspects that affect all the entities (entity-based authorization, for instance). You can use the data service processing pipeline for injecting that logic, but you don’t have entities at that point, only the messages on the wire (Atom feeds or JSON messages). A technique I used in the past was to extend the EF entities with partial classes to add some additional business logic, and attach the data service to the Entity Framework’s Saving event to run some logic before the entities were created, updated, or deleted. However, I still don’t like this technique much, because you end up with a model totally limited to what you can define in EF.
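To make that limitation concrete, this is roughly what a query interceptor looks like; it applies to a single entity set only, so any cross-cutting rule has to be repeated per entity. The "Customers" entity set and the country rule here are illustrative assumptions.

// Sketch of a query interceptor: it only filters the "Customers" entity set,
// so the same rule would have to be duplicated for every other entity set.
[QueryInterceptor("Customers")]
public Expression<Func<Customer, bool>> OnQueryCustomers()
{
    // Hypothetical per-entity authorization rule; it cannot be shared across entities.
    return c => c.Country == "Argentina";
}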

The advantage of using the Reflection provider is that you can expose a much richer object model in your data service, even if you need to write some more code. In addition, as you are also writing an IUpdatable implementation, you can inject all the business logic and cross-cutting concerns that are common to all the entities in that class. However, the challenge with the Reflection provider is making the service implementation efficient enough to resolve the queries in the data source and not in memory. It does not make sense at all to use a rich object model on top of Entity Framework, for instance, if you still need to load all the objects in memory and use LINQ to Objects to perform the queries (unless the number of entities you manage is really small). So, the only practical way to implement a service that manages a large number of entities and exposes a rich object model at the same time is to use an ORM other than EF, like LINQ to SQL or NHibernate.

NHibernate, in that sense, is a more mature framework: you are not tied to a single database implementation (SQL Server), and the number of features it offers is considerably higher, making NHibernate a good technology for implementing data services. In addition, NHibernate 3.0 already ships with a LINQ provider out of the box that works really well (Entity Framework Code Only looks promising too, but it is a CTP at this point).

In order to use NHibernate in a data service, you need to provide an IUpdatable implementation to make it read/write (support for POST, PUT, and DELETE). Otherwise it behaves as read-only by default (only GETs are supported).

This is how the IUpdatable implementation looks,

public abstract class NHibernateDataContext : IUpdatable
{
    protected ISession Session;

    private List<object> entityToUpdate = new List<object>();
    private List<object> entityToRemove = new List<object>();

    public NHibernateDataContext(ISession session)
    {
        this.Session = session;
    }

    /// <summary>
    /// Creates the resource of the given type and belonging to the given container
    /// </summary>
    /// <param name="containerName">container name to which the resource needs to be added</param>
    /// <param name="fullTypeName">full type name i.e. Namespace qualified type name of the resource</param>
    /// <returns>object representing a resource of given type and belonging to the given container</returns>
    object IUpdatable.CreateResource(string containerName, string fullTypeName)
    {
        Type t = Type.GetType(fullTypeName, true);
        object resource = Activator.CreateInstance(t);

        entityToUpdate.Add(resource);

        return resource;
    }

    /// <summary>
    /// Gets the resource of the given type that the query points to
    /// </summary>
    /// <param name="query">query pointing to a particular resource</param>
    /// <param name="fullTypeName">full type name i.e. Namespace qualified type name of the resource</param>
    /// <returns>object representing a resource of given type and as referenced by the query</returns>
    object IUpdatable.GetResource(IQueryable query, string fullTypeName)
    {
        object resource = null;

        foreach (object item in query)
        {
            if (resource != null)
            {
                throw new DataServiceException("The query must return a single resource");
            }
            resource = item;
        }

        if (resource == null)
            throw new DataServiceException(404, "Resource not found");

        // fullTypeName can be null for deletes
        if (fullTypeName != null && resource.GetType().FullName != fullTypeName)
            throw new Exception("Unexpected type for resource");

        return resource;
    }

    /// <summary>
    /// Resets the value of the given resource to its default value
    /// </summary>
    /// <param name="resource">resource whose value needs to be reset</param>
    /// <returns>same resource with its value reset</returns>
    object IUpdatable.ResetResource(object resource)
    {
        return resource;
    }

    /// <summary>
    /// Sets the value of the given property on the target object
    /// </summary>
    /// <param name="targetResource">target object which defines the property</param>
    /// <param name="propertyName">name of the property whose value needs to be updated</param>
    /// <param name="propertyValue">value of the property</param>
    void IUpdatable.SetValue(object targetResource, string propertyName, object propertyValue)
    {
        var propertyInfo = targetResource.GetType().GetProperty(propertyName);
        propertyInfo.SetValue(targetResource, propertyValue, null);

        if (!entityToUpdate.Contains(targetResource))
            entityToUpdate.Add(targetResource);
    }

    /// <summary>
    /// Gets the value of the given property on the target object
    /// </summary>
    /// <param name="targetResource">target object which defines the property</param>
    /// <param name="propertyName">name of the property whose value needs to be updated</param>
    /// <returns>the value of the property for the given target resource</returns>
    object IUpdatable.GetValue(object targetResource, string propertyName)
    {
        var propertyInfo = targetResource.GetType().GetProperty(propertyName);
        return propertyInfo.GetValue(targetResource, null);
    }

    /// <summary>
    /// Sets the value of the given reference property on the target object
    /// </summary>
    /// <param name="targetResource">target object which defines the property</param>
    /// <param name="propertyName">name of the property whose value needs to be updated</param>
    /// <param name="propertyValue">value of the property</param>
    void IUpdatable.SetReference(object targetResource, string propertyName, object propertyValue)
    {
        ((IUpdatable)this).SetValue(targetResource, propertyName, propertyValue);
    }

    /// <summary>
    /// Adds the given value to the collection
    /// </summary>
    /// <param name="targetResource">target object which defines the property</param>
    /// <param name="propertyName">name of the property whose value needs to be updated</param>
    /// <param name="resourceToBeAdded">value of the property which needs to be added</param>
    void IUpdatable.AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded)
    {
        PropertyInfo pi = targetResource.GetType().GetProperty(propertyName);
        if (pi == null)
            throw new Exception("Can't find property");

        IList collection = (IList)pi.GetValue(targetResource, null);
        collection.Add(resourceToBeAdded);

        if (!entityToUpdate.Contains(targetResource))
            entityToUpdate.Add(targetResource);
    }

    /// <summary>
    /// Removes the given value from the collection
    /// </summary>
    /// <param name="targetResource">target object which defines the property</param>
    /// <param name="propertyName">name of the property whose value needs to be updated</param>
    /// <param name="resourceToBeRemoved">value of the property which needs to be removed</param>
    void IUpdatable.RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved)
    {
        PropertyInfo pi = targetResource.GetType().GetProperty(propertyName);
        if (pi == null)
            throw new Exception("Can't find property");

        IList collection = (IList)pi.GetValue(targetResource, null);
        collection.Remove(resourceToBeRemoved);

        if (!entityToUpdate.Contains(targetResource))
            entityToUpdate.Add(targetResource);
    }

    /// <summary>
    /// Delete the given resource
    /// </summary>
    /// <param name="targetResource">resource that needs to be deleted</param>
    void IUpdatable.DeleteResource(object targetResource)
    {
        entityToRemove.Add(targetResource);
    }

    /// <summary>
    /// Saves all the pending changes made till now
    /// </summary>
    void IUpdatable.SaveChanges()
    {
        using (var transaction = Session.BeginTransaction())
        {
            Session.FlushMode = FlushMode.Commit;

            foreach (var entity in entityToUpdate)
            {
                Session.SaveOrUpdate(entity);
            }

            foreach (var entity in entityToRemove)
            {
                Session.Delete(entity);
            }

            transaction.Commit();
        }
    }

    /// <summary>
    /// Returns the actual instance of the resource represented by the given resource object
    /// </summary>
    /// <param name="resource">object representing the resource whose instance needs to be fetched</param>
    /// <returns>The actual instance of the resource represented by the given resource object</returns>
    object IUpdatable.ResolveResource(object resource)
    {
        return resource;
    }

    /// <summary>
    /// Revert all the pending changes.
    /// </summary>
    void IUpdatable.ClearChanges()
    {
        // Discard the tracked changes so a reverted request does not
        // leak entities into a later SaveChanges call.
        entityToUpdate.Clear();
        entityToRemove.Clear();
    }
}

This implementation receives an NHibernate session instance and implements all the methods required for creating, updating, and deleting entities. As you can see, this is an abstract class that you can reuse for any NHibernate data service. The concrete implementation is the one you will expose in the data service, and it is tied to your entities. For instance,
 
public class MyNHibernateDataContext : NHibernateDataContext
{
    public MyNHibernateDataContext(ISession session)
        : base(session)
    {
    }

    public IQueryable<Customer> Customers
    {
        get { return new NhQueryable<Customer>(Session); }
    }

    public IQueryable<Person> People
    {
        get { return new NhQueryable<Person>(Session); }
    }
}

Finally, the data service exposes the concrete implementation of the data context, and provides some code to initialize the NHibernate session.

[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class MyDataService : DataService<MyNHibernateDataContext>, IDisposable
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.All);

        config.DataServiceBehavior.AcceptCountRequests = true;
        config.DataServiceBehavior.AcceptProjectionRequests = true;
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;

        config.UseVerboseErrors = true;
    }

    ISession session;

    protected override MyNHibernateDataContext CreateDataSource()
    {
        var factory = CreateSessionFactory();

        this.session = factory.OpenSession();
        this.session.FlushMode = FlushMode.Auto;

        return new MyNHibernateDataContext(this.session);
    }

    private static ISessionFactory CreateSessionFactory()
    {
        return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString("Data source=.\\SQLExpress;Initial Catalog=Samples;Trusted_Connection=yes")
                .Cache(c => c
                    .UseQueryCache()
                    .ProviderClass<HashtableCacheProvider>())
                .ShowSql())
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Program>())
            .BuildSessionFactory();
    }

    public void Dispose()
    {
        // Release the NHibernate session opened in CreateDataSource.
        if (this.session != null)
            this.session.Dispose();
    }
}

This example uses Fluent NHibernate for mapping the entities to the database structure. All the code for initializing the NHibernate session is in the CreateDataSource method, which the data service calls when serving any HTTP request.
 
The mapping for the “Customer” entity using Fluent NHibernate looks as follows,
 
[DataServiceKey("Id")]
public class Customer
{
public virtual Guid Id { get; private set; }
public virtual string FullName { get; set; }
public virtual string Country { get; set; }
public virtual IList<Person> People { get; set; }

public virtual void AddPerson(Person p)
{
p.Customer = this;
this.People.Add(p);
}
}

public class CustomerMap : ClassMap<Customer>
{
public CustomerMap()
{
Id(x => x.Id).GeneratedBy.GuidComb()
.UnsavedValue("00000000-0000-0000-0000-000000000000");
Map(x => x.FullName);
Map(x => x.Country);
HasMany(x => x.People)
.KeyColumn("CustomerId")
.Inverse()
.Cascade.All();
}
}

The DataServiceKey attribute on the entity is required by the Reflection provider for resolving queries by key (the data service will throw exceptions if you don’t provide that attribute for your entities).
 
UPDATE!!: A colleague asked me an interesting question after I submitted this post: how do you deal with lazy properties or collections on the client side? This is not a problem at all with WCF Data Services, as the default behavior is to retrieve only the entities you asked for. That means the associations (the lazy properties and collections) are not retrieved unless you ask for them explicitly with an expand (you are always in control of what you actually want to get). Client-side code that uses an expand is shown here,
 
var context = new MyNHibernateDataContext(new Uri("http://localhost:8080"));
var customer = context.Customers
    .Expand("People")
    .First();

Of course, this technique might not be optimal if you perform several expands, as you will be reproducing the classic N+1 problem. I think the only solution in that case is to implement a custom query provider that knows how to submit a single query returning all the associations at once, rather than performing a separate query for each association.
Posted by cibrax | 6 comment(s)

My colleagues Jesus Rodriguez and Dwight Goins were talking about many of the challenges you might find in managing WCF services in the enterprise, and how SO-Aware can help you with all those aspects. Check it out here.

Posted by cibrax | 1 comment(s)

A common requirement we received from some customers while we were in the early design stages of SO-Aware was the ability to track static dependencies between services. For instance, Service A calls Service B, and Service B calls Service X. This feature is not only useful for documentation, but also helps administrators determine which services are going to be affected by a change in one of the existing services (in that example, a change in Service X would affect Services A and B).

A service dependency is represented in the repository as a “ServiceDependency” resource, which contains two simple properties, “ServiceVersion” and “DependantServiceVersion”. Therefore, creating a new service dependency with the OData API is quite straightforward, as illustrated in the code below,

var serviceDependency = new ServiceDependency
{
    ServiceVersion = serviceA,
    DependantServiceVersion = serviceB,
};

That code creates a new dependency resource that represents a dependency between Service B and Service A (Service B is using Service A).
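The snippet only creates the resource in memory; presumably you would then add it to the repository and save, along the lines of the following sketch. The AddToServiceDependencies method name is an assumption following the AddToTests convention the OData client uses later in this post.

// Assumed method names, following the generated OData client conventions
// (AddTo<EntitySet> plus SaveChanges) used elsewhere in this post.
repository.AddToServiceDependencies(serviceDependency);
repository.SaveChanges();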

As the OData feed might not be the right way to visualize the dependencies between services, we have also added a nice dependency tree to the web portal. So, when you visualize the dependencies for Service A, for instance, you will see something like this.

Posted by cibrax | 1 comment(s)

Service testing is another interesting feature you will find in SO-Aware. Having a tool for testing a service in the repository is very important for the following reasons,

  • You can make sure the service is up and running (the service deployment and configuration were done correctly).
  • You can verify the service responds to the different requests generated by the tests.
  • You can get a better idea of the service’s availability over time.

As SO-Aware is mostly oriented to WCF services and also supports artifacts like bindings and behaviors, the testing tool is very appealing for testing secure services or services with specific binding elements (like net.tcp services, for instance) that are really hard to test with other existing tools in the market.

The testing tool in SO-Aware supports two models, on-demand testing and scheduled testing. In the on-demand model, you test the service at any moment with a specific message, either to make sure the service is working or to reproduce an issue, but you don’t save that test to be executed later. The scheduled model is the opposite: you save the test so it can be executed by the test scheduler service, giving you results about the service’s health over time.

A test and the associated results are also exposed as OData resources in the repository, so you can manage the different tests or query results using simple http requests.

As all the service artifacts (endpoints, contracts, operations, schemas) are also stored in the repository, the definition of a new test is quite straightforward: you only need to specify the service version and operation you want to test, and provide a request sample message (which, for SOAP services, can also be automatically inferred from the schema associated with the operation’s request message).

The code below creates a new test using the client API.

var customersService = repository.ServiceVersions
    .Expand("Soap/Endpoints/Contract/Operations")
    .Where(s => s.Service.Name == "Customers" &&
                s.MajorVersion == 1 && s.MinorVersion == 0)
    .FirstOrDefault();

var endpoint = customersService.Soap.Endpoints.First();
var operation = endpoint.Contract.Operations.First(o => o.Name == "GetCustomer");

var testManager = new TestManager(RepositoryUri);

var sampleMessage = testManager.CreateExampleStringMessageForOperation(customersService.Id,
    endpoint.Name, operation.Name);

var test = new Test
{
    Name = "GetCustomer",
    UsesWindowsAuthentication = true,
    Username = "TEST",
    Password = "password",
    SchedulingTime = 1,
    SchedulingUnit = "MINUTE",
    RequestMessage = sampleMessage
};

repository.AddToTests(test);
repository.SetLink(test, "Operation", operation);
repository.SetLink(test, "Endpoint", endpoint);
repository.SaveChanges(); // push the new test definition to the repository

As you can see, the code for creating a new test for a SOAP service is quite straightforward (testing of REST services is also supported); only the endpoint and operation to be tested need to be specified. In this example, I am using an endpoint configured with Windows authentication, so I am passing the credentials in the test definition, as WCF does not support username/password credentials as a behavior. Otherwise, the test definition could also receive a behavior with the client credentials (a behavior with the certificate definition, for instance).

TestManager is a utility class that can be used for multiple purposes in the definition of a new test, or for executing an existing test. TestManager.CreateExampleStringMessageForOperation returns a string representing an XML message inferred from the operation’s message schemas.

Once the test definition is stored in the repository, it starts being executed by the test scheduler according to the scheduling options for that test (the SchedulingTime and SchedulingUnit properties in the test definition). You can also execute a test at any time through two specific HTTP endpoints that SO-Aware publishes for executing tests:

“Testing.svc/Tests/{testId}” for executing an individual test with a simple HTTP GET (the response represents the test results).

“Testing.svc/TestGroups/{groupName}” for executing a set of tests with a simple HTTP GET. You can associate a group with the test when it is created.

WebClient client = new WebClient();
client.Credentials = CredentialCache.DefaultCredentials;
var response = client.DownloadString("http://localhost/SOAware/Testing.svc/Tests/" + test.Id.ToString());

Console.WriteLine(response);

Only one line of code (an HTTP GET) is required for executing a test, so this also becomes handy for integrating the tests into a build process.

The test results are also available as a resource in the repository, so you can query for a specific test using the traditional OData query options. The example below illustrates how to get the most recent execution result for a specific test.

var testResult = repository.TestInstances.Where(t => t.Test.Id == test.Id).Last();

You can always browse the tests and their results in the SO-Aware portal to get a more user-friendly representation and see some statistics about the test executions.

The sample code is available at this location.

Posted by cibrax | 2 comment(s)
More Posts