Archives / 2009 / February
  • Carrying sensitive information in SAML assertions

    When SAML is used in conjunction with WS-Security, only a small piece of the token is encrypted: the proof key for the relying party. The rest of the token, including the user's claims, goes in plain text.


      <saml:Assertion ...>
        <saml:Conditions NotBefore="2009-02-24T19:48:20.500Z" NotOnOrAfter="2009-02-24T19:53:20.500Z" />
        ...
        <!-- the encrypted proof key for the relying party -->
        <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">...</KeyInfo>
        ...
        <!-- the user's claims, in plain text -->
        <saml:Attribute AttributeName="displayName" AttributeNamespace="">
          <saml:AttributeValue>John Foo</saml:AttributeValue>
        </saml:Attribute>
        ...
        <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">...</Signature>
      </saml:Assertion>


    Knowing this, you should never include sensitive information as claims in a SAML token. This is also related to identity law #2, "Minimal Disclosure for a Constrained Use": the identity provider should disclose only the least amount of identifying information needed to execute the operation on the relying party.

    Some examples are,

    • A winery only needs to know whether the customer is of legal age to buy alcohol; a claim like "over21" should be enough for that purpose, and there is no need to know the customer's birth date at all.
    • An online store that sells products does not necessarily need to know the number of every credit card owned by a customer; a friendly name representing the card and, optionally, the available balance could be enough to complete a purchase.
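    The "over21" idea could be expressed as a single SAML 1.1 attribute, something like the sketch below (the attribute namespace is just an illustrative URI, not a real one):

        <saml:Attribute AttributeName="over21" AttributeNamespace="http://example.org/claims">
            <saml:AttributeValue>true</saml:AttributeValue>
        </saml:Attribute>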

    SAML 2.0 introduces the concept of the "encrypted attribute", which, as the name states, allows encrypting individual attributes in a SAML token. In this way, a token can now carry the encrypted proof key and, optionally, one or more encrypted attributes with sensitive information.
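    As a sketch of what an encrypted attribute looks like in SAML 2.0 (element and namespace names come from the SAML 2.0 core and XML Encryption specs; the cipher value is abbreviated):

        <saml:EncryptedAttribute xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
            <xenc:EncryptedData xmlns:xenc="http://www.w3.org/2001/04/xmlenc#">
                <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes256-cbc" />
                <xenc:CipherData>
                    <xenc:CipherValue>...</xenc:CipherValue>
                </xenc:CipherData>
            </xenc:EncryptedData>
        </saml:EncryptedAttribute>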

    You can take a look at this page for more information about the differences between SAML 1.1 and 2.0.

    Geneva Framework Beta 1 already implements a subset of SAML 2.0; however, it looks like this feature has been left out of the current release. I am not sure whether it will be included in the final release (last quarter of 2009). I created a post in the forums some time ago, but I have not received any feedback yet.


  • Contract Projections in WCF declarative services

    As my friend Jesus mentioned in the post "Using XAML serialization in WCF 4.0", WCF 4.0 introduces a new way to implement services that are entirely defined in XAML, which go by the name of "declarative services". In the past, creating a simple service involved three basic steps,

    1. Define the service contract
    2. Implement the service contract
    3. Host the service implementation

    #1 and #2 were done in an imperative .NET programming language such as C# or VB.NET. Today, thanks to this new feature, we will be able to define the service contract (#1) in XAML and implement the service (#2) as a declarative workflow (XAML too).

    This was announced as part of Microsoft's Oslo modeling vision at the last PDC.

    Aaron Skonnard has recently written an excellent article for MSDN, "WCF and WF Services in the .NET Framework 4.0 and 'Dublin'", where he discusses all these new features in more detail, along with the role of Dublin in that vision. Something I found interesting in that article was that he mentioned "contract projections" as part of declarative services.

    A contract projection allows separating the logical contract definition from the representation of the messages that are sent or received on the wire. We will be able to have a single contract definition and specify different messaging styles like SOAP or REST/POX using contract projections.

    As in the example shown in the article, a regular WCF service definition for a calculator service made in C# would look like this,

    [ServiceContract]
    public interface ICalculator
    {
        [OperationContract]
        int Add(int Op1, int Op2);

        [OperationContract]
        int Subtract(int Op1, int Op2);
    }

    The equivalent representation in XAML (using declarative services) would look like this,

    <ServiceContract Name="ICalculator">
        <OperationContract Name="Add">
            <OperationArgument Name="Op1" Type="p:Int32" />
            <OperationArgument Name="Op2" Type="p:Int32" />
            <OperationArgument Direction="Out" Name="res1" Type="p:Int32" />
        </OperationContract>
        <OperationContract Name="Subtract">
            <OperationArgument Name="Op3" Type="p:Int32" />
            <OperationArgument Name="Op4" Type="p:Int32" />
            <OperationArgument Direction="Out" Name="res2" Type="p:Int32" />
        </OperationContract>
    </ServiceContract>

    And finally, the projection of that contract at the wire level, SOAP in this case,

        <SoapContractProjection Name="ICalculatorSoapProjection">
            <!-- service contract definition goes here -->
        </SoapContractProjection>
    As you can see, we will be able to have a single service implementation (an XAML workflow) and multiple contract projections or "KnownProjections" (for the different messaging styles) to get access to that service.

    With this new feature, it looks like REST/POX will be officially supported for consuming declarative services. (I talked in the past about exposing workflow services with REST).

    As part of the PDC bits (the code that was distributed in a VPC), there is an interface "IContractProjection" for defining new kinds of projections,

    public interface IContractProjection
    {
        // Methods
        void ApplyEndpointBehavior(ServiceEndpoint endpoint);
        ContractDescription GetContractDescription();

        // Properties
        string ConfigurationName { get; }
        ServiceContract Contract { get; set; }
    }

    For the moment, there is a single implementation, for SOAP: "SoapContractProjection". We will see whether the WCF/WF team provides more implementations in the future.


  • Issues to subdivide an entity framework model

    The other day, my team and I ran into some design issues while trying to split a big Entity Framework model into smaller pieces according to areas of functionality. At first glance it seemed to be a common design problem, something easy to overcome, but it did not turn out that way. I could not find much information about people having the same issue either.

    In the system we are currently designing, we use modules or packages to group classes that belong to the same area of functionality. As you can guess, these modules represent no more than a set of use cases or stories that are tightly coupled at design time.

    We also have, however, some classes that are common to all those modules, so we put them in a shared module. In a few words, we have an entity model shaped like a star, with the shared module at the center and the rest of the modules as leaves.

    The problem appeared when we tried to assign a different Entity Framework model to each of these modules for persisting the classes in a store. We could not find an easy way to reference the model containing the shared entities from the rest of them, so we gave up after a while.

    A single model with all the entities was not an option for us, because it simply does not scale. We took a similar approach in the past with another project; the result was a huge model full of entities and really hard to maintain. That solution also required an extra service layer on top of the entity model to encapsulate functionality specific to each module, which made it very difficult to have a rich domain model.

    After searching several blogs, I came across the post "Working with large models in entity framework" from the ADO.NET team, where they discuss some possible techniques or workarounds for a scenario like this.

    They basically proposed the following solutions,

    1. Reference one conceptual model (CSDL) from another. You create different models, and manually modify each CSDL to reference the model (CSDL) with the shared entities.

    The CSDL supports a "using" clause for including types defined in another model; it is basically a simple way to reuse types. This sounds good; however, it lacks design-time support, which makes the feature totally useless for the scenario we have (and really complicated to implement, according to this post written by Julie Lerman).
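    For reference, the "using" clause looks roughly like this in a CSDL file (a sketch; the model namespaces are illustrative, and the schema namespace is the EF v1 one):

        <Schema Namespace="OrdersModel" Alias="Self"
                xmlns="http://schemas.microsoft.com/ado/2006/04/edm">
            <!-- pulls the types of the shared model into this one -->
            <Using Namespace="SharedModel" Alias="Shared" />
            ...
        </Schema>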

    2. Duplicate the metadata for the shared entities in each conceptual model (CSDL). Again, this approach sucks from a maintainability point of view; you will typically run into the same problems you would see with duplicated code. One change in a shared entity will require changes in every model that references it.

    3. Expose foreign keys as scalar properties, so you do not have to pull all the shared entities into every entity model. Not a good solution either: the shared entities fall outside the unit of work or context (ObjectContext) generated by the designer. As a result, you cannot execute queries that require joins between the two contexts. And unless you include some helper methods to retrieve the shared entities through the scalar properties, you also lose the possibility of having a rich domain model.

    As you can see, we could not find a workaround that fully met our expectations, so we might stick with solution #2 (duplicating the entities). Solution #1 would be perfect with some design-time support; I hope the ADO.NET team considers it for a future release.

    Do you have any thoughts or experiences to share? I would be more than happy to hear your feedback regarding possible solutions for this scenario.


  • Some thoughts on OpenID and OAuth for Desktop clients

    OpenID and OAuth are today excellent solutions for "Single Sign On" (SSO) and "Authorization Delegation" respectively. They are, however, based on HTTP redirects and are therefore tied to passive clients, commonly called web browsers.

    Google did some interesting research on this some time ago; it can be found here. After reading that article, it looks like they could not get rid of the browser at all :(.

    If that does not work for you, another solution could be the WS-Federation active profile.

    "SSO" is an inherent feature of WS-Federation, not doubt about it.

    "Authorization Delegation" can also be emulated with a combination of "SSO" and authorization claims. In this scenario, we always give our credentials to an identity provider we trust, there is no need to give away our credentials to any site or service involved in a transaction. The authorization claims also represent fine-granular permissions of what we are allowed to do on the service side, and again, they can provided by identity provider itself or a resource STS. I discussed this approach in my last post, "Addressing Authorization with OAuth or the .NET Access Control Service", the resource STS in this case would be the ACS service.


  • Addressing authorization with OAuth or the .NET Access Control Service


    As I mentioned in the post "OAuth Channel for REST services", OAuth allows a client application to obtain user consent (in the form of access tokens) to execute operations over private resources on the user's behalf. Resources in this context are anything belonging to the user that the service provider makes available through services; they could be, for instance, contacts, pictures or personal information, to name a few.

    The access token that the consumer gets from the service provider works, in some way, like an access control list that maps directly to the permissions granted by the user over his resources. For example, John grants a third-party service (the consumer) read/write access to his contacts on Windows Live (the service provider).

    A direct trust relationship must already exist between the consumer and the service provider to make all this happen; in OAuth that relationship takes the form of a request token. If the consumer cannot get a request token from the service provider, the user is never even redirected to the provider to negotiate the access token.

    The more service providers get involved in a scenario, the more access tokens the consumer will have to negotiate. In addition, the user must have registered with each one of those service providers before using the consumer application, unless he was lucky enough to have OpenID authentication in some of those services.

    In the image above, the user is authenticated by the service provider during the request token/access token exchange.

    .NET Access Control Service

    The Microsoft .NET Access Control Service was recently announced at PDC as part of the Windows Azure platform (it was formerly part of BizTalk Services). Today it is complemented by two other services, the Microsoft .NET Service Bus and the Microsoft .NET Workflow Service.

    In addition to being a valid Security Token Service (WS-Trust) that can participate in the identity metasystem, this service is a claims transformer. It was conceived with the idea of mapping input claims (usually identity claims) into authorization claims that represent an ACL for the service running on the relying party.

    The SAML token containing these output claims would be equivalent to the access token in OAuth. 

    As we saw with OAuth, a prior trust agreement must exist between the relying party and the .NET Access Control Service. In this case, an X.509 public key for the relying party must be registered with the ACS (for encrypting the SAML token), and the relying party must have an X.509 public key from the ACS for verifying the token signature. As part of this agreement, some claim-mapping rules must also be defined in the ACS configuration (the ACS will use the AppliesTo header in the WS-Trust RST message to determine which rules to apply).

    If we now analyze the scenario discussed before with OAuth, a single consumer and multiple service providers, the client always authenticates against the same identity provider, no matter how many service providers are involved, which is a pretty good thing. OAuth depends on OpenID to get the same effect.

    The picture below shows the complete scenario with a single relying party,




  • WS-Trust profiles and Cardspace

    The Geneva framework today supports the two WS-Trust profiles, active and passive.

    The active profile deals with applications that are able to make SOAP requests to any WS-Trust endpoint. On the other hand, the passive profile is for clients that are unable to emit proper SOAP (a web browser, for instance) and therefore receive the name of "passive requestors". The latter involves browser-based communication with several HTTP redirects between the different parties (client, STS and relying party).

    Cardspace embedded in a web browser page, however, is not a passive client. Once the user decides to authenticate to a website with an information card, the Cardspace identity selector negotiates and gets the issued token from the identity provider using the active profile. Finally, the identity selector passes the token to the browser using some inter-process communication, and the browser can later submit the token to the server using a standard HTTP mechanism like a form post.

    As you can see, Cardspace in a browser is actually a hybrid between active and passive. Vittorio has also discussed this scenario in the past; he called it "Passive-Aggressive".



  • Security Token Handlers in Geneva Framework

    According to the Geneva documentation,

    "SecurityTokenHandler defines an interface for plugging custom token handling functionality. Using the SecurityTokenHandler you can add functionality to serialize, de-serialize, authenticate and create and specific kind of token"

    I can see dead people ..... :)


    Haven't we seen this before? Oh, yes, I think we have: the token managers in WSE. They are pretty much the same thing. It looks like the Geneva team came up with a solution that worked well in the past with WSE: one token handler for each kind of token we want to consume in our application. If your app needs to consume a custom token or customize an existing one, just derive from the SecurityTokenHandler base class or one of the existing SecurityTokenHandler implementations and override some of its methods with custom functionality. For instance, the Geneva Framework now comes with two built-in token handlers for Username tokens, a MembershipUsernameSecurityTokenHandler for validating users against a membership provider and a WindowsUsernameSecurityTokenHandler for doing the same against a Windows account store.
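    As a rough sketch of what such an extension looks like (the class and member names below are based on the Geneva Beta bits and may differ in later releases; the validation logic is obviously a placeholder):

        using System.IdentityModel.Tokens;
        using Microsoft.IdentityModel.Claims;
        using Microsoft.IdentityModel.Tokens;

        public class CustomUserNameTokenHandler : UserNameSecurityTokenHandler
        {
            public override bool CanValidateToken
            {
                get { return true; }
            }

            public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
            {
                var userNameToken = (UserNameSecurityToken)token;

                // Validate userNameToken.UserName / userNameToken.Password
                // against a custom credential store here.

                var identity = new ClaimsIdentity(new Claim[]
                {
                    new Claim(ClaimTypes.Name, userNameToken.UserName)
                });

                return new ClaimsIdentityCollection(new IClaimsIdentity[] { identity });
            }
        }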

    Most of the code we had in the past as part of authorization policies (IAuthorizationPolicy) for mapping claims or validating tokens in a UsernamePasswordValidator or X509CertificateValidator has now moved to token handlers in the Geneva framework.

    I like this way of extending a custom handler to support new kinds of tokens; it is much more straightforward to me than the model currently supported by WCF.


  • Buenos Aires MSDN and Technet Briefing 2009

    I just got an email from Miguel Angel Saenz confirming the date of the next big Microsoft event in Buenos Aires, Argentina, the "MSDN Briefing", which will take place on March 25th.

    They are now accepting proposals or suggestions for possible sessions at the event, or even a specific name for the event itself :).

    If you are interested in participating, check out this website.


  • Some thoughts on Portable STS (P-STS) and Geneva Cardspace

    The other day a friend of mine asked me about portable STS implementations, and whether I knew of any available solution he could use at his company. That reminded me of a conversation I had about two years ago with another developer working on a custom .NET CLR framework version for portable devices (like smartcards). As part of that project, his team was also working on a TCP/IP communication stack for the device and an HTTP handler for accepting raw WS-Trust messages. One goal of that project was to have a P-STS that could interoperate with WCF. The idea seemed very promising at the time.

    So, what is a P-STS after all? In a few words, it is a service running on a portable device that exposes WS-Trust endpoints and can issue security tokens of any kind (e.g., SAML tokens).
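    To give an idea of what such a device has to accept on the wire, a minimal WS-Trust (February 2005) issue request looks roughly like this (the relying party address is illustrative):

        <wst:RequestSecurityToken xmlns:wst="http://schemas.xmlsoap.org/ws/2005/02/trust">
            <wst:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</wst:RequestType>
            <wst:TokenType>urn:oasis:names:tc:SAML:1.0:assertion</wst:TokenType>
            <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
                <wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
                    <wsa:Address>http://relyingparty.example.com/service</wsa:Address>
                </wsa:EndpointReference>
            </wsp:AppliesTo>
        </wst:RequestSecurityToken>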

    A quick Google search today turns up several P-STS products or solutions; some of them also claim to be interoperable with WCF and Microsoft Cardspace V1.

    In terms of identity management, a P-STS really makes a great difference compared to existing authentication mechanisms like username/password, X.509 certificates or any other kind of two-factor authentication device. Most of these mechanisms are widely accepted and used today in applications within corporate environments or applications that require off-line support. However, they sometimes lack true identity support: they do not represent the user's identity at all in the context of those applications, they are just a way of identifying returning users, or they are hard to extend with additional identity claims.

    I cannot deny that X.509 certificates have proven to be a very effective and secure way to authenticate users. In addition, X.509 certificates can be extended with some custom attributes; the space is limited, but at least the possibility exists. However, X.509 certificates are hard tokens: the claims stored in a certificate cannot be changed once it has been issued. Therefore, they are a good solution only as long as their information does not change frequently over time.

    Issued tokens (e.g., SAML tokens), on the other hand, are more dynamic and cheaper to create. They usually have a short expiration time and can be issued and used on the fly, but, more importantly, they can carry custom information or claims about the subject they have been issued for.

    Some good news is that the Geneva Cardspace team has also announced some support for roaming scenarios in Cardspace V2. There will be a way to store our information cards on a device (or somewhere in the cloud), which will be great to combine with a P-STS; no need to export/import the cards anymore. This scenario was not possible in Cardspace V1, and here is the explanation. According to what Rich Randall mentioned in the PDC talk "BB44 Identity: Windows CardSpace "Geneva" Under the Hood", the future Cardspace interface could look as follows,



    As you can see, it will not be long until we have complete and portable identity solutions for roaming scenarios.