Contents tagged with Geneva

  • Negotiating SAML tokens for REST clients with the HttpClient class

    Continuing my post “Brokered authentication for REST active clients”, today I will show how the client code can be simplified using the new HttpClient class (WCF REST Starter Kit 2) and some custom HTTP processing stages attached to its pipeline.

    The first thing we have to do is to implement a custom processing stage (a class that derives from HttpStage) to centralize all the logic needed to negotiate a SAML token from an existing STS.

    The pipeline basically contains two kinds of stages: regular HTTP stages, which can be injected through the HttpClient.Stages collection, and a more specialized implementation, HttpWebRequestTransportStage, which runs last in the pipeline and has access to all the transport settings. This last one can only be replaced by a custom version of the HttpClient that overrides the protected method “CreateTransportStage”:

    public class HttpClient : IDisposable
    {
        protected virtual HttpStage CreateTransportStage();
    }


    Having said this, there are two possible options for implementing the token negotiation in a pipeline stage:

    1. A regular http stage that can be initialized with the STS address and the user credentials through the class constructor or a property setter.

    2. A custom HttpWebRequestTransportStage and the corresponding HttpClient (FederatedHttpClient) implementation to return that stage.

    From my point of view, the second approach seems to work better because the HttpClient instance does not get tied to the user credentials. This is the approach I will use for this example.

    public class NegociateTokenStage : HttpWebRequestTransportStage
    {
        private string stsUri = "";

        public NegociateTokenStage(string stsUri) : base()
        {
            this.stsUri = stsUri;
        }

        protected override void ProcessRequestAndTryGetResponse(HttpRequestMessage request, out HttpResponseMessage response, out object state)
        {
            string token = GetToken(stsUri, request.Uri.AbsoluteUri, this.Settings.Credentials);
            request.Headers.Add("Authorization", token);
            base.ProcessRequestAndTryGetResponse(request, out response, out state);
        }
    }


    The custom transport stage derives from the built-in transport stage “HttpWebRequestTransportStage” and adds some custom code in ProcessRequestAndTryGetResponse to negotiate the SAML token from the STS before the final service gets called (this is done in the GetToken method). After that, the SAML token gets passed to the final service through the Authorization HTTP header.

    The custom implementation of the HttpClient class is quite simple; it only returns our custom transport stage from the CreateTransportStage method:

    public class FederatedHttpClient : HttpClient
    {
        public string StsUri
        {
            get; set;
        }

        protected override HttpStage CreateTransportStage()
        {
            NegociateTokenStage stage = new NegociateTokenStage(this.StsUri);
            stage.Settings = this.TransportSettings;
            return stage;
        }
    }

    Now the client application can use our custom version of the HttpClient to consume the final service; only a few lines are required:

    FederatedHttpClient client = new FederatedHttpClient { StsUri = "http://localhost:7481/STS/Service.svc/Tokens" };

    client.TransportSettings.Credentials = new NetworkCredential("cibrax", "foo");

    string response = client.Get("http://localhost:7397/RestServices/Service.svc/Claims").Content.ReadAsString();

    The SAML negotiation is totally transparent to the client application; it does not even know that a SAML token exists. Sweet :).

    The code is available to download at this location.

    UPDATE: As John Lambert from the WCF team pointed out, a custom transport stage also needs to override BeginProcessRequestAndTryGetResponse and EndProcessRequestAndTryGetResponse to support async scenarios. I will try to update the example to override these methods soon. Thanks, John, for the feedback!
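    In the meantime, a rough sketch of those async overrides could look like the code below. The method signatures are assumptions based on the synchronous counterpart and the method names John mentioned, so they may not match the Starter Kit bits exactly.

    ```csharp
    // Hypothetical sketch: async counterparts of the synchronous override shown
    // above. Signatures are assumed from the Starter Kit's HttpStage pattern.
    protected override IAsyncResult BeginProcessRequestAndTryGetResponse(
        HttpRequestMessage request, AsyncCallback callback, object state)
    {
        // Negotiate and attach the SAML token before starting the async request,
        // exactly as the synchronous version does.
        string token = GetToken(stsUri, request.Uri.AbsoluteUri, this.Settings.Credentials);
        request.Headers.Add("Authorization", token);
        return base.BeginProcessRequestAndTryGetResponse(request, callback, state);
    }

    protected override void EndProcessRequestAndTryGetResponse(
        IAsyncResult result, out HttpResponseMessage response)
    {
        base.EndProcessRequestAndTryGetResponse(result, out response);
    }
    ```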


  • Brokered authentication for REST active clients with SAML

    I have been thinking for a while about what could be a good way to support brokered authentication for active REST clients. Something I did not want to do was force the use of the WS-Trust active profile, which is in essence SOAP based.

    Some of the quality attributes that are easy to achieve with REST services, such as simplicity, interoperability and scalability, can definitely be affected by the introduction of an additional SOAP stack for negotiating an identity token. The WS-Trust passive requestor profile, on the other hand, was designed for dumb clients like web browsers, clients that do not have the capabilities to handle cryptographic material or the SOAP stack itself. This profile basically hides most of the WS-Trust details from client applications through a sequence of HTTP redirections, which could be helpful in this scenario for negotiating a token while still keeping the REST clients simple. However, as some user interaction is required, this profile is not suitable for consuming REST services from desktop applications or other active client applications.

    If we take a deep look at the functionality provided by a Security Token Service (STS), it is no more than a service that handles the lifecycle of an identity token: it knows how to issue a token, renew it, and finally cancel it when it is no longer needed. If we look at all these scenarios from a REST point of view, an identity token is just a resource, something that can be created, updated or even deleted. Of course, there is no spec available yet for this scenario; all I will show here is just a possible implementation of a RESTful STS.

    The mapping of supported WS-Trust actions to HTTP verbs for my RESTful STS is defined below:

    • Issue = POST, creates or issues a new token resource (a SAML token)
    • Renew = PUT, renews an existing token
    • Cancel = DELETE, cancels an existing token
    • GET, gets an existing token (there is no such thing in WS-Trust)

    I left the "Validate" action out of this implementation.

    What I have created for this example is a REST facade layered on top of an STS implementation built with the Geneva Framework. The service contract definition for a RESTful STS supporting that mapping should look like this:


    public interface IRestSts
    {
        [WebInvoke(UriTemplate="Tokens", Method="POST", RequestFormat=WebMessageFormat.Xml, ResponseFormat=WebMessageFormat.Xml)]
        RequestSecurityTokenResponse IssueToken(RequestSecurityToken request);

        [WebInvoke(Method = "PUT", UriTemplate = "Tokens/{tokenId}", RequestFormat = WebMessageFormat.Xml, ResponseFormat = WebMessageFormat.Xml)]
        RequestSecurityTokenResponse RenewToken(string tokenId);

        [WebInvoke(Method = "DELETE", UriTemplate = "Tokens/{tokenId}", RequestFormat = WebMessageFormat.Xml, ResponseFormat = WebMessageFormat.Xml)]
        void CancelToken(string tokenId);

        [WebGet(UriTemplate = "Tokens/{tokenId}", RequestFormat = WebMessageFormat.Xml, ResponseFormat = WebMessageFormat.Xml)]
        RequestSecurityTokenResponse GetToken(string tokenId);
    }


    As I mentioned before, the client first has to acquire a token from the STS; that can be done with a regular HTTP POST containing a RequestSecurityToken message.


    The message embedded in the request body to the STS looks like this,

    <RequestSecurityToken xmlns="">

    And the corresponding response like this,

    <RequestSecurityTokenResponse xmlns="" xmlns:i="">

    Both calls, the first one to get the token from the STS and the second one to invoke the service in the relying party, should be protected with transport security to avoid any man-in-the-middle attack.
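    For instance, enabling transport security for the webHttpBinding used by these endpoints is just a matter of configuration; a sketch could look like this (the binding name below is hypothetical):

    ```xml
    <webHttpBinding>
      <binding name="secureRest">
        <!-- HTTPS on the channel; the STS side also uses Basic authentication -->
        <security mode="Transport">
          <transport clientCredentialType="Basic" />
        </security>
      </binding>
    </webHttpBinding>
    ```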

    In this sample, the STS is using basic authentication to authenticate the user trying to get access to the token. If the authentication succeeds, the STS implemented with Geneva will provide the necessary claims associated with that user.

    The code on the client side to ask for a new token is quite simple,

    static string GetToken(string address, string appliesTo, string username, string password)
    {
        RequestSecurityToken request = new RequestSecurityToken
        {
            TokenType = "",
            AppliesTo = appliesTo
        };

        DataContractSerializer requestSerializer = new DataContractSerializer(typeof(RequestSecurityToken));

        WebRequest webRequest = HttpWebRequest.Create(address);
        webRequest.Method = "POST";
        webRequest.ContentType = "application/xml";
        webRequest.Credentials = new NetworkCredential(username, password);

        using (var st = webRequest.GetRequestStream())
        {
            requestSerializer.WriteObject(st, request);
        }

        WebResponse webResponse = webRequest.GetResponse();
        DataContractSerializer responseSerializer = new DataContractSerializer(typeof(RequestSecurityTokenResponse));

        using (var st = webResponse.GetResponseStream())
        {
            var response = (RequestSecurityTokenResponse)responseSerializer.ReadObject(st);
            return response.RequestedSecurityToken;
        }
    }



    It creates a new RequestSecurityToken message, provides the user credentials and posts that information to the STS. The response from the STS is a RequestSecurityTokenResponse containing the issued token; that is what this method returns in response.RequestedSecurityToken.

    Once the client gets the issued token from the response, it can include it as part of the request message to the relying party's service. For this sample, I decided to include the token in the "Authorization" header, which is a common mechanism for attaching authentication credentials to a request message for a REST service (Basic authentication and other authentication mechanisms use the same approach).

    WebRequest webRequest = HttpWebRequest.Create(address);

    webRequest.Method = "GET";

    webRequest.Headers["Authorization"] = token;

    Now, the hard part: the relying party needs a way to parse the token and authenticate the user before calling the service implementation. Fortunately, the guys from the WCF REST Starter Kit have provided an excellent solution for this kind of scenario, message interceptors. What I did here was implement a message interceptor for SAML tokens, which internally uses the Geneva Framework to perform all the validations and parse the token. An easy way to inject message interceptors into a service implementation is through a custom service factory (zero-config deployment):

    class AppServiceHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            WebServiceHost2 result = new WebServiceHost2(serviceType, true, baseAddresses);
            result.Interceptors.Add(new MessageInterceptors.SamlAuthenticationInterceptor(new TrustedIssuerNameRegistry()));
            return result;
        }
    }



    The "TrustedIssuerNameRegistry" is just a simple implementation of a Geneva "IssuerNameRegistry" provider that validates the issuer of the SAML token.
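    A minimal sketch of such a registry could look like this; the certificate subject name below is hypothetical, the real sample validates its own STS certificate.

    ```csharp
    // Hypothetical sketch of an IssuerNameRegistry that only trusts one STS
    // signing certificate. "CN=STSTestCert" is a made-up subject name.
    public class TrustedIssuerNameRegistry : IssuerNameRegistry
    {
        public override string GetIssuerName(SecurityToken securityToken)
        {
            X509SecurityToken x509Token = securityToken as X509SecurityToken;
            if (x509Token != null &&
                x509Token.Certificate.SubjectName.Name.Contains("CN=STSTestCert"))
            {
                // The returned issuer name ends up in the Issuer property of the claims.
                return x509Token.Certificate.SubjectName.Name;
            }

            throw new SecurityTokenException("Untrusted issuer.");
        }
    }
    ```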

    All this stuff is of course transparent to the service implementation; it only receives a bunch of claims representing the user identity. Those claims can be accessed through the current user principal. In the code below, the service generates a feed with all the received claims.

    IClaimsIdentity identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;

    var feed = new SyndicationFeed()
    {
        Id = "http://Claims",
        Title = new TextSyndicationContent("My claims"),
    };

    feed.Items = identity.Claims.Select(c =>
        new SyndicationItem()
        {
            Id = Guid.NewGuid().ToString(),
            Title = new TextSyndicationContent(c.ClaimType),
            LastUpdatedTime = DateTime.UtcNow,
            Authors =
            {
                new SyndicationPerson()
                {
                    Name = c.Issuer
                }
            },
            Content = new TextSyndicationContent(c.Value)
        });



    The complete sample is available to download from here. Note: it uses the latest Geneva Framework bits (and also the X509 certificates included with the samples; just run the certificate setup file included with the framework).


  • Carrying sensitive information in SAML assertions

    When SAML is used in conjunction with WS-Security, only a small piece of the token is encrypted: the proof key for the relying party. The rest of the token goes in plain text, and that also includes the user's claims.


    <saml:Assertion ...>
      <saml:Conditions NotBefore="2009-02-24T19:48:20.500Z" NotOnOrAfter="2009-02-24T19:53:20.500Z"></saml:Conditions>
      ...
      <KeyInfo xmlns="">...</KeyInfo> <!-- Encrypted proof key -->
      ...
      <saml:Attribute AttributeName="displayName" AttributeNamespace="">
          <saml:AttributeValue>John Foo</saml:AttributeValue> <!-- Attribute value in plain text -->
      </saml:Attribute>
      ...
      <Signature xmlns="">...</Signature>
    </saml:Assertion>


    Knowing this, you should never include sensitive information as claims in a SAML token. This is also related to identity law #2, "Minimal Disclosure for a Constrained Use": the identity provider should only disclose the least amount of identifying information needed to execute the operation on the relying party.

    Some examples are,

    • A winery only needs to know whether the customer is of legal age for buying alcohol according to the law; a claim like "over21" should be enough for that purpose, and there is no need to know the customer's birth date at all.
    • An online store that sells products does not necessarily need to know the number of every credit card owned by a customer; a friendly name representing the card, and optionally the available balance, could be enough for completing a purchase.

    SAML 2.0 introduces the concept of an "encrypted attribute", whose name clearly states its purpose: encrypting individual attributes in a SAML token. In this way, a token can now carry the encrypted proof key and optionally one or more encrypted attributes with sensitive information.
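    In the SAML 2.0 schema, an encrypted attribute simply replaces the plain saml:Attribute element with an EncryptedAttribute wrapper containing standard XML Encryption elements. The snippet below is a schematic sketch, not output from a real token:

    ```xml
    <saml:AttributeStatement>
      <saml:EncryptedAttribute>
        <xenc:EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element">...</xenc:EncryptedData>
        <xenc:EncryptedKey>...</xenc:EncryptedKey>
      </saml:EncryptedAttribute>
    </saml:AttributeStatement>
    ```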

    You can take a look at this page for more information about the differences between SAML 1.1 and 2.0.

    Geneva Framework Beta 1 already implements a subset of SAML 2.0; however, it looks like this feature has been left out of the current release. I am not sure either whether it will be included as part of the final release (last quarter of 2009). I created a post in the forums some time ago, but I haven't received any feedback yet.


  • WS-TRUST profiles and Cardspace

    The Geneva Framework today supports the two WS-Trust profiles, active and passive.

    The active profile deals specifically with applications that are able to make SOAP requests to any WS-Trust endpoint. On the other hand, the passive profile is for clients that are unable to emit proper SOAP (a web browser, for instance) and therefore receive the name of "passive requestors". The latter involves browser-based communication with several HTTP redirects between the different parties (client, STS and relying party).

    Cardspace embedded in a web browser page, however, is not a passive client. Once the user decides to be authenticated on a website with an information card, the Cardspace identity selector will negotiate and get the issued token from the identity provider using the active profile. Finally, the token is passed to the browser using some inter-process communication, and the browser can later submit the token to the server using a standard HTTP mechanism like a web post.

    As you can see, Cardspace in a browser is actually a hybrid between active and passive. Vittorio has also discussed this scenario in the past; he called it "Passive-Aggressive".



  • Security Token Handlers in Geneva Framework

    According to the Geneva documentation,

    "SecurityTokenHandler defines an interface for plugging in custom token handling functionality. Using the SecurityTokenHandler you can add functionality to serialize, de-serialize, authenticate and create any specific kind of token"

    I can see dead people ..... :)


    Haven't we seen this before? Oh yes, I think we have: the token managers in WSE. They are pretty much the same thing. It looks like the Geneva team came up with a solution that worked well in the past with WSE: one token manager for each kind of token we want to consume in our application. If your app needs to consume a custom token or customize an existing one, just derive from the SecurityTokenHandler base class or one of the existing SecurityTokenHandler implementations and override some of its methods with custom functionality. For instance, the Geneva Framework now comes with two built-in token handlers for username tokens: a MembershipUsernameSecurityTokenHandler for validating users against a membership provider, and a WindowsUsernameSecurityTokenHandler for doing the same against a Windows account store.

    Most of the code we had in the past in authorization policies (IAuthorizationPolicy) for mapping claims, or in a UsernamePasswordValidator or X509CertificateValidator for validating tokens, has now moved to token handlers in the Geneva Framework.
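    As a sketch of the model, a custom username handler could look roughly like this. The type and member names follow the Geneva Beta bits as I understand them, so treat them as assumptions, and the credential check is obviously fake:

    ```csharp
    // Rough sketch of a custom username token handler for the Geneva Framework.
    public class CustomUsernameSecurityTokenHandler : UserNameSecurityTokenHandler
    {
        public override bool CanValidateToken
        {
            get { return true; }
        }

        public override ClaimsIdentityCollection ValidateToken(SecurityToken token)
        {
            UserNameSecurityToken usernameToken = (UserNameSecurityToken)token;

            // Fake check: replace with a call to your real user store.
            if (usernameToken.Password != "foo")
                throw new SecurityTokenValidationException("Invalid credentials.");

            IClaimsIdentity identity = new ClaimsIdentity();
            identity.Claims.Add(new Claim(ClaimTypes.Name, usernameToken.UserName));
            return new ClaimsIdentityCollection(new IClaimsIdentity[] { identity });
        }
    }
    ```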

    I like this way of extending a custom handler to support new kinds of tokens; it is much more straightforward to me than the model currently supported by WCF.


  • Some thoughts on Portable STS (P-STS) and Geneva Cardspace

    The other day a friend of mine asked me about portable STS implementations, and whether I knew about any available solution that he could use at his company. That reminded me of a conversation I had about two years ago with another developer working on a custom .NET CLR framework version for portable devices (like smartcards). As part of that project, his team was also working on a TCP/IP communication stack for the device, and an HTTP handler for accepting raw WS-Trust messages. One goal of that project was to have a P-STS that could interoperate with WCF. The idea seemed very promising at the time.

    So, what is a P-STS after all? In a few words, it is a service running on a portable device that exposes WS-Trust endpoints and can issue security tokens of any kind (e.g., SAML tokens).

    A Google search today will turn up several P-STS products or solutions; some of them also claim to be interoperable with WCF and Microsoft Cardspace V1.

    In terms of identity management, a P-STS really makes a great difference over existing authentication mechanisms like username/password, X509 certificates or any other kind of two-factor authentication device. Most of these authentication mechanisms are widely accepted and used today in applications within corporate environments, or in applications that require offline support. However, they sometimes lack true identity support, meaning that they do not represent the user identity at all in the context of those applications; they are just a way of identifying returning users, or they are hard to extend with additional identity claims about the user.

    I cannot deny that X509 certificates have proved to be a very effective and secure way to authenticate users. In addition, X509 certificates can be extended with some custom attributes; the space is limited, but at least there is a possibility. However, X509 certificates represent hard tokens: the claims stored on a certificate cannot be changed once it has been issued. Therefore, they are a good solution as long as their information does not change frequently over time.

    Issued tokens (e.g., SAML tokens), on the other hand, are more dynamic and cheaper to create. They usually have a short expiration time and can be issued and used on the fly, but what is more important, they can carry custom information or claims about the subject they have been issued for.

    Some good news is that the Geneva Cardspace team has also announced some support for roaming scenarios in Cardspace V2. There will be a way to store our identity cards on a device (or somewhere in the cloud), which will be great to combine with a P-STS: no need to export/import the cards anymore. This scenario was not possible in Cardspace V1. According to what Rich Randall mentioned in the PDC talk "BB44 Identity: Windows CardSpace "Geneva" Under the Hood", the future Cardspace interface could look as follows,



    As you can see, it will not be long until we have complete and portable identity solutions for roaming scenarios.


  • Claims negotiation between a consumer, STS and Relying Party in WCF

    According to the WS-Trust specification, a service consumer has a way to negotiate or ask the STS for specific claims. Those claims (or some of them) will generally be used by the service implementation running in the relying party.

    They are negotiated through a "Claims" element in the RST message:

    <wst:RequestSecurityToken xmlns:wst="...">
        ...
        <wst:Claims Dialect="...">...</wst:Claims>
        ...
    </wst:RequestSecurityToken>

    The "wst:Claims" element is optional and is used for requesting a specific set of claims. Typically, this element contains the required and/or optional claim information identified in a service's policy.

    Based on these facts, we can elaborate some possible scenarios for claims negotiation between these three parties.

    1. No negotiation at all

    The STS might just ignore the claims requirements in the RST message and always return a fixed claim set according to the consumer identity, or the service might not express what claims it expects at all. This scenario might be suitable for a local STS in small or medium-sized organizations, where the IT department has complete control over the client applications and services that interact with that STS. This kind of solution is easy to implement but also quite rigid: a change in the claims required by the service will also require changes in the STS implementation. As you can see, this solution does not scale at all for a high number of applications or relying party services.

    Many of the STS examples you will find today are implemented like this.

    2. Negotiation based on the AppliesTo header.

    This solution presents a subtle difference from the one discussed before: the claims vary according to the relying party that will make use of them. The STS ignores the claims requirements in the RST messages and returns a claim set based on the received AppliesTo header. An agreement must exist between the STS and the relying party, which includes, in addition to the key for encrypting the tokens, a number of expected claims. Again, easy to implement, difficult to scale up.

    3. Manual negotiation based on the "Claims" header.

    In this scenario, the consumer sends the expected claims in the "Claims" header and the STS makes use of them when generating the resulting token. However, the negotiation of those claims between the consumer and the relying party is manual; a previous agreement must exist, and the service does not express those requirements through metadata. This means that the claims are hard-coded in the client configuration during development. If the service requires additional claims, only the client configuration has to be changed; the STS does not have to be touched at all.

    If you are implementing a custom STS with the latest Microsoft Geneva bits, there is a "Claims" property on the RequestSecurityToken for getting access to these values:

    protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
    {
        IClaimsIdentity outputIdentity = new ClaimsIdentity();

        foreach (Claim claim in request.Claims)
        {
            // Do something...
        }

        return outputIdentity;
    }


    The client can specify those claims through configuration as well:


      <binding name="ServiceBinding">
        <security mode="Message">
          <message issuedTokenType="" negotiateServiceCredential="false">
            <claimTypeRequirements>
              <add claimType ="" />
              <add claimType ="" />
              <add claimType ="" isOptional ="true" />
            </claimTypeRequirements>
          </message>
        </security>
      </binding>

    Once they are added to the binding configuration, WCF will automatically include them as part of the RST message sent to the STS.

    4. Automatic negotiation based on the "Claims" header.

    This is by far the best solution we can find. The three parties automatically negotiate the claims at runtime:

    I. The service exposes the claim requirements through metadata (WS-Policy)

    II. The client acquires the service's policy and requirements using some mechanism such as WS-MetadataExchange. Later, the client includes those claim requirements in the RST message that will be sent to the STS.

    III. The STS extracts those requirements from the RST message and then makes use of them to generate the resulting token.

    The Cardspace identity selector on the consumer side works like this: it first detects what claims are needed by the relying party, and then displays to the user all the possible cards (from different identity providers) that satisfy those requirements.

    Exposing the claim requirements on the relying party through WCF is equivalent to doing it on the client side (same binding configuration):


      <binding name="ServiceBinding">
        <security mode="Message">
          <message issuedTokenType="">
            <claimTypeRequirements>
              <add claimType ="" />
              <add claimType ="" />
              <add claimType ="" isOptional ="true" />
            </claimTypeRequirements>
          </message>
        </security>
      </binding>