Contents tagged with Federation

  • Centralizing Federated Services configuration with SO-Aware

    Configuring a WCF service to use federated authentication in an organization is not trivial: it requires good knowledge of the available security settings and, more precisely, of how to talk to the existing security token services with the right WCF bindings.

    This is something that usually only a few people in the organization know how to do right, so having a way to centralize all this configuration in one location and let the rest of the developers reuse it becomes really important.

    SO-Aware plays an important role in that sense, allowing the security experts to configure and store the bindings and behaviors that the organization will use to secure the services in the service repository.

    Developers can later reference, reuse and configure their services and client applications with those bindings from the repository, using either a simple OData API or the WCF-specific classes that SO-Aware also provides for configuring services and proxies.
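    For example, a client could inspect the stored configuration directly over the repository's OData endpoint. The sketch below is illustrative only: the "Bindings" entity set and "Name" property are assumptions, not the actual SO-Aware resource model; only the ServiceRepository.svc address comes from the samples in this post.

```csharp
using System;
using System.IO;
using System.Net;

// Sketch: querying the SO-Aware repository's OData API for a stored binding.
// The "Bindings" entity set and "Name" filter property are assumptions for
// illustration; the real SO-Aware schema may differ.
class RepositoryQuery
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://localhost/SoAware/ServiceRepository.svc/Bindings?$filter=Name eq 'echoClaimsBinding'");
        request.Accept = "application/atom+xml";

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The Atom feed entry carries the binding definition.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```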

    A WCF binding for configuring a service with federated authentication usually looks as follows:

    <customBinding>
      <binding name="echoClaimsBinding">
        <security authenticationMode="IssuedToken"
                  messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10"
                  requireSecurityContextCancellation="false">
          <issuedTokenParameters tokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
            <claimTypeRequirements>
              <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name" isOptional="false"/>
              <add claimType="http://SOAwareSamples/2008/05/AgeClaim" isOptional="false"/>
            </claimTypeRequirements>
            <issuer address="http://localhost:6000/SOAwareSTS"
                    bindingConfiguration="stsBinding"
                    binding="ws2007HttpBinding">
              <identity>
                <dns value="WCFSTS"/>
              </identity>
            </issuer>
            <issuerMetadata address="http://localhost:6000/mex"></issuerMetadata>
          </issuedTokenParameters>
        </security>
        <httpTransport/>
      </binding>
    </customBinding>

    You basically have there the information WCF needs to connect to the STS (or token issuer), and the claims that the service expects. This binding also references another existing binding, "stsBinding", which the client will use to connect to and secure the communication with the STS. If you want to store the same thing in SO-Aware, you need a way to configure a binding so that it can reference existing bindings. That can be done using the "Parent" property, as you can see in the image below.

    [Image: parentBinding]

    Once the binding is stored and correctly configured in the repository, it's just a matter of using the SO-Aware service host to configure existing services with that binding.

    [ServiceContract()]
    public interface IEchoClaims
    {
        [OperationContract]
        List<string> Echo();
    }

    public class EchoClaims : IEchoClaims
    {
        public List<string> Echo()
        {
            List<string> claims = new List<string>();

            IClaimsPrincipal principal = Thread.CurrentPrincipal as IClaimsPrincipal;

            foreach (IClaimsIdentity identity in principal.Identities)
            {
                foreach (Claim claim in identity.Claims)
                {
                    claims.Add(string.Format("{0} - {1}",
                        claim.ClaimType, claim.Value));
                }
            }

            return claims;
        }
    }

    <serviceRepository url="http://localhost/SoAware/ServiceRepository.svc">
      <services>
        <service name="ref:EchoClaims(1.0)@dev" type="SOAware.Samples.EchoClaims, Service"/>
      </services>
    </serviceRepository>

    As you can see, the configuration is very straightforward. The developer configuring the service does not need to know anything about WCF bindings or federated security; they only need to reference an existing service configuration in the repository. This assumes the service was already configured in the portal or through the OData API.
     
    [Image: ServiceConfig]
     
    The same thing happens on the client side: no configuration is needed at all. The developer can use the "ConfigurableProxyFactory" and "ConfigurationResolver" classes that SO-Aware provides to automatically discover and resolve the whole service configuration (service address, bindings and behaviors). In fact, the developer does not need to know where the STS is, which binding it uses, or which certificates secure the communication. All of that is stored in the repository and automatically resolved by the SO-Aware configuration classes.
     
    static void ExecuteServiceWithMetadataResolution()
    {
        ConfigurableProxyFactory<IEchoClaims> factory = new ConfigurableProxyFactory<IEchoClaims>(
            ServiceUri,
            "EchoClaims(1.0)",
            "dev");

        // The resolver pulls the endpoint behaviors from the repository.
        // (The original snippet used "resolver" without declaring it; the
        // construction below is inferred from the surrounding text.)
        ConfigurationResolver resolver = new ConfigurationResolver(ServiceUri);

        var endpointBehaviors = resolver.ResolveEndpointBehavior("echoClaimsEndpointBehavior");
        foreach (var endpointBehavior in endpointBehaviors.Behaviors)
        {
            if (factory.Endpoint.Behaviors.Contains(endpointBehavior.GetType()))
            {
                factory.Endpoint.Behaviors.Remove(endpointBehavior.GetType());
            }

            factory.Endpoint.Behaviors.Add(endpointBehavior);
        }

        factory.Credentials.UserName.UserName = "joe";
        factory.Credentials.UserName.Password = "bar";

        IEchoClaims client = factory.CreateProxy();

        try
        {
            List<string> claims = client.Echo();

            foreach (string claim in claims)
            {
                Console.WriteLine(claim);
            }

            ((IContextChannel)client).Close();
        }
        catch (TimeoutException exception)
        {
            Console.WriteLine("Got {0}", exception.ToString());
            ((IContextChannel)client).Abort();
        }
        catch (CommunicationException exception)
        {
            Console.WriteLine("Got {0}", exception.ToString());
            ((IContextChannel)client).Abort();
        }
    }

    In addition, since the Security Token Service could also be implemented with WCF and WIF, you can resolve the configuration for that service from the repository as well, reusing the "stsBinding" from the example above (WSTrustServiceContract is one of the service contracts that WIF provides for implementing an STS).
     
    <serviceRepository url="http://localhost/SoAware/ServiceRepository.svc">
      <services>
        <service name="ref:STS(1.0)@dev"
                 type="Microsoft.IdentityModel.Protocols.WSTrust.WSTrustServiceContract,
                       Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral,
                       PublicKeyToken=31bf3856ad364e35" />
      </services>
    </serviceRepository>

    Read more...

  • ActAs in WS-Trust 1.4

    WS-Trust 1.4 introduced a new feature called "ActAs" for addressing common scenarios where an application needs to call a service on behalf of the logged-on user, or a service needs to call another service on behalf of the original caller. These are typical examples of what is usually solved with the "Trusted Subsystem" pattern.

    Read more...

  • Brokered authentication for REST active clients with SAML

    I have been thinking for a while about what could be a good way to support brokered authentication for active REST clients. Something I did not want to do was force the use of the WS-Trust active profile, which is in essence SOAP based.

    Some of the quality attributes that are easy to achieve with REST services, such as simplicity, interoperability and scalability, can definitely be affected by the introduction of an additional SOAP stack for negotiating an identity token. The WS-Trust passive requestor profile, on the other hand, was designed for dumb clients like web browsers: clients that do not have the capability to handle cryptographic material or the SOAP stack itself. This profile basically hides most of the WS-Trust details from client applications through a sequence of HTTP redirections, which could help negotiate a token in this scenario while keeping the REST clients simple. However, as some user interaction is required, this profile is not suitable for consuming REST services from desktop applications or other active client applications.

    If we take a close look at the functionality provided by a Security Token Service (STS), it is no more than a service that handles the lifecycle of an identity token: it knows how to issue a token, renew it, and finally cancel it when it is no longer needed. If we look at these scenarios from a REST point of view, an identity token is just a resource, something that can be created, updated or even deleted. Of course, there is no spec available yet for this scenario; all I will show here is just a possible implementation of a RESTful STS.

    The mapping of supported WS-Trust actions to HTTP verbs for my RESTful STS is defined below:

    • Issue = POST, creates or issues a new token resource (a SAML token)
    • Renew = PUT, renews an existing token
    • Cancel = DELETE, cancels an existing token
    • GET, gets an existing token (there is no such thing in WS-Trust)

    I left the "Validate" action out of this implementation.

    What I have created for this example is a REST facade layered on top of an STS implementation built with the Geneva Framework. The definition of the service contract for this RESTful STS, supporting that mapping, looks like this:

    [ServiceContract]
    public interface IRestSts
    {
        [OperationContract]
        [WebInvoke(UriTemplate="Tokens", Method="POST", RequestFormat=WebMessageFormat.Xml, ResponseFormat=WebMessageFormat.Xml)]
        RequestSecurityTokenResponse IssueToken(RequestSecurityToken request);

        [OperationContract]
        [WebInvoke(Method = "PUT", UriTemplate = "Tokens/{tokenId}", RequestFormat = WebMessageFormat.Xml, ResponseFormat = WebMessageFormat.Xml)]
        RequestSecurityTokenResponse RenewToken(string tokenId);

        [OperationContract]
        [WebInvoke(Method = "DELETE", UriTemplate = "Tokens/{tokenId}", RequestFormat = WebMessageFormat.Xml, ResponseFormat = WebMessageFormat.Xml)]
        void CancelToken(string tokenId);

        [OperationContract]
        [WebGet(UriTemplate = "Tokens/{tokenId}", RequestFormat = WebMessageFormat.Xml, ResponseFormat = WebMessageFormat.Xml)]
        RequestSecurityTokenResponse GetToken(string tokenId);
    }

    As I mentioned before, the client first has to acquire a token from the STS; that can be done with a regular HTTP POST containing a RequestSecurityToken message.

    [Image: Issue_REST]

    The message embedded in the request body to the STS looks like this,

    <RequestSecurityToken xmlns="http://schemas.xmlsoap.org/ws/2005/02/trust">
        <AppliesTo>https://localhost/MyService</AppliesTo>
        <TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1</TokenType>
    </RequestSecurityToken>

    And the corresponding response like this,

    <RequestSecurityTokenResponse xmlns="http://schemas.xmlsoap.org/ws/2005/02/trust" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
        <Links>
            <Link>
                <href>http://localhost:7362/STSWindows/Service.svc/_8a6fc87b-7e6a-45c9-a479-20ea42113e40</href>
                <rel>self</rel>
                <type>application/xml</type>
            </Link>
        </Links>
        <RequestedSecurityToken>....</RequestedSecurityToken>
        <TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1</TokenType>
    </RequestSecurityTokenResponse>
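    Following the verb mapping above, the "self" link returned in the response is all a client needs to renew or cancel the token later. A cancel, for instance, is just an HTTP DELETE against that address. This is a sketch; the credential handling is an assumption, not part of the original sample:

```csharp
using System.Net;

// Sketch: cancelling an issued token through the RESTful STS facade.
// Per the mapping above, a DELETE on the token's "self" link corresponds
// to the WS-Trust Cancel action.
static void CancelToken(string tokenAddress, string username, string password)
{
    var webRequest = (HttpWebRequest)WebRequest.Create(tokenAddress);
    webRequest.Method = "DELETE";
    webRequest.Credentials = new NetworkCredential(username, password);

    using (webRequest.GetResponse())
    {
        // A successful response means the STS removed the token resource.
    }
}
```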

    Both calls, the first to get the token from the STS and the second to invoke the service in the relying party, should be protected with transport security to avoid man-in-the-middle attacks.

    In this sample, the STS uses basic authentication to authenticate the user trying to get access to the token. If authentication succeeds, the STS implemented with Geneva will provide the necessary claims associated with that user.

    The code on the client side to ask for a new token is quite simple:

    static string GetToken(string address, string appliesTo, string username, string password)
    {
        RequestSecurityToken request = new RequestSecurityToken
        {
            TokenType = "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1",
            AppliesTo = appliesTo
        };

        DataContractSerializer requestSerializer = new DataContractSerializer(typeof(RequestSecurityToken));

        WebRequest webRequest = HttpWebRequest.Create(address);
        webRequest.Method = "POST";
        webRequest.ContentType = "application/xml";
        webRequest.Credentials = new NetworkCredential(username, password);

        using (var st = webRequest.GetRequestStream())
        {
            requestSerializer.WriteObject(st, request);
            st.Flush();
        }

        WebResponse webResponse = webRequest.GetResponse();

        DataContractSerializer responseSerializer = new DataContractSerializer(typeof(RequestSecurityTokenResponse));

        using (var st = webResponse.GetResponseStream())
        {
            var response = (RequestSecurityTokenResponse)responseSerializer.ReadObject(st);
            return response.RequestedSecurityToken;
        }
    }

    It creates a new RequestSecurityToken message, provides the user credentials and posts that information to the STS. The response from the STS is a RequestSecurityTokenResponse containing the issued token, which is what this method returns in response.RequestedSecurityToken.

    Once the client gets the issued token from the response, it can include it as part of the request message to the relying party's service. For this sample, I decided to include the token in the "Authorization" header, which is a common mechanism to attach authentication credentials in a request message to a REST service (Basic authentication, and other authentication mechanisms use the same approach).

    WebRequest webRequest = HttpWebRequest.Create(address);
    webRequest.Method = "GET";
    webRequest.Headers["Authorization"] = token;

    Now, the hard part: the relying party needs a way to parse the token and authenticate the user before calling the service implementation. Fortunately, the guys from the WCF REST Starter Kit have provided an excellent solution for this kind of scenario, message interceptors. What I did here was implement a message interceptor for SAML tokens, which internally uses the Geneva Framework to perform all the validations and parse the token. An easy way to inject message interceptors into a service implementation is through a custom service factory (zero-config deployment):

    class AppServiceHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            WebServiceHost2 result = new WebServiceHost2(serviceType, true, baseAddresses);
            result.Interceptors.Add(new MessageInterceptors.SamlAuthenticationInterceptor(new TrustedIssuerNameRegistry()));
            return result;
        }
    }

    The "TrustedIssuerNameRegistry" is just a simple implementation of a Geneva "IssuerNameRegistry" provider that validates the issuer of the SAML token.
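    A minimal version of such a registry might look like the sketch below. The certificate subject it trusts is a placeholder, and the actual sample in the download may validate the issuer differently:

```csharp
using System.IdentityModel.Tokens;
using Microsoft.IdentityModel.Tokens;

// Sketch of a trivial IssuerNameRegistry: accept only tokens signed by one
// known certificate, identified here by a placeholder subject name.
public class TrustedIssuerNameRegistry : IssuerNameRegistry
{
    public override string GetIssuerName(SecurityToken securityToken)
    {
        var x509Token = securityToken as X509SecurityToken;
        if (x509Token != null &&
            x509Token.Certificate.SubjectName.Name == "CN=MySampleSTS") // placeholder subject
        {
            // The returned name becomes the Issuer of the resulting claims.
            return x509Token.Certificate.SubjectName.Name;
        }

        throw new SecurityTokenException("Untrusted issuer.");
    }
}
```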

    All this is of course transparent to the service implementation; it only receives a set of claims representing the user identity. Those claims can be accessed through the current user principal. In the code below, the service generates a feed with all the received claims.

    IClaimsIdentity identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;

    var feed = new SyndicationFeed()
    {
        Id = "http://Claims",
        Title = new TextSyndicationContent("My claims"),
    };

    feed.Items = identity.Claims.Select(c =>
        new SyndicationItem()
        {
            Id = Guid.NewGuid().ToString(),
            Title = new TextSyndicationContent(c.ClaimType),
            LastUpdatedTime = DateTime.UtcNow,
            Authors =
            {
                new SyndicationPerson()
                {
                    Name = c.Issuer
                }
            },
            Content = new TextSyndicationContent(c.Value)
        }
    );

    The complete sample is available to download from here. Note: it uses the latest Geneva Framework bits (and also the X509 certificates included with the samples; just run the certificate setup file included with the framework).

    Read more...

  • WS-TRUST profiles and Cardspace

    The Geneva Framework today supports the two WS-Trust profiles, active and passive.

    The active profile deals specifically with applications that are able to make SOAP requests to any WS-Trust endpoint. On the other hand, the passive profile is for clients that are unable to emit proper SOAP (a web browser, for instance) and therefore receive the name "passive requestors". The latter involves browser-based communication with several HTTP redirects between the different parties (client, STS and relying party).

    CardSpace embedded in a web browser page, however, is not a passive client. Once the user decides to authenticate to a website with an information card, the CardSpace identity selector negotiates and gets the issued token from the identity provider using the active profile. Finally, the identity provider passes the token to the browser using some inter-process communication, and the browser can later submit the token to the server using a standard HTTP mechanism like a web post.

    As you can see, CardSpace in a browser is actually a hybrid between active and passive. Vittorio has also discussed this scenario in the past; he called it "Passive-Aggressive".

     

    Read more...

  • Some thoughts on Portable STS (P-STS) and Geneva Cardspace

    The other day a friend of mine asked me about portable STS implementations, whether I knew of any available solution he could use at his company. That reminded me of a conversation I had about two years ago with another developer working on a custom .NET CLR framework version for portable devices (like smartcards). As part of that project, his team was also working on a TCP/IP communication stack for the device, and an HTTP handler for accepting raw WS-Trust messages. One goal of that project was to have a P-STS that could interoperate with WCF. The idea seemed very promising at the time.

    So, what is a P-STS after all? In a few words, it is a service running on a portable device that exposes WS-Trust endpoints and can issue security tokens of any kind (e.g., SAML tokens).

    A search on Google today turns up several P-STS products or solutions; some of them also claim to be interoperable with WCF and Microsoft CardSpace V1.

    In terms of identity management, a P-STS makes a great difference over existing authentication mechanisms like username/password, X509 certificates or any other kind of two-factor authentication device. Most of these authentication mechanisms are widely accepted and used today in applications within corporate environments, or in applications that require offline support. However, they sometimes lack true identity support: they do not represent the user identity at all in the context of those applications, they are just a way of identifying returning users, or they are hard to extend with additional identity claims about the user.

    I cannot deny that X509 certificates have proven to be a very effective and secure way to authenticate users. In addition, X509 certificates can be extended with some custom attributes; the space is limited, but at least the possibility exists. However, X509 certificates represent hard tokens: the claims stored in a certificate cannot be changed once it has been issued. Therefore, they are a good solution as long as that information does not change frequently over time.

    Issued tokens (e.g., SAML tokens), on the other hand, are more dynamic and cheaper to create. They usually have a short expiration time and can be issued and used on the fly, but more importantly, they can carry custom information or claims about the subject they were issued for.

    Some good news is that the Geneva CardSpace team has also announced some support for roaming scenarios in CardSpace V2. There will be a way to store our identity cards on a device (or somewhere in the cloud), which will be great to combine with a P-STS: no need to export/import the cards anymore. This scenario was not possible in CardSpace V1. According to what Rich Randall mentioned in the PDC talk "BB44 Identity: Windows CardSpace "Geneva" Under the Hood", the future CardSpace interface could look as follows,

     

     

    As you can see, it will not be long until we have complete and portable identity solutions for roaming scenarios.

    Read more...

  • Claims negotiation between a consumer, STS and Relying Party in WCF

    According to the WS-Trust specification, a service consumer has a way to negotiate or ask the STS for specific claims. Those claims (or some of them) will generally be used by the service implementation running in the relying party.

    They are negotiated through a "Claims" element in the RST message:

    <wst:RequestSecurityToken xmlns:wst="...">
        <wst:TokenType>...</wst:TokenType>
        <wst:RequestType>...</wst:RequestType>
        ...
        <wsp:AppliesTo>...</wsp:AppliesTo>
        <wst:Claims Dialect="...">...</wst:Claims>
        <wst:Entropy>
            <wst:BinarySecret>...</wst:BinarySecret>
        </wst:Entropy>
        <wst:Lifetime>
            <wsu:Created>...</wsu:Created>
            <wsu:Expires>...</wsu:Expires>
        </wst:Lifetime>
    </wst:RequestSecurityToken>

    The "wst:Claims" element is optional and is used for requesting a specific set of claims. Typically, this element contains required and/or optional claim information identified in a service's policy.
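    As a concrete illustration, an RST asking for an e-mail claim and an optional given-name claim could carry a Claims element like the following. This is a sketch using the common identity claims dialect; the exact dialect and claim URIs depend on what the service's policy advertises:

```xml
<wst:Claims Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity"
            xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity">
    <ic:ClaimType Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"/>
    <ic:ClaimType Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" Optional="true"/>
</wst:Claims>
```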

    Based on these facts, we can elaborate some possible scenarios for claims negotiation between these three parties.

    1. No negotiation at all

    The STS might just ignore these claim requirements in the RST message and always return a fixed claim set according to the consumer identity, or the service might not express which claims it expects at all. This scenario might be suitable for a local STS in small or medium-sized organizations, where the IT department has complete control over the client applications and services that interact with that STS. This kind of solution is easy to implement but quite rigid: a change in the claims required by the service will also require changes in the STS implementation. As you can see, this solution does not scale for a large number of applications or relying party services.

    Many of the STS examples you will find today are implemented like this.

    2. Negotiation based on the AppliesTo header.

    This solution presents a subtle difference from the one discussed before: the claims vary according to the relying party that will make use of them. The STS ignores the claim requirements in the RST message and returns a claim set based on the received AppliesTo header. An agreement must exist between the STS and the relying party, which includes, in addition to the key for encrypting the tokens, the set of expected claims. Again, easy to implement, difficult to scale.
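    With the Geneva object model shown later in this post, scenario 2 boils down to branching on the AppliesTo address inside GetOutputClaimsIdentity. The sketch below uses illustrative addresses and claim values, not values from any real sample:

```csharp
// Sketch of scenario 2: the STS ignores the Claims header and selects a fixed
// claim set per relying party, keyed on the AppliesTo address in the RST.
// The address and claim values below are illustrative only.
protected override IClaimsIdentity GetOutputClaimsIdentity(
    IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
{
    IClaimsIdentity outputIdentity = new ClaimsIdentity();

    if (scope.AppliesToAddress == "https://localhost/MyService") // sample relying party
    {
        outputIdentity.Claims.Add(new Claim(ClaimTypes.Name, principal.Identity.Name));
        outputIdentity.Claims.Add(new Claim(ClaimTypes.Email, "joe@example.com"));
    }
    // ...one branch (or a lookup table) per registered relying party.

    return outputIdentity;
}
```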

    3. Manual negotiation based on the "Claims" header.

    In this scenario, the consumer sends the expected claims in the "Claims" header, and the STS makes use of them when generating the resulting token. However, the negotiation of those claims between the consumer and the relying party is manual: a previous agreement must exist, and the service does not express those requirements through metadata. This means the claims are hard-coded in the client configuration during development. If the service requires additional claims, only the client configuration has to be changed; the STS does not have to be touched at all.

    If you are implementing a custom STS with the latest Microsoft Geneva bits, there is a "Claims" property on the RequestSecurityToken for getting access to these values.

    protected override IClaimsIdentity GetOutputClaimsIdentity(IClaimsPrincipal principal, RequestSecurityToken request, Scope scope)
    {
        IClaimsIdentity outputIdentity = new ClaimsIdentity();

        foreach (Claim claim in request.Claims)
        {
            //Do something...
            outputIdentity.Claims.Add(...);
        }

        return outputIdentity;
    }

    The client can specify those claims through configuration as well,

    <wsFederationHttpBinding>
      <binding name="ServiceBinding">
        <security mode="Message">
          <message issuedTokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1" negotiateServiceCredential="false">
            <claimTypeRequirements>
              <add claimType="http://schemas.microsoft.com/ws/2005/05/identity/claims/EmailAddress"/>
              <add claimType="http://schemas.microsoft.com/ws/2005/05/identity/claims/GivenName"/>
              <add claimType="http://schemas.microsoft.com/ws/2005/05/identity/claims/Surname" isOptional="true"/>
            </claimTypeRequirements>
            <issuer></issuer>
          </message>
        </security>
      </binding>
    </wsFederationHttpBinding>

    Once they are added to the binding configuration, WCF will automatically include them as part of the RST message sent to the STS.
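    The same claim requirements can also be set up in code instead of configuration. A rough programmatic equivalent of the binding above would be the following sketch (not taken from the original sample):

```csharp
using System.ServiceModel;
using System.ServiceModel.Security.Tokens;

// Programmatic equivalent of the wsFederationHttpBinding configuration above.
var binding = new WSFederationHttpBinding(WSFederationHttpSecurityMode.Message);
binding.Security.Message.IssuedTokenType =
    "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1";
binding.Security.Message.NegotiateServiceCredential = false;
binding.Security.Message.ClaimTypeRequirements.Add(
    new ClaimTypeRequirement("http://schemas.microsoft.com/ws/2005/05/identity/claims/EmailAddress"));
binding.Security.Message.ClaimTypeRequirements.Add(
    new ClaimTypeRequirement("http://schemas.microsoft.com/ws/2005/05/identity/claims/GivenName"));
binding.Security.Message.ClaimTypeRequirements.Add(
    new ClaimTypeRequirement("http://schemas.microsoft.com/ws/2005/05/identity/claims/Surname", true)); // optional
```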

    4. Automatic negotiation based on the "Claims" header.

    This is by far the best solution we can find. The three parties automatically negotiate the claims at runtime:

    I. The service exposes the claim requirements through metadata (WS-Policy).

    II. The client acquires the service's policy and requirements using some mechanism, which could be WS-MetadataExchange. Later, the client includes those claim requirements in the RST message that will be sent to the STS.

    III. The STS extracts those requirements from the RST message and then makes use of them when generating the resulting token.

    The CardSpace identity selector on the consumer side works like this: it first detects which claims the relying party needs, and then displays to the user all the cards (from different identity providers) that satisfy those requirements.

    Exposing the claim requirements in the relying party through WCF is equivalent to doing it on the client side (same binding configuration):

    <wsFederationHttpBinding>
      <binding name="ServiceBinding">
        <security mode="Message">
          <message issuedTokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1">
            <claimTypeRequirements>
              <add claimType="http://schemas.microsoft.com/ws/2005/05/identity/claims/EmailAddress"/>
              <add claimType="http://schemas.microsoft.com/ws/2005/05/identity/claims/GivenName"/>
              <add claimType="http://schemas.microsoft.com/ws/2005/05/identity/claims/Surname" isOptional="true"/>
            </claimTypeRequirements>
            <issuer></issuer>
          </message>
        </security>
      </binding>
    </wsFederationHttpBinding>

    Read more...

  • Implementing an identity provider and relying party with Zermatt and ASP.NET MVC

    Zermatt is the framework recently released by Microsoft to develop claim-aware applications. You can find some announcements here and here.

    This framework supports the WS-Federation active and passive profiles. The latter was initially designed with a single purpose in mind: allowing the integration of "dumb clients" into the identity metasystem. By "dumb clients", I mean clients like web browsers that do not have the ability to handle cryptographic material.

    All the magic is done through some consecutive HTTP redirects, and today we will see how to develop an identity provider and a relying party web application (with ASP.NET MVC) that are involved in the whole process.

    The identity provider is based on the quickstart that Visual Studio automatically generates when you create a new MVC web application. This quickstart uses FormsAuthentication to authenticate the application users and also provides an Account controller (that internally uses ASP.NET Membership) to manage all those users. In order to integrate Zermatt into this application, I added a new controller, STSController, that knows how to process messages for getting issued tokens with the user's claims.

    For the relying party, Zermatt provides some web controls to authenticate the user against the identity provider using the passive profile. Unfortunately, for the simple fact that ASP.NET MVC does not support controls with view state, we cannot use them here. As a workaround, I created a couple of extension methods that generate the URLs for sending the corresponding messages to the identity provider (login and logout).

    public static class LoginUrlExtensions
    {
        public static string LoginUrl(this UrlHelper helper, string actionName, string controllerName, string stsUrl)
        {
            string host = helper.ViewContext.HttpContext.Request.Url.Authority;
            string schema = helper.ViewContext.HttpContext.Request.Url.Scheme;

            string realm = string.Format("{0}://{1}", schema, host);
            string reply = helper.Action(actionName, controllerName).Substring(1);

            return string.Format("{0}?wa=wsignin1.0&wtrealm={1}&wreply={2}&wctx=rm=0&id=FederatedPassiveSignIn1&wct={3}",
                stsUrl, realm, reply, XmlConvert.ToString(DateTime.Now));
        }

        public static string LogoutUrl(this UrlHelper helper, string actionName, string controllerName, string stsUrl)
        {
            string host = helper.ViewContext.HttpContext.Request.Url.Authority;
            string schema = helper.ViewContext.HttpContext.Request.Url.Scheme;

            string realm = string.Format("{0}://{1}", schema, host);
            string reply = string.Format("{0}{1}", realm, helper.Action(actionName, controllerName));

            return string.Format("{0}?wa=wsignout1.0&wreply={1}", stsUrl, reply);
        }
    }

    The "actionName" and "controllerName" are just used to generate the reply address to which the user must be redirected after being authenticated by the identity provider. These extension methods can be used in the view as follows:

    <a href="<%=Url.LoginUrl("Login", "Home", "http://localhost/STS")%>">Login</a>
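    To see what the helper actually emits, here is the format string from LoginUrl evaluated with sample values (the addresses and timestamp are made up for the example):

```csharp
using System;

class LoginUrlShape
{
    static void Main()
    {
        // Sample inputs standing in for the values LoginUrl derives at runtime.
        string stsUrl = "http://localhost/STS";   // identity provider address
        string realm  = "http://localhost";       // scheme + authority of the relying party
        string reply  = "Home/Login";             // reply action, leading '/' stripped
        string wct    = "2009-01-01T00:00:00Z";   // fixed timestamp for the example

        string url = string.Format(
            "{0}?wa=wsignin1.0&wtrealm={1}&wreply={2}&wctx=rm=0&id=FederatedPassiveSignIn1&wct={3}",
            stsUrl, realm, reply, wct);

        Console.WriteLine(url);
        // http://localhost/STS?wa=wsignin1.0&wtrealm=http://localhost&wreply=Home/Login&wctx=rm=0&id=FederatedPassiveSignIn1&wct=2009-01-01T00:00:00Z
    }
}
```

    Note that in a real deployment these query string values should be URL-encoded; the helper above relies on the simple values producing valid URLs.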

    We also need a method in the relying party to parse the response (RSTR) message and generate a cookie with the user credentials and claims.

     

    public interface IFederatedAuthentication
    {
        IClaimsPrincipal Authenticate();
    }

    public class FederatedAuthentication : IFederatedAuthentication
    {
        private string logoutUrl;

        public FederatedAuthentication(string logoutUrl)
        {
            this.logoutUrl = logoutUrl;
        }

        public IClaimsPrincipal Authenticate()
        {
            string securityTokenXml = FederatedAuthenticationModule.Current.GetXmlTokenFromPassiveSignInResponse(System.Web.HttpContext.Current.Request, null);

            FederatedAuthenticationModule current = FederatedAuthenticationModule.Current;

            SecurityToken token = null;
            IClaimsPrincipal authContext = current.AuthenticateUser(securityTokenXml, out token);

            TicketGenerationContext context = new TicketGenerationContext(authContext, false, logoutUrl, typeof(SignInControl).Name);
            current.IssueTicket(context);

            return authContext;
        }
    }

     

    As you can see in the code above, the Zermatt module (FederatedAuthenticationModule) that parses the response message is tied to the Request object, to which we do not have direct access from an MVC controller (well, accessing it directly is bad practice if we want to test our code). That is the reason I decided to put all that code in a plugin that can be injected later into the controller.
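    As a sketch of how that plugin might be injected, the controller only depends on the IFederatedAuthentication interface (the controller and action names here are hypothetical, not part of the original sample):

    // Hypothetical controller: the Zermatt plumbing stays behind the
    // IFederatedAuthentication abstraction, so the action remains testable.
    public class HomeController : Controller
    {
        private readonly IFederatedAuthentication federatedAuthentication;

        public HomeController(IFederatedAuthentication federatedAuthentication)
        {
            this.federatedAuthentication = federatedAuthentication;
        }

        public ActionResult Login()
        {
            // Parses the sign-in response, issues the ticket and returns the principal.
            IClaimsPrincipal principal = this.federatedAuthentication.Authenticate();
            return RedirectToAction("Index");
        }
    }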

    The complete solution is available to download from this location. Any feedback would be great! Enjoy!

    Read more...

  • Federation Over TCP With WCF

    One of the discussions we had during the last summit with the rest of the "Connected Systems" MVPs was the possibility of supporting a federation scenario over TCP in WCF. For many of us that scenario was possible in theory, but unfortunately no documentation or samples existed to support it. In fact, WCF ships with only one pre-built binding for federation scenarios, WSFederationHttpBinding, which is completely tied to HTTP.

    For that reason, I decided to give it a shot and try to build some custom bindings that use TCP instead of the commonly used HTTP transport. One curious thing about TCP is that it requires security sessions (SecureConversation with requireSecurityContextCancellation set to "true") in order to work correctly. If you do not configure the binding with those security settings, WCF throws an error message saying that the order of the binding elements is not correct. At the beginning I did not configure it that way, and it took me some time to figure out what the problem was; a better error description would have saved me that time.
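    To avoid that error, the binding elements must be stacked security, then encoding, then transport. The same STS binding can also be assembled in code; a sketch under those assumptions (the factory method name is mine, the WCF calls are standard SecurityBindingElement factories):

    // Builds a custom TCP binding equivalent to the "STSBinding" configuration:
    // a secure conversation (with session cancellation) bootstrapped with
    // mutual certificate authentication, over binary encoding and TCP.
    public static CustomBinding CreateStsBinding()
    {
        SecurityBindingElement bootstrap =
            SecurityBindingElement.CreateMutualCertificateBindingElement();

        // requireCancellation = true is what TCP needs to work correctly.
        SecurityBindingElement security =
            SecurityBindingElement.CreateSecureConversationBindingElement(
                bootstrap, true);

        return new CustomBinding(
            security,
            new BinaryMessageEncodingBindingElement(),
            new TcpTransportBindingElement());
    }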

    The resulting bindings for client, STS and the sample service were the following (In this sample, the client is authenticating against the service with a client certificate).

    1. Client

    <bindings>

      <customBinding>

        <binding name="STSBinding">

          <security authenticationMode="SecureConversation" requireSecurityContextCancellation="true">

            <secureConversationBootstrap authenticationMode="MutualCertificate"/>

          </security>

          <binaryMessageEncoding/>

          <tcpTransport />

        </binding>

        <binding name="ServiceBinding">

           <security authenticationMode="SecureConversation">

             <secureConversationBootstrap authenticationMode="IssuedToken">

               <issuedTokenParameters tokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1">

                 <issuer address="net.tcp://localhost:8000/sts" bindingConfiguration="STSBinding" binding="customBinding">

                   <identity>

                     <dns value="STSAuthority"/> <!--Sample Cert for the STS -->

                   </identity>

                 </issuer>

               </issuedTokenParameters>

            </secureConversationBootstrap>

          </security>

          <binaryMessageEncoding/>

          <tcpTransport />

        </binding>

      </customBinding>

    </bindings>
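    To consume these bindings, the client endpoints reference them by name through bindingConfiguration; a hedged sketch (the address and contract name are placeholders, not from the original sample):

    <client>
      <!-- The service endpoint uses the federated "ServiceBinding" defined above. -->
      <endpoint address="net.tcp://localhost:9000/service"
                binding="customBinding"
                bindingConfiguration="ServiceBinding"
                contract="IEchoService" />
    </client>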

    2. STS

    <bindings>

      <customBinding>

        <binding name="MutualCertificateBinding">

          <security authenticationMode="SecureConversation" requireSecurityContextCancellation="true">

            <secureConversationBootstrap authenticationMode="MutualCertificate"/>

          </security>

          <binaryMessageEncoding/>

          <tcpTransport />

        </binding>

      </customBinding>

    </bindings>

    3. Sample Service

    <bindings>

      <customBinding>

        <binding name="SampleService">

          <security authenticationMode="SecureConversation" requireSecurityContextCancellation="true">

            <secureConversationBootstrap authenticationMode="IssuedToken">

                <issuedTokenParameters tokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1">

            </issuedTokenParameters>

           </secureConversationBootstrap>

         </security>

         <binaryMessageEncoding/>

        <tcpTransport />

       </binding>

      </customBinding>

    </bindings>

    The STS and the service are not required to both use the TCP transport for communicating with the client, which is a cool thing because now we can combine different transports in a whole federation scenario. For instance, we can have HTTP communication between the client and the STS, and TCP communication between the client and the final service.
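    For example, the mixed-transport variant only requires pointing the issuer element of the client's service binding at an HTTP binding, while the service binding itself stays on TCP; a sketch with placeholder addresses and binding names:

    <!-- Client side: HTTP to the STS, TCP to the service (addresses are placeholders). -->
    <secureConversationBootstrap authenticationMode="IssuedToken">
      <issuedTokenParameters tokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1">
        <issuer address="http://localhost:8000/sts"
                binding="ws2007HttpBinding"
                bindingConfiguration="STSHttpBinding" />
      </issuedTokenParameters>
    </secureConversationBootstrap>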

    The complete sample is available to download from here.

     

    Read more...