Artur Trosin's blog

No Pain, No Gain!

  • Do your features fit together?

    Today I was listening to a great Q&A session with Anders Hejlsberg. This post is not about the session itself; the thing that touched my 'heart' is a great point he made about language design. He was asked about a feature missing from C#, and his answer was that a language is not (only) about features but more about how all the features play together.

    I think that is a very important point, and it applies not only to language design but to design and architecture in general. While designing or architecting a project, as we keep adding 'features' to the design (or architecture), we often forget to think about how they play together. And it is very hard to analyze their impact from that "play together" perspective.

    Someone will probably say that a design (or architecture) is better when we can add features independently, without affecting other packages, components, etc., and I agree with that. But ultimately software is about interaction at different levels (classes, packages, components, ...), and what matters is how those interactions play together when a new "feature" is added, no matter at which level of abstraction it is introduced.

    Yes, we can write unit tests when introducing a new feature, but that solves the problem only partially: the tests tell us whether the code compiles and whether the covered business cases are broken, but they say nothing about how nicely all the features play together.

    Another technique is evolutionary design (or architecture), where we add small features and constantly refactor toward a better design every time we see a better way of doing things. Last but not least, DDD techniques come to mind: they make the entire design process more transparent in the case of complex business solutions.

    I think you will agree with me that there is no general technique that can tell you about your design's "feature play-ability", but there are techniques that can make it better (no matter whether the features are functional or non-functional). What is important is to consider, every time we introduce a new feature, how it will fit together with the existing ones.

    (PS: Just a thought: maybe the Interface Segregation Principle, applied at various levels of abstraction, can be used as a technique to design for better "feature play-ability"?)
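    To make that PS a bit more concrete, here is a hedged sketch (all names below are invented for the illustration, not taken from any real project) of the Interface Segregation Principle applied so that a new feature depends only on the operations it actually plays with:

```csharp
// Hypothetical sketch: instead of one wide IOrderService that every
// feature depends on, each feature sees only the slice it needs,
// so adding a feature does not ripple through unrelated ones.
using System;

public interface IOrderReader            // narrow, read-only slice
{
    decimal GetTotal(int orderId);
}

public interface IOrderWriter            // narrow, mutating slice
{
    void AddItem(int orderId, string sku, int quantity);
}

// One class may still implement both slices internally.
public class OrderService : IOrderReader, IOrderWriter
{
    private decimal total;

    public decimal GetTotal(int orderId) { return total; }

    public void AddItem(int orderId, string sku, int quantity)
    {
        total += 10m * quantity; // placeholder pricing for the sketch
    }
}

// A new reporting feature depends only on IOrderReader,
// so changes to IOrderWriter cannot affect it.
public class ReportingFeature
{
    private readonly IOrderReader reader;
    public ReportingFeature(IOrderReader reader) { this.reader = reader; }

    public string Report(int orderId)
    {
        return "Total: " + reader.GetTotal(orderId);
    }
}
```

    The point of the sketch is that the narrower the contract a feature sees, the smaller the surface on which features can interfere with each other.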

    Thank you,

    Artur Trosin


  • C# Domain-Driven Design Sample Released

    In this post I want to announce that the NDDD Sample application(s) have been released and share the work with you. You can access it here: http://code.google.com/p/ndddsample. From a functionality perspective, NDDDSample matches DDDSample 1.1.0, which is based on Java and is a joint effort by Eric Evans' company Domain Language and the Swedish software consulting company Citerus. Because NDDDSample is based on .NET technologies, the two implementations cannot be matched directly; however, the concepts, practices, values, and patterns, especially DDD, are cross-language and cross-platform :).

    Implementing the .NET version of the application was an interesting journey, because now, as a .NET developer, I better understand the positive and negative differences between the two platforms. Even though those differences exist, they can be overcome; in many cases it was not hard to match a Java lib/framework with a .NET one during the implementation.

    Here is the technology stack:

    • 1. .NET 3.5 - framework
    • 2. VS.NET 2008 - IDE
    • 3. ASP.NET MVC 2.0 - for the administration and tracking UI
    • 4. WCF - communication mechanism
    • 5. NHibernate - ORM
    • 6. Rhino Commons - NHibernate session management, base classes for in-memory unit tests
    • 7. SQLite - database
    • 8. Windsor - inversion of control container
    • 9. Windsor WCF facility - for better integration with WCF
    • 10. MvcContrib - in particular its Castle WindsorControllerFactory, used to enable IoC for controllers
    • 11. WPF - for the incident-logging application
    • 12. Moq - mocking lib used for unit tests
    • 13. NUnit - unit testing framework
    • 14. log4net - logging framework
    • 15. Azure SDK - for the cloud version

    These are not the latest technologies, tools, and libs at the moment, but if anyone thinks it would be useful to migrate the sample to the current technologies and versions, please comment.

    The cloud version of the application is based on the Azure emulated environment provided by the SDK, so it has not been tested in a 'real' Azure scenario (we simply do not have access to one).

    Thanks to the participants: Eugen Gorgan, who was directly involved in development; Ruslan Rusu and Victor Lungu, who spent their free time discussing .NET-specific decisions; and Eugen Navitaniuc, who helped with Java-related questions. Also, a big thank you to Cornel Cretu, who designed a nice logo and helped with some browser incompatibility issues.

    Any review and feedback are welcome!

    Thank you,

    Artur Trosin


  • Agile: Practical exercise

    The start is the point where our brain accepts something, rejects it, or at least puts it on the skeptic's shelf. So a good start is always important. Agile methodology usually falls into the "reject" set with people who have waterfall thinking or who never *really* tried it. Why? I'm going to discuss that in this article, and also how we can fix it with a practical exercise that introduces the practical side of Agile to newcomers.

    I have seen many times that people agree with agile statements, but some silent feeling remains that it will not work in practice. As a result, the agile theory remains a 'famous' theory that is forgotten in time and space. Theory and practice are perceived in different ways: theory can be understood, but it does not let you feel the subject, whether it is management Agile such as Scrum or development Agile such as XP.

    As Randal L. Schwartz put it: "The difference between theory and practice in theory is much less than the difference between theory and practice in practice."

    In the case of agile, feeling the emotional and practical part of the methodology is critical for its adoption. In waterfall methodologies the psychological part is less important because they are usually driven by bureaucracy, where any person can be replaced (even in waterfall that is not really true; it is just an attempt to reduce the dependency), and so on...

    There are many great resources on agile theory, even theory about practice; I don't want to list them here. What I want to do here is publicize an already tested game, a simple one that introduces people to Agile and Scrum. The game lets newcomers feel the practical side of agile and understand it from the inside, not from the words of a person who has done it; that is an important difference. The game is called "Airplane Factory", and it consists of creating a production line of paper airplanes in an agile, iterative manner.

    Here are more details http://www.agileway.com.br/2009/11/16/the-airplane-factory-game/.

    I have just tried the game in our company with a group of ~12 people divided into three teams. The result was only positive: fun, great teamwork, communication, an iterative PDCA (plan-do-check-act) cycle, visible improvement over iterations, transparency... and it is simple from a process perspective. All of this ended with a great retrospective on the agile benefits and values applied during the game.

    Here is how it looks like:

    1. Iteration's Planning Meeting

    Agile

    2. Iteration's implementation phase

    Airplane implementation

    3. Acceptance Testing at the end of the iteration

     Acceptance Testing

    From my perspective, one advantage of the game over a real agile project is that the game shows the entire agile process on a smaller scale, making the agile benefits and values more obvious.

    Thanks to the game creators for sharing it with the community, and to the team that agreed to participate.

    Thank you,

    Artur Trosin


  • Domain-Driven Design: Two basic premises

    In this post I want to discuss the two basic Domain-Driven Design premises that underlie all other DDD principles, patterns, and practices. The DDD principles, patterns, and practices described by Eric Evans are not something he invented; they are things that were discovered and refined over a long path of experience by many people. So all the DDD goodies rest on just two simple premises, which I will cover in this post.

    Complexity Nowadays

    Before diving into the premises, let's discuss how they evolved. Nowadays more and more business domains are being automated (a domain is a particular field of knowledge, e.g. banking, accounting, insurance, e-commerce, etc.). Complex human work is automated as much as possible in order to reduce costs and gain speed, and these are primary factors for a successful business, a top market position, and the business competition. If we compare today's software situation with the 90s, when a lot of software solutions were common off-the-shelf applications, custom solutions are now predominant, because successful competition requires being one step ahead. The tendency is that business complexity grows continuously, dragging the software solution into "hell". There are a lot of other things that can make software development complex, but as we can see, the core of the complexity is the business domain itself.

    In order to automate enterprise systems, complexity is the key factor; it eventually cannot be omitted, ONLY CONTROLLED. There are two types of complexity:

    1. essential - complexity that is reasonable and unavoidable, that can't (or shouldn't) be omitted

    2. accidental - complexity that is non-essential to the problem being solved

    So, by following the DDD premises we will be better able to highlight and concentrate on the essential domain complexity.

    Domain Knowledge

    Two main DDD premises

    Below are the two premises, which are also the two DDD principles from which everything starts. All other principles, patterns, and practices are built on these two, or are compatible with and complement them.

    1. Primary focus should be on the domain and domain logic

    2. Complex domain designs should be based on a model

    The first principle says that for most projects the primary focus should be on the business domain and domain logic. That is true for most projects, but for some it isn't. For example:

    1. Devices that run small amounts of logic very fast, such as routers, modems, etc.

    2. Frameworks or technical libraries, where the main focus is not a business domain

    The second principle says that design should be based on models. "Model" is a very generic term: a model is a pattern, plan, representation (especially in miniature), or description designed to show the main workings of an object, system, or concept.

    Why models? Let’s see…

    Humans have used models since the beginning of history, for various purposes:

    1. Explain (very distinct from predict)
    2. Guide data collection
    3. Illuminate core dynamics
    4. Suggest dynamical analogies
    5. Discover new questions
    6. Promote a scientific habit of mind
    7. Bound (bracket) outcomes to plausible ranges
    8. Illuminate core uncertainties.
    9. Offer crisis options in near-real time
    10. Demonstrate tradeoffs / suggest efficiencies
    11. Challenge the robustness of prevailing theory through perturbations
    12. Expose prevailing wisdom as incompatible with available data
    13. Train practitioners
    14. Discipline the policy dialogue
    15. Educate the general public
    16. Reveal the apparently simple (complex) to be complex (simple)
    17. Etc…

    As an example, take the atom model. "Atom" in translation means uncuttable, something that cannot be divided. On the right is the modern model of the atom, which is an evolution of the first model on the left.

    Atom Models Evolution

    So the initial model on the left is the first "atom" model, where the "atom" term comes from. It was the starting point for the next models and theories. Scientists learned and discovered new models with new atomic structures through various experiments. The model structures evolved to match the new experimental results, so the model evolved through time and discussion. Even now, with the most powerful microscopes, the structure of the atom cannot be observed directly; that is why there have been many models based on various theories, and if a model and its theory do not pass an experiment, they are thrown away.

    Models rarely represent something real or "right". They can be, and are, more valuable when they are not realistic. You could say that the best models are wrong. Yes, but they are fruitfully wrong! Models exist to highlight abstractions.

    "Art is a lie that helps us see the truth." – Picasso
    "All models are wrong, but some are useful." - George Box

    Even music has a Model!

    Models can abstract very abstract things in order to materialize them for better perception. At first sight, music is a very abstract "concept", and it is hard to imagine a model for it.

    Music Model

    This is a model of music. The model allows us to have a written form of music, to show some non-obvious aspects that cannot be heard, and to communicate. In the history of music there have been cases of symphonies composed by completely deaf musicians. Again, the form above is the modern form of writing music; in its early days the form was very different.

    Best Model?

    Can you answer for yourself which model is the best model?

    Earth Models in Context

    Of course, it depends; it depends on the context. If we need the political situation, the first model is better; if we need to explore the internal structure of the Earth, the second model is better. Even though both models represent the Earth, each model shows its own abstractions and has its own context. You can also notice that each model represents something real while the model itself is far from real, and even so it is very valuable for its specific purpose.

    As a conclusion for the theory, here is a definition of the model from DDD perspective:

    Domain Model - is a rigorously organized and selective abstraction of the (Business) Domain knowledge.
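    As a hedged illustration of that definition (the class and rule below are invented for the example, not taken from the DDD sample): a domain model keeps only the domain knowledge that matters, such as the rule that a tracking identifier is never empty, and ignores incidental details like storage or UI:

```csharp
// Hypothetical sketch of "selective abstraction": the model keeps
// only what the domain cares about (a non-empty tracking identifier)
// and leaves out everything incidental to the domain.
using System;

public class TrackingId
{
    private readonly string id;

    public TrackingId(string id)
    {
        // The domain rule lives in the model itself.
        if (string.IsNullOrEmpty(id))
            throw new ArgumentException("A tracking id is required", "id");
        this.id = id;
    }

    public string Id { get { return id; } }

    // Value objects compare by value, not by reference.
    public override bool Equals(object obj)
    {
        var other = obj as TrackingId;
        return other != null && other.id == id;
    }

    public override int GetHashCode() { return id.GetHashCode(); }
}
```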

    Conclusion

    The two DDD premises are a very simple summary of a set of practices used in various domains to handle complexity. Focusing on domain logic based on models is very important for controlling complexity, and as you can see, not only in software development.

    References & Resources

    “Why Models” - http://jasss.soc.surrey.ac.uk/11/4/12.html
    “Domain-Driven Design: Tackling Complexity in the Heart of Software” - http://www.amazon.com/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215
    For an introduction to DDD and more DDD resources, take a look at my post: http://weblogs.asp.net/arturtrosin/archive/2009/02/09/domain-driven-design-learning.aspx

    Thank you,

    Artur Trosin


  • Software as Craft - Agile Conference in Chisinau

    In this post I want to highlight a very important event for the IT community that happened in Chisinau, the capital of Moldova. I also want to share my view on Chisinau's IT field from a knowledge-sharing perspective.

    Moldova is a small and relatively poor country in Europe, but it has very well developed IT knowledge. Chisinau has at least three universities that graduate new IT specialists each year (USM, UTM, ASEM). That is why the country is very attractive to foreign IT investors. As a result, most developers work for foreign investors as freelancers or in local IT companies, or migrate to more developed countries.

    So, back to the event itself. As you can see, Moldova, and especially Chisinau, has a lot of developers, BUT what is very specific about developers in Chisinau is that, as far as I know, it is not very common to gather in groups and create communities such as Agile, TDD, DDD, alt.net, etc., just to improve personal skills through knowledge sharing. The event is the conference Software as Craft 2009, 14-16 May. I think the main intention of the conference was knowledge sharing, and the second was to grow an agile culture in Moldova; both are very important, especially because both are far from mature in most of the local IT companies. There had been no similar events in Chisinau before, so many thanks to Tacit Knowledge, the sole sponsor, organizer, and initiator (I am not a Tacit Knowledge developer; I am not advertising my own company here). The conference was organized at a very high level, and it was absolutely free for anyone who wished to attend. I appreciate that, because not all companies can do it in a crisis period; some other IT companies in Chisinau just want to squeeze all the juice out of developers by outsourcing them without investing in their knowledge.

    As a side effect of the conference, I hope the developer community will become much closer and more open to knowledge sharing. For me it was an interesting opportunity to talk with bright folks and to pick up important agile knowledge and practices from people such as Tom Looy, a ThoughtWorks alumnus and currently an Agile consultant.

    The conference covered interesting talks about Agile, TDD, DDD, social media, parallel programming, etc. I think every person found something of interest in each talk.

    To conclude the post: trying to learn something new sitting home alone is by now old style; the only way to grow skills effectively is to learn from each other in communities. I don't mean that we should not work individually, but individual work is definitely not enough. Sad but true, some skeptical ideas float among developers, like "why should I share the knowledge I accumulated with such difficulty"; that is absolutely WRONG. Believe that all the knowledge you give to the community will resonate back to you, and in the end you will only gain from it. Just look at Udi Dahan, Greg Young, Ayende Rahien, Martin Fowler, etc.; they are very smart guys. Try to understand why they share their knowledge with the community using "tools" such as open source projects, blogs, groups, ...

    Thank you,

    Artur Trosin


  • Apache Hash Code and Equals Builders

    In this post I want to present two useful utility classes that have long been used in the Java world and were developed within the Apache Software Foundation: the HashCodeBuilder and EqualsBuilder classes, which I ported to C#. Implementing a good hash code and equals method for a class is not an easy task, but these classes assist in implementing object.GetHashCode and object.Equals.

    For equality comparison of objects, all relevant fields of the object should be used; derived fields can be excluded. When building a hash code for an object, it is recommended to use the same fields that were used for equality.

    I like to learn by example, so let's start with one.

        public class RgbColor
        {
            private readonly ushort r;
            private readonly ushort g;
            private readonly ushort b;

            public RgbColor(ushort r, ushort g, ushort b)
            {
                this.r = r;
                this.g = g;
                this.b = b;
            }

            public ushort R
            {
                get { return r; }
            }

            public ushort B
            {
                get { return b; }
            }

            public ushort G
            {
                get { return g; }
            }
        }

    That is a simple immutable POCO class that represents an RGB color, and a typical implementation of Equals and GetHashCode looks like this:

        public bool Equals(RgbColor other)
        {
            return other.r == r && other.g == g && other.b == b;
        }

        public override int GetHashCode()
        {
            int result = r.GetHashCode();
            result = (result * 397) ^ g.GetHashCode();
            result = (result * 397) ^ b.GetHashCode();
            return result;
        }

    [NOTE: irrelevant code blocks are omitted]

    TIP: these methods were generated using ReSharper.

    As we can see, the same logic, especially the same mathematical and logical operations, is repeated every time; the DRY principle will argue with us, and in the end the embarrassment will not let us sleep. What to do? Get some drugs?

    Of course NOT! Now we are lucky: we will apply the HashCodeBuilder and EqualsBuilder classes.

        public bool Equals(RgbColor other)
        {
            return new EqualsBuilder()
                .Append(R, other.R)
                .Append(G, other.G)
                .Append(B, other.B)
                .IsEquals();
        }

        public override int GetHashCode()
        {
            return new HashCodeBuilder()
                .Append(R)
                .Append(G)
                .Append(B)
                .ToHashCode();
        }

    With three primitive properties everything is simple and nice, but how do we deal with collections, arrays, jagged arrays, etc.?

    The following test demonstrates the generality of the Append method:

        [Test]
        public void TestJaggedArray()
        {
            var array1 = new long[2][];
            var array2 = new long[2][];

            for (int i = 0; i < array1.Length; ++i)
            {
                array1[i] = new long[2];
                array2[i] = new long[2];
                for (int j = 0; j < array1[i].Length; ++j)
                {
                    array1[i][j] = (i + 1) * (j + 1);
                    array2[i][j] = (i + 1) * (j + 1);
                }
            }

            Assert.IsTrue(new EqualsBuilder().Append(array1, array1).IsEquals());
            Assert.IsTrue(new EqualsBuilder().Append(array1, array2).IsEquals());
            array1[1][1] = 0;
            Assert.IsFalse(new EqualsBuilder().Append(array1, array2).IsEquals());
        }

    You can discover more usages by looking at the unit tests, which were ported too, or at the Java classes' documentation, since the usage is almost identical:

    http://commons.apache.org/lang/api/org/apache/commons/lang/builder/EqualsBuilder.html

     http://commons.apache.org/lang/api/org/apache/commons/lang/builder/HashCodeBuilder.html

    http://www.java2s.com/Tutorial/Java/0500__Apache-Common/EqualsBuilder.htm

    http://www.java2s.com/Tutorial/Java/0500__Apache-Common/HashCodeBuilder.htm

    I am sure there could be other implementations, for example based on attributes on the class properties, with a common implementation of hash code and equals in a base class using reflection; I am talking about something similar to the Sharp Architecture approach. However, even in that case, the builder classes could be used for the flexibility of handling various types.
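    A hedged sketch of that reflection idea (this is my own illustration, not the Sharp Architecture code): a common base class can compare all instance fields via reflection, so subclasses inherit Equals and GetHashCode for free:

```csharp
// Hypothetical sketch: reflection-based equality in a shared base class.
// Slower than hand-written Equals, but it removes the repetition entirely.
// (Note: GetFields on the derived type does not return private fields
// declared in base classes, a known limitation of this naive approach.)
using System;
using System.Reflection;

public abstract class FieldwiseEquatable
{
    private const BindingFlags Flags =
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(this, obj)) return true;
        if (obj == null || obj.GetType() != GetType()) return false;

        foreach (FieldInfo field in GetType().GetFields(Flags))
        {
            if (!Equals(field.GetValue(this), field.GetValue(obj)))
                return false;
        }
        return true;
    }

    public override int GetHashCode()
    {
        int result = 17;
        foreach (FieldInfo field in GetType().GetFields(Flags))
        {
            object value = field.GetValue(this);
            result = (result * 397) ^ (value != null ? value.GetHashCode() : 0);
        }
        return result;
    }
}

// Subclasses get value equality without writing any comparison code.
public class Point : FieldwiseEquatable
{
    private readonly int x;
    private readonly int y;
    public Point(int x, int y) { this.x = x; this.y = y; }
}
```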

    The classes are ported as part of the Domain-Driven Design Sample implemented in C#, so you can find real usages of the classes in the trunk.

    The source for the classes is here:

    EqualsBuilder.cs

    HashCodeBuilder.cs

    I will blog more about the Domain-Driven Design (DDD) Sample application later; for an introduction to DDD in general, please refer here.

    Folks who want to know more about the Builder pattern and fluent interfaces can refer to my previous post.

    Thank you for your attention,

    Artur Trosin


  • Builder Pattern and Fluent Interface

    In this post I want to discuss the practical side of the Builder pattern and how its usage and implementation can be simplified by a Fluent Interface; it will show how these two patterns can live in harmony with each other.

    How many of us have needed enum types to be more complex, to have a description or other additional fields for each enum value? For the description field we can attach a description attribute to each enum value and use reflection to obtain it; in some cases that is a handy solution, but in others it is not. Each solution has its limitations, and in many cases enum types are not very helpful.
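    The attribute approach mentioned above can be sketched like this, using the standard System.ComponentModel.DescriptionAttribute; the enum and the helper class are my own invention for the example:

```csharp
// Sketch of attaching a description to each enum value and reading
// it back via reflection; handy until richer enum-like types are needed.
using System;
using System.ComponentModel;
using System.Reflection;

public enum OrderState
{
    [Description("Order has been placed")]  Placed,
    [Description("Order has been shipped")] Shipped
}

public static class EnumDescription
{
    public static string Of(Enum value)
    {
        // Each enum value is a public static field of the enum type.
        FieldInfo field = value.GetType().GetField(value.ToString());
        var attrs = (DescriptionAttribute[])field
            .GetCustomAttributes(typeof(DescriptionAttribute), false);
        return attrs.Length > 0 ? attrs[0].Description : value.ToString();
    }
}
```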

    As another solution, we can create a class with static readonly fields; this could be a typical implementation:

    public class EnumType
    {
        public static readonly EnumType ONE = new EnumType(1, "Descr1");
        public static readonly EnumType TWO = new EnumType(2, "Descr2");
        public static readonly EnumType THREE = new EnumType(3, "Descr3");

        private readonly int id;
        private readonly string description;

        private EnumType(int id, string description)
        {
            this.description = description;
            this.id = id;
        }

        public int Id
        {
            get { return id; }
        }

        public string Description
        {
            get { return description; }
        }
    }

    Here there are only Id and Description fields, but of course there can be more or fewer, depending on the requirements.

    Of course, by using a class instead of an enum we lose other facilities, one of them being the switch statement. To solve that issue I will try to apply the Builder pattern. In short, the Builder pattern is a design pattern focused on constructing a complex object step by step. Maybe we will not use it in its "GoF form", but ultimately the idea of patterns is that they are not ready-to-use solutions; they are adaptable and should be adapted to the context.

    What we will "build" is a switch structure that we can use for class types, similar to the EnumType class above, though of course our switch usage is not limited to that type.

    Here is a test that we will try to make pass:

    [Test]
    public void CanCreateSimpleSwitchBuilder()
    {
        EnumType state = null;
        var enumType = EnumType.THREE;
        var builder = new SimpleSwitchBuilder();
        builder.Switch(enumType);
        builder.Case(EnumType.ONE);
        builder.Body(() => { Console.WriteLine(EnumType.ONE); state = EnumType.ONE; });
        builder.Case(EnumType.TWO);
        builder.Case(EnumType.THREE);
        builder.Body(() => { Console.WriteLine("->" + EnumType.TWO + EnumType.THREE); state = EnumType.TWO; });
        builder.Default.DefBody(() => Console.WriteLine("Def"));
        builder.Do();

        Assert.AreEqual(state, EnumType.TWO);
    }

    I will not show the SimpleSwitchBuilder code that makes this test pass just yet, because this usage of the SimpleSwitchBuilder class is ugly. But the idea is simple: the Switch method sets the state that will be tested against each Case value; Case methods can cascade; a Body represents an action that will be executed when the Do method is invoked; and if no Case matches the Switch value, the Default action is executed, if one was specified.

    To increase readability, we will introduce fluency into the SimpleSwitchBuilder:

    [Test]
    public void CanCreateSimpleFluentSwitchBuilder()
    {
        EnumType state = null;
        EnumType enumType = EnumType.THREE;
        new SimpleSwitchBuilder()
            .Switch(enumType)
                .Case(EnumType.ONE)
                    .Body(() => { Console.WriteLine(EnumType.ONE); state = EnumType.ONE; })
                .Case(EnumType.TWO)
                .Case(EnumType.THREE)
                    .Body(() =>
                    {
                        Console.WriteLine("->" + EnumType.TWO + EnumType.THREE);
                        state = EnumType.TWO;
                    })
                .Default
                    .DefBody(() => Console.WriteLine("Def"))
            .Do();

        Assert.AreEqual(state, EnumType.TWO);
    }

    The readability is increased by making each method return the builder itself (this); that is how we introduce the fluency.

    Here is the code for the simple builder:

    public class SimpleSwitchBuilder
    {
        private Action defaultAction;
        private object testObject;
        private IList<object> caseList;
        private readonly IDictionary<object, Action> caseActions = new Dictionary<object, Action>();

        public SimpleSwitchBuilder() { }

        public SimpleSwitchBuilder Switch(object obj)
        {
            caseList = new List<object>();
            testObject = obj;
            return this;
        }

        public SimpleSwitchBuilder Case(object obj)
        {
            caseList.Add(obj);
            return this;
        }

        public SimpleSwitchBuilder Body(Action action)
        {
            foreach (var switchCase in caseList)
            {
                caseActions.Add(switchCase, action);
            }

            caseList = new List<object>();

            return this;
        }

        public SimpleSwitchBuilder Default
        {
            get { return this; }
        }

        public SimpleSwitchBuilder DefBody(Action action)
        {
            defaultAction = action;
            return this;
        }

        public void Do()
        {
            foreach (KeyValuePair<object, Action> caseAction in caseActions)
            {
                if (ReferenceEquals(caseAction.Key, testObject) || Equals(caseAction.Key, testObject))
                {
                    caseAction.Value();
                    return;
                }
            }

            if (defaultAction != null)
                defaultAction();
        }
    }

    OK, fluency is nice, but what if the switch class is not used correctly and the method invocation order is wrong?

    [Test]
    [ExpectedException(typeof(NullReferenceException))]
    public void CanCreateSimpleSwitchBuilderInWrongWay()
    {
        new SimpleSwitchBuilder()
            .Default
                .DefBody(() => Console.WriteLine("Def"))
            .Case(EnumType.ONE)
                .Body(() => Console.WriteLine(EnumType.ONE))
            .Do();
    }

    The exception will be raised, but it will not say anything about the actual problem. To solve it, we could introduce validation of the method call order, BUT that could become very complex; the validation would be more complex than the implementation itself, and it would hide the real switch logic.

    To solve the problem we will instead introduce interfaces, each of which exposes only the methods valid for the next switch step:

    public interface IDo
    {
        void Do();
    }

    public interface IBody : IDo
    {
        ICase Case(object obj);
        IDefault Default { get; }
    }

    public interface ICase
    {
        ICase Case(object obj);
        IBody Body(Action action);
    }

    public interface IDefault
    {
        IDo Body(Action action);
    }

    public interface ISwitch
    {
        ICase Switch(object obj);
    }

    Implementing these interfaces gives us the desired fluency without any complex validation and without having to know the details of how the methods must be used; the API also becomes more intuitive.

Instead of many methods that can lead to a wrong usage order (see the previous examples):

    We will have nice and intuitive usage:
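As a sketch of how the interface-guided usage might look (this assumes `SimpleSwitchBuilder` implements `ISwitch` and the related interfaces above; the chain itself follows directly from the interface return types):

```csharp
// Each call returns an interface that exposes only the next legal steps,
// so the broken ordering from the previous test no longer compiles.
ISwitch sw = new SimpleSwitchBuilder();          // assumes ISwitch is implemented
sw.Switch(EnumType.ONE)                          // ISwitch  -> ICase
  .Case(EnumType.ONE)                            // ICase    -> ICase
      .Body(() => Console.WriteLine(EnumType.ONE)) // ICase  -> IBody
  .Default                                       // IBody    -> IDefault
      .Body(() => Console.WriteLine("Def"))      // IDefault -> IDo
  .Do();                                         // executes the switch
```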

    Here is the full source code.

    Conclusion

In various scenarios the Builder and fluent interface patterns can not only simplify an API and make its usage more intuitive, but also simplify its validation logic. There are other ways to implement the fluent interface pattern, for example using nested classes.
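As an illustrative sketch of that nested-class variant (all names here are hypothetical and are not part of the switch builder shown above), the step object is a nested class with an internal constructor, so the static entry point is the only way in:

```csharp
using System;

public class SwitchBuilder
{
    private SwitchBuilder() { }

    // The static entry point is the only way to obtain a step object.
    public static CaseStep Switch(object value)
    {
        return new CaseStep(value);
    }

    public class CaseStep
    {
        private readonly object value;
        private bool matched;

        // The constructor is internal, so callers cannot skip Switch().
        internal CaseStep(object value)
        {
            this.value = value;
        }

        public CaseStep Case(object candidate, Action action)
        {
            if (!matched && Equals(value, candidate))
            {
                matched = true;
                action();
            }
            return this;
        }

        // Default returns void, so nothing can follow it in the chain.
        public void Default(Action action)
        {
            if (!matched)
                action();
        }
    }
}
```

Usage would read `SwitchBuilder.Switch(x).Case(1, a).Case(2, b).Default(d);` and the ordering is enforced by the return types alone.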

    Thank you,

    Artur Trosin

    Read more...

  • Rhino Tools: Rhino Security Guide

In this post I want to discuss the basics of Rhino Security, which was developed by Ayende. It is a nice security model implementation that can easily be integrated and adapted into many applications and scenarios. The intention of the post is to give you a good starting point with Rhino Security. I will try to follow KISS (Keep It Simple) in my explanations (yes, I know that “It's All Relative” :) ) and also keep the required infrastructure to the minimum needed for the startup. Rhino Security is part of Rhino Tools, a set of already mature, reusable classes and tools that cover various scenarios, such as the well known:

1. Rhino Mocks – mocking framework

2. Rhino Service Bus – enterprise service bus implementation

3. Rhino Commons – set of reusable classes (working with threads, IoC, helpful HTTP modules, etc.)

4. Rhino ETL (Extract Transform Load) – library that allows moving data in various formats

5. Rhino Queues – queuing service that queues over the Internet

6. Rhino DHT – Rhino persistent hash table

7. And many other goodies.

Rhino Security, in my opinion, is less popular than the other Rhino projects; one reason could be that there is not much information about it on the web. That is why I want to cover the basics of Rhino Security and show how easily you can set up a working application based on it (by “easy” I mean that it is much easier to set up Rhino Security than to try to implement even a small part of it yourself).

The Rhino Security implementation is not very large in terms of lines of code, but it builds on many other sets of classes, which I will describe shortly. As its persistence mechanism Rhino Security uses NHibernate, which lets you integrate very easily with many types of RDBMS. To be more precise, Rhino Security can be configured with either Castle ActiveRecord or NHibernate. Castle ActiveRecord is an Active Record pattern implementation that uses NHibernate behind the scenes (mmm... yes... it is not only an Active Record pattern implementation; it has many more features than the classic Active Record pattern describes). I’ve chosen NHibernate because it is more popular than Castle ActiveRecord (Castle AR), and because Castle AR can’t be used without NHibernate, while NHibernate can be used without Castle AR :).

Let’s take the first steps and get all the Rhino bits from the SVN repository; here is the SVN link:

    https://rhino-tools.svn.sourceforge.net/svnroot/rhino-tools/trunk/

In order to check out the repository you can use any SVN client; there are many free ones:

1) http://tortoisesvn.net/about – easy to use and integrated into Windows Explorer

2) SmartSVN – this SVN client comes in two editions, a free (more limited) one and a professional one; I like the client and use it in my day-to-day work, and even the free edition is powerful enough to handle various scenarios.

I will not dive deep into the “how to check out” details; the result of checking out the trunk is shown in the next image (at the moment of writing the latest trunk revision is 2077). Rhino Security is in the “security” folder, but we will not touch it yet. The root folder contains “readme” files with additional information, as well as various NAnt and .bat files that invoke NAnt and build the entire project tree. To build a little faster, run the “build_without_tests.cmd” file. After a black screen and a lot of blinking lines, a “build” folder will be generated containing all the assemblies for all Rhino projects and their dependencies.

If you have completed all the steps successfully, let’s set up our first Rhino Security project; if not, all the required assemblies and the sample project are in the attached zip file.

    Rhino Folder Structure

So, create a new folder that will hold our new project. Then, in the root of the just-created folder, create a “libs” folder where we will place all the required assembly references and dependencies.

Now copy the following 20 assemblies from Rhino’s generated “build” folder to the “libs” folder:

    Rhino Libs

Then open Visual Studio and create a new console project named “RhinoSecurity”, and place it in the created root folder (at the same level as the “libs” folder).

    Rhino Sample

In the just-created project, reference the following assemblies from our “libs” folder:

    Rhino Sample

NHibernate.ByteCode.Castle.dll – this assembly is not used directly from our code, but we reference it because it is loaded at runtime and should be copied to the output bin folder. The assembly contains implementations of three NHibernate interfaces which are used to generate in-memory proxy classes for our mapped POCO classes, such as the User entity discussed further below (this is why the mapped properties need to be virtual). Here the Castle implementation is used, but you can find others.

Now we have a good base to start writing our logic. But before diving into implementation details, let’s discuss a few Rhino Security types that help us integrate Rhino Security with our domain model. Most systems nowadays have a User entity in the domain model; the User entity can have various properties that are specific to each domain. Rhino Security allows attaching the security implementation to any User entity with minimal changes to the existing entity. But how does Rhino Security know which entity in our model represents the user? The Rhino.Security.IUser interface is responsible for that: it marks the User entity, and the only member of the interface that must be implemented is the SecurityInfo property.

Here is a typical implementation of the interface, which fits many common User implementations:

    public class User : IUser

    {

        private long id;

        private string name;

     

        public virtual long Id

        {

            get { return id; }

            set { id = value; }

        }

     

        public virtual string Name

        {

            get { return name; }

            set { name = value; }

        }

     

        /// <summary>

        /// Gets or sets the security info for this user

        /// </summary>

        /// <value>The security info.</value>

        public virtual SecurityInfo SecurityInfo

        {

            get { return new SecurityInfo(name, id); }

        }

    }

The Id property is usually the identity of the User entity (it does not necessarily have to be a long). The SecurityInfo property is used by Rhino Security to attach our User entity to the Rhino Security implementation, which uses it internally to manage authorization logic for the user. [The mapping file for the User entity and the NHibernate configuration are in the provided source code sample.] Now I’ll describe the main Rhino Security model entities from my point of view; we will explore them in practice shortly:

- UserGroup – a user group defined by its name; it is used to associate users with the group and to define common permissions for the users that belong to it. Groups can be structured hierarchically in parent/child relationships. In some cases you can think of user groups in terms of roles that have a set of permissions, e.g. Administrator, Guest, etc.

- Operation – a named operation that can also be structured hierarchically using the convention “/Content/View”, “/Content/Edit”, etc. So if we create a new “/Content/Edit” operation, two operations are created: “/Content” as the parent and “/Content/Edit” as its child. You can then allow or deny an operation for a User or a UserGroup. When an operation is allowed or denied for a User, UserGroup, etc., a level can also be specified; levels define the importance of the permission, but very often the default level, which is equal to 1, is used. A permission with a higher level is more dominant when deciding whether the operation is allowed. For example: if we deny operation “/Content/View” at the default level for a user and then allow the same operation at level 9, the operation will ultimately be allowed for that user.

- Permission – the result of allowing or denying an operation (e.g. “/Content/View”) for a User or UserGroup (or EntityGroup).

- User – represents any entity that implements the IUser interface; we already covered it above.
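The level rule described under Operation can be sketched with the same fluent permission builder that is used later in this post (the snippet assumes the operations and the userArt entity from the walkthrough below already exist):

```csharp
// Deny "/Content/View" for the user at the default level (1)...
permissionsBuilderService
    .Deny("/Content/View")
    .For(userArt)
    .OnEverything()
    .DefaultLevel()
    .Save();

// ...then allow the same operation for the same user at level 9.
permissionsBuilderService
    .Allow("/Content/View")
    .For(userArt)
    .OnEverything()
    .Level(9)
    .Save();

// The level-9 Allow dominates the level-1 Deny, so the user is allowed.
bool allowed = authorizationService.IsAllowed(userArt, "/Content/View");
```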

I didn’t cover EntityGroups, IEntityInformationExtractor, query permissions, etc. here; maybe I will cover them in future posts, because I want to concentrate on the basic and most common things. I only want to mention that they exist and that you can use them to associate permissions with any entity that implements the IEntityInformationExtractor interface.

To complete the picture, I want to show the Rhino Security data model and comment on it:

    Rhino Security Database Model

All tables prefixed with “security_” relate directly to Rhino Security; the Users table is used to persist the User entity.

Again, I won’t touch the tables in the red region now because they relate to entities. In short, they are used to associate permissions with entities such as Accounts, either through a given EntitiesGroup or directly (omitting EntitiesGroups); such entities implement IEntityInformationExtractor and are related by SecurityKey (Accounts.SecurityKey to security_EntityReferences.EntitySecurityKey).

So, from the above diagram we can see that we can allow or deny operations via permissions for Users or UserGroups. A User can belong to one or more UserGroups. Operations and UserGroups can be structured hierarchically in parent/child relationships. That’s it... Nothing complicated, is it?

    Back to Code:

So, let’s initialize Rhino Security in order to make it work.

    First of all we need to initialize Windsor Inversion of Control Container and add at least two Windsor facilities:

    1) NHibernateUnitOfWorkFacility

    2) RhinoSecurityFacility

We can do it programmatically or via a configuration file; here is the programmatic version:

    //Create container

    container = new WindsorContainer();

    //Register Unit Of Work Facility instance

    container.Kernel.AddFacility("nh", new NHibernateUnitOfWorkFacility());

//Register Rhino Security Facility instance:

//1) provide Rhino’s DB table naming convention:

    //  Prefix '_'              ex: security_Permissions or

    //  Schema '.'(is default). ex: security.Permissions

    //2) provide User Type that implements IUser

    container.Kernel.AddFacility("security",

                                 new RhinoSecurityFacility(SecurityTableStructure.Prefix, typeof (User)));

All code explanations are in the comments; the only thing I want to add is that this code needs to run once at application startup.

The configuration can be set up in a few ways, and most of the setups I have seen used Rhino Binsor, a domain-specific language written in Boo that allows configuring the Castle Windsor Inversion of Control container without any XML. As some of you may have noticed, I didn’t go the Boo way, because it can look too “exotic” to some folks.

Next, we need to resolve the Rhino Security repository/services/facades that allow us to manage and query the Rhino Security model:

    //The repository is main "player" that allows to manage security model

    authorizationRepository = IoC.Resolve<IAuthorizationRepository>();

    //The service provides authorization information                      

    authorizationService = IoC.Resolve<IAuthorizationService>();

    //Provide a fluent interface that can be used to assign permissions

    permissionsBuilderService = IoC.Resolve<IPermissionsBuilderService>();

    //Allow to retrieve and remove permissions

    //on users, user groups, entities groups and entities.

    permissionService = IoC.Resolve<IPermissionsService>(); 

Now let’s create a transient User entity and save it to the DB.

    //Create a User

var userArt = new User { Name = "ArturTrosin" };

    UnitOfWork.CurrentSession.Save(userArt);

We will use this user entity in what follows. Then create two groups, a parent and a child:

    //Create "AdminUserGroup" UsersGroup                 

    authorizationRepository.CreateUsersGroup("AdminUserGroup");

    //Create Child group for "AdminUserGroup"

    authorizationRepository.CreateChildUserGroupOf("AdminUserGroup", "GuestUserGroup");

    Then create three operations:

    //Create two operations: root /Content          operation

    //              and its child /Content/Manage   operation

    authorizationRepository.CreateOperation("/Content/Manage");

    UnitOfWork.Current.TransactionalFlush();

    //Create third operation as child of the /Content

    authorizationRepository.CreateOperation("/Content/View");

    Associate user with created group:

//add the user to "AdminUserGroup", so all further permissions for "AdminUserGroup"

//are also applied to the user

    authorizationRepository.AssociateUserWith(userArt, "AdminUserGroup");

Here is how we can associate two permissions, one with a user group and one with a user:

    //Create Permission using Builder Pattern

    //and its fluent interface:              

    permissionsBuilderService

      //Allow "/Content" Operation

      .Allow("/Content")

      //for "AdminUserGroup"

      .For("AdminUserGroup")

//Could be specified on an entity group

//or an entity that implements IEntityInformationExtractor

      .OnEverything()

      //Here could be specified Permission priority Level

//DefaultLevel is equal to 1

      .DefaultLevel()

      .Save();

     

    //Create Deny permission for user with level 5   

    permissionsBuilderService

      .Deny("/Content/View")                 

      .For(userArt)                

      .OnEverything()                 

      .Level(5)

      .Save();

     

The next lines demonstrate a few Rhino Security methods that allow you to retrieve various information:

    //Ask users allowance for an Operation

    bool isAllowedContentOp = authorizationService.IsAllowed(userArt, "/Content");

    bool isAllowedContentManageOp = authorizationService.IsAllowed(userArt, "/Content/Manage");

     

    //Retrieve Rhino Security entities 

    UsersGroup adminUsersGroupWithoutUser = authorizationRepository.GetUsersGroupByName("AdminUserGroup");

    Operation contentViewOp = authorizationRepository.GetOperationByName("/Content/View");

    Permission[] userArtPermission = permissionService.GetPermissionsFor(userArt);

     

//Retrieve authorization info that can help to

//understand the reason an operation is allowed (or not);

//it is very helpful for debugging

    AuthorizationInformation authInfo = authorizationService

                                                       .GetAuthorizationInformation(userArt, "/Content");

    And finally remove created entities:

    //Cleanup created entities

    authorizationRepository.RemoveOperation("/Content/Manage");

    authorizationRepository.RemoveOperation("/Content/View");

    authorizationRepository.RemoveOperation("/Content");

    //Remove child group first

    authorizationRepository.RemoveUsersGroup("GuestUserGroup"); 

    authorizationRepository.RemoveUsersGroup("AdminUserGroup");              

     

    authorizationRepository.RemoveUser(userArt);               

    UnitOfWork.CurrentSession.Delete(userArt); 

    Link to full source code and libs is here.

Note that I didn’t show the UoW Flush method calls, which persist the changes to the DB. And of course, in order to run the code you need a database with the Rhino Security DB schema, similar to the one shown earlier; there is a method that does all the dirty work for you:

    new DbSchema().Generate(container);

The only thing you need to take care of is to create an empty DB (named Security_Test) and verify that the connection string in the hibernate.cfg.xml file is set properly.
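For reference, the relevant part of a hibernate.cfg.xml for this setup might look roughly like this (the server name, driver, and dialect values are assumptions; check the file shipped with the sample):

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
    <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
    <!-- Point this at your own SQL Server instance;
         the Security_Test DB must already exist (and be empty) -->
    <property name="connection.connection_string">
      Server=localhost;Database=Security_Test;Integrated Security=SSPI;
    </property>
  </session-factory>
</hibernate-configuration>
```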

     Conclusion

Even though Rhino Tools may not be compatible between SVN revisions, most of the Rhino bits are already used in production environments and are very popular, with great community support.

So, I would recommend taking Rhino Security into account if you already use NHibernate and need a similar model, or if your required model can be expressed with the Rhino Security model.

Note: the code was tested with VS 2008 SP1 and SQL Server 2005.

    Thank you,

    Artur Trosin

     

    Read more...

  • ReSharper Tip: Manual Code Reordering

Most of you already know what ReSharper is and how powerful it is; its shortcuts can improve productivity drastically. That is why I really recommend that anyone who already has a ReSharper license learn its basic keyboard shortcuts. Here is the full ReSharper keyboard shortcut map: http://www.jetbrains.com/resharper/documentation/feature_map.html.

I don’t want to go through all the ReSharper keyboard shortcuts now, but I do want to show a tip that I discovered a few days ago, called “Manual Code Reordering”. For some of you it may not be new, but for others it might be, which is why I want to demonstrate it in this post.

Let’s say you have a method or code block that you want to move up or down; usually you would cut it and paste it in the right place. ReSharper has a keyboard shortcut for that: place your cursor on the method name (or another code block) and press Ctrl+Alt+Shift; after holding the keys for a moment, your code block becomes selected:

    Method Move

Now, while holding the keys, you can move the method up or down within the class; you can do the same with other code blocks (simple statements, or structures such as if/for/foreach, etc.).

But that is not all: if you place your cursor on an operand, ReSharper lets you change the operand order using the Left/Right arrow keys, like this:

    Operator Move

Maybe some of you will find this feature less useful than others, but the shortcut saves me some coding time. I hope you like it :).

Ah, I use ReSharper 4.1, and my ReSharper keyboard scheme is IntelliJ IDEA (ReSharper -> Options -> General -> IntelliJ IDEA).

PS: If you don’t find something useful, maybe you just haven’t been in a situation where it would be? :)

    Thank you,

    Artur Trosin

    Read more...