Tales from the Evil Empire

Bertrand Le Roy's blog




August 2010 - Posts

Building my new blog with Orchard – Part 1

Building the new house...

Several people have asked me if I would move my blog to Orchard. There are actually several challenges with this that have nothing to do with Orchard itself, but suffice it to say that right now I’m not really considering it.

On the other hand, for a long time I’ve been wanting to create a second, more personal blog about movies, books, video games and opinions to clearly separate the software stuff from the rest. I’ve been posting several times on science, games and even on politics here but it always felt a little wrong and I felt obligated to tone it down seeing that this blog has a clear association with my employer, Microsoft.

Anyway, the release of Orchard 0.5 looks like the perfect opportunity to create that new blog. I have big plans beyond just blogging for this site and the flexibility of the Orchard platform will be perfect for this.

I will document the whole process here as it unfolds.

The Orchard 0.5.144 zip from CodePlex

I’m starting with a standard 0.5.144 zip release as downloaded from CodePlex. As I wanted to be able to do local module development that I would later deploy to my hosted account, I started by deploying into a local IIS 7 directory configured to run in the default ASP.NET 4.0 application pool. This way, I can point VS or WebMatrix to my local directory and hack new modules.

My next step was to deploy the files to my hoster after convincing him to switch my site to the 4.0 app pool. I used WebMatrix to do the FTP transfer as I find the publication UI to be quite nice, but I could have used any FTP client (I also use FileZilla). I deployed everything except for the contents of the App_Data folder.

Publishing from WebMatrix

One thing I had to do was to set up the machineKey web.config entry (always a good thing to do) to work around a MAC validation bug that has since been fixed in our dev branch.
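For reference, a machineKey entry has the following general shape (a sketch; the key values below are placeholders, so generate your own random keys rather than copying any sample):

```xml
<system.web>
  <!-- Fixed keys keep view state MAC validation and forms auth tickets
       stable across app domain recycles and multiple servers.
       Replace the placeholder values with generated random hex strings. -->
  <machineKey
      validationKey="[64 or more random hex characters]"
      decryptionKey="[48 or more random hex characters]"
      validation="SHA1"
      decryption="AES" />
</system.web>
```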

While I was in config, I deleted the following lines, which declare the dynamic compilation provider:

  <buildProviders>
    <add extension=".csproj"
         type="…CSharpExtensionBuildProviderShim" />
  </buildProviders>

Dynamic compilation is a fantastic feature for local development as it enables you to build and modify modules without explicitly compiling from Visual Studio and even without Visual Studio altogether. You can just save your files as you go and it will just get picked up and compiled on the fly without your having to do anything. It is what makes Orchard module development possible using only free tools.

But on a production server, I would consider it a liability. First, YAGNI. Second, there are about a million different ways it can go wrong (trust me on that).

This being done, it was time to hit the site for the first time, which resulted in the Orchard setup screen showing up. I was able in less than a minute to specify my site name, the login I wanted to use for the super-user and my password.

I chose to use the default SqlCE database because database backups and exports at my hoster are overly complex, and SqlCE will enable me to download a snapshot of the site whenever I want to back it up, database included. It also makes it trivial for me to get my production data back down to my local development version of the site.

I immediately changed the theme (can’t stand the current Contoso default) to use the clean white classic theme. I will modify it heavily after the next milestone when the theme engine is done, but this will do nicely in the meantime.

Changing the theme to classic white

After that was done, I created a blog, added it to the menu and set it as the site’s home page.

Creating a new blog

Now that I had the blog as my new home page, I didn’t need the old home page, so I went ahead and deleted it from the “manage contents” screen.

Finally, I created an About page and wrote the contents I wanted there. I added the page to the site’s menu and disabled comments.

Creating the About Page

And that’s pretty much where I’m at today. In the next post, I’ll show how I will import existing posts with their comments from my Facebook wall and from this blog.

The site can be reached at this address (but there isn’t much to see yet):

Part 2 of this series can be read here:

Clay: malleable C# dynamic objects – part 2

In the first part of this post, I explained what requirements we have for the view models in Orchard and why we think dynamic is a good fit for such an object model.

This time, we’re going to look at Louis’ Clay library and how you can use it to create object graphs and consume them.

But before we do that, I want to address a couple of questions.

1. If we use dynamic, aren’t we losing IntelliSense and compile-time checking and all the good things that come with statically-typed languages? And is C# becoming overloaded with concepts, and trying to be good at everything but becoming good at nothing?

Hush, hush, everything is going to be all right. Relax.

Now think of all the XML/DOM styles of APIs that you know in .NET (or Java for that matter). Except if they are doing code generation, you already don’t get IntelliSense and compile-time checking. Well, you do get them on the meta-model, which you care very little about, but not on the actual data, which you do care a lot about. So it’s ok, we can afford to lose what we didn’t have in the first place. And there’s always testing, right?

As for the evolution of C#, come on. You should be happy it’s still alive and innovating. Take a deep dive into what expression trees really mean for example. It’s a beautiful thing.

2. What’s wrong with ExpandoObject?

Nothing, but we can do better. ExpandoObject is actually implemented in a surprising way that makes it very efficient. Hint: there is no dictionary in there. Other hint: it’s a beautiful thing.

But in terms of API usability it’s not very daring and in particular it does not do much to help you build deep dynamic object graphs. Its behavior is also fixed and can’t be extended.
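To make that concrete, here is a small sketch of the friction (not from the original post, and using modern top-level statements for brevity): with ExpandoObject, every intermediate node of a graph must be created explicitly before you can hang properties off it:

```csharp
using System;
using System.Dynamic;

dynamic e = new ExpandoObject();

// Every intermediate node has to be created by hand...
e.Person = new ExpandoObject();
e.Person.FirstName = "Louis";

// ...and touching a node that was never created fails at run time:
// e.Address.City = "Redmond"; // RuntimeBinderException

Console.WriteLine(e.Person.FirstName);
```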

Clay on the other hand is highly extensible and focuses on creation and consumption of deep graphs.

Let’s get started. The first thing you can do with Clay is create a simple object and set properties on it. Before we do that, we’ll instantiate a factory that will give us some nice syntactic sugar. I wish we could skip this step and use some kind of static API instead, but we can’t. Well, it’s a small price to pay, as you’ll see:

dynamic New = new ClayFactory();

Now this “New” object will help us create new Clay objects, as the name implies (although this name is just a convention).

Here’s something easy and not too surprising:

var person = New.Person();
person.FirstName = "Louis";
person.LastName = "Dejardin";

Nothing you couldn’t do with ExpandoObject but where this gets interesting is that there is more than one way to skin that cat, and that is going to open up a lot of possibilities.

For instance in Clay, indexer syntax and property accessors are equivalent, just as they are in JavaScript. This is very useful when you are writing code that accesses a property by name without knowing that name at compile-time:

var person = New.Person();
person["FirstName"] = "Louis";
person["LastName"] = "Dejardin";

But that’s not all. You can also use properties as chainable setters, jQuery-style:

var person = New.Person()
    .FirstName("Louis")
    .LastName("Dejardin");

Or you can pass an anonymous object in if that’s your fancy:

var person = New.Person(new {
    FirstName = "Louis",
    LastName = "Dejardin"
});

Even better, Clay also understands named arguments, which enables us to write this:

var person = New.Person(
    FirstName: "Louis",
    LastName: "Dejardin"
);

In summary, there are a lot of ways you can set properties and initialize Clay objects.

As you’d expect, accessing properties can also be done in a number of ways and all of the following are equivalent:

var firstName = person.FirstName;
firstName = person["FirstName"];
firstName = person.FirstName();
You can also create JavaScript-style arrays:

var people = New.Array(
    New.Person().FirstName("Louis").LastName("Dejardin"),
    New.Person().FirstName("Bertrand").LastName("Le Roy")
);

That array is also a full Clay object, meaning that you can add properties to it on the fly.

Then you can do this in order to count the items in the array and to access the FirstName property of the first item in the array:

var count = people.Count;
var firstName = people[0].FirstName;
It’s even easier when you want to create an array property on an existing Clay object:

person.Aliases("bleroy", "BoudinFatal");

If more than one argument gets passed in, Clay assumes that you’re initializing the property as an array. If you have zero or one argument, just explicitly pass in an array (CLR or Clay):

person.Aliases(new[] {"Lou"});

Contrary to CLR arrays, Clay arrays can grow dynamically:

people.Add(New.Person().FirstName("Renaud").LastName("Paquay"));

And they also respond to a number of methods such as AddRange, Insert, Remove, RemoveAt, Contains, IndexOf, and CopyTo.

Putting all this together, we can create a reasonably complex object graph with a fairly expressive and terse syntax:

var directory = New.Array(
    New.Person(
        FirstName: "Louis",
        LastName: "Dejardin",
        Aliases: new[] { "Lou" }
    ),
    New.Person(
        FirstName: "Bertrand",
        LastName: "Le Roy"
    ).Aliases("bleroy", "boudin"),
    New.Person(
        FirstName: "Renaud",
        LastName: "Paquay"
    ).Aliases("Your Scruminess", "Chef")
).Name("Some Orchard folks");

There’s one last thing I’d like to show that I found really neat and surprising the first time Louis showed it to me.

Imagine that you have a CLR interface that you need to implement, for example:

public interface IPerson {
    string FirstName { get; set; }
    string LastName { get; set; }
}

but you would like to do it using a Clay object such as one of the persons defined above. Well, you can do this:

IPerson lou = people[0];
var fullName = lou.FirstName + " " + lou.LastName;

What’s extraordinary here is that lou is a perfectly valid statically typed CLR variable. You’ll get full IntelliSense and compile-time checks on that code. It’s just an object that implements IPerson although we never wrote a concrete type implementing that interface.

What makes the magic possible is that Clay is overriding the cast operator and creating a dynamic proxy for the interface (using Castle) that delegates the members to the Clay object.

So there is an actual CLR type but it’s being code-generated on the fly.

That is what enables you to write the following:

foreach (var person in directory) {
    Console.WriteLine(person.FirstName);
}
What’s happening here is that the “directory” Clay array is being cast into an IEnumerable, for which all the right methods are implemented by the dynamic Clay array object.

We are going to dig deeper into Clay in future posts as there is so much more to do and hack, but this should give a satisfactory introduction to the basic intent of the library. I certainly hope you like it and come up with some crazy ideas about how to use it.

You can find Clay here: http://clay.codeplex.com/

The first part of this post is here:

Clay: malleable C# dynamic objects – part 1: why we need it

When trying to build the right data structure in Orchard to contain a view model to which multiple entities blindly contribute, it became obvious pretty fast that using a dynamic structure of sorts was a must.

What we needed was a hierarchical structure: a page can have a list of blog posts and a few widgets, each blog post is the composition of a number of parts such as comments, comments have authors, which can have avatars, ratings, etc.

That gets us to the second requirement, which is that multiple entities that don’t know about each other must contribute to building that object graph. We don’t know the shape of the graph in advance and every node you build is susceptible to being expanded with new nodes.

The problem is that C# static types are not that much fun to build with those requirements.

You could use an XML DOM kind of API with ChildNodes and Attributes collections and NodeName and Value properties and that would absolutely work.

But I think most people would agree that any long-term exposure to this style of API is a serious cause of depression; it has sucked the will to live out of so many developers that we don’t want to go there unless a gun is pointed at our heads.

The main reason why those APIs are so dreadfully boring is that they give you access to the metadata first and hide access to the actual data (which is what you really care about) under secondary APIs such as Value.

The value of a node in an object graph is the one thing you care about the most. The second thing you want easy access to is the children of the node. You want to be able to access them by index or by name.

Honestly, which one would you rather write?

var firstName = person.ChildNodes["FirstName"].Value;

or this?

var firstName = person.FirstName;

Yeah, I thought so. The first option feels almost like using reflection to do simple property dereferencing.

OK, so it should be clear by now that the reason why XML APIs are so un-fun in C# is that static languages hate unpredictability and want to know everything about an object at compile-time. They accept what is known in advance (nodes have meta-data that is stable in structure) and relegate what’s unknown to properties.

In other terms, you end up with the real object being a property of the meta-data structure, whereas the meta-data should be a property of the object.

Before I conclude, I want to say a word about anonymous objects, which have been around for a while and are commonly used to pass loosely-structured option parameters like this:

Html.TextBoxFor(m => m.CurrentUser, new {
    title = "Please type your user name",
    style = "float:right;"
});

It should be noted that these anonymous objects, while very flexible at creation time, are basically immutable: once you’ve built them, you cannot add new properties or methods to them, which makes them unsuitable for our use case.

Fortunately for us, C# 4.0 has a great new keyword ready for all kinds of abuse: dynamic.

In part 2, I’ll show how Clay, a small library that Lou wrote, is solving our problem in a very nice and elegant way.

The Clay library: http://clay.codeplex.com

Part 2:

Optional named parameters work pretty well

Rob has found a use for dynamic:

Yay! Let’s celebrate!

Well, I was a little puzzled because I don’t think it quite adds up in the specific example he chose (although please see no aggressiveness here: Rob’s a friend; peace!). The idea is to have the same flexibility that a dynamic language can offer in terms of evolution of an API. Here’s his original Ruby example:

def my_method(args)
  thing_one = args[:thing1]
  thing_two = args[:thing2]
end

my_method :thing1 => "value", :thing2 => Time.now

The idea, which is quite common in dynamic languages, is that instead of passing a list of predetermined parameters in a specific order, you pass in a dictionary of sorts that contains a list of named parameters in no particular order. It’s more readable (think in particular of the inexpressive mess that C# is when you have a few Boolean parameters on a method) and it’s more flexible: if you need to accept a new option or parameter, just do it and existing code won’t break. Your code can just assume a default value for the new option and that’s it.

Which is something that C# 4.0 does quite well with named optional parameters. Here was Rob’s solution by the way, which is quite smart:

static void DoStuff(dynamic args) {
    Console.WriteLine(args.Message);
}

DoStuff(new { Message = "Hello Monkey" });

And here’s the same thing expressed with optional parameters:

static void DoStuff(string message = null) {
    Console.WriteLine(message);
}

DoStuff(message: "Hello Monkey");

In terms of readability and flexibility, both approaches are roughly equivalent, but optional parameters provide the opportunity for IntelliSense (both for code inside the method and for client code) as well as static compile-time type checking. If you want to add a parameter, that’s really easy: just add it. If the calling code relies on named parameters, it will still work, as the value for the newly added parameter will be assumed to be the default.

Or will it? Actually, not exactly: this will require a recompilation of the client app, which is a FAIL of sorts. The reason is that optional parameters really are mostly a compilation trick, so if you change the signature, even though the client code doesn’t need to change, the compiled version of it does, as it needs to bind to a different method, with a different signature.
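A sketch of why (hypothetical code, not from the original post; the compiler copies omitted defaults into the call site at compile time):

```csharp
using System;

static string DoStuff(string message = null, bool verbose = false)
    => $"{message} (verbose: {verbose})";

// The compiler rewrites this call...
Console.WriteLine(DoStuff(message: "Hello Monkey"));
// ...into the equivalent of DoStuff("Hello Monkey", false): the default
// for 'verbose' is baked into the caller's compiled code. If the method
// later changes a default, already-compiled callers keep passing the old
// value; if it gains or reorders parameters, they fail to bind until
// they are recompiled against the new signature.
```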

But on expressiveness and flexibility criteria only, I would pick optional parameters instead of the dynamic approach every time.

Now don’t get me wrong, an anonymous object as an options parameter still has its place when the method it’s being passed to is going to pass down the contents of the object without caring about its precise structure.

An example of that, also taken from Rob’s post, is an HTML helper that takes an option object that gets transformed into attributes of the tag being rendered:

Html.TextBox("name", "value", new {size = 20});

Here, the whole point is to enable custom attributes that the API author did not think about in advance and let those flow into the markup. The code for the helper doesn’t expect anything in particular from this object, it just copies properties over. I think an option object in this case is fine.

Although one can’t help but wish there were some super-easy JSON-like syntactic sugar in C# to create dictionaries, inferring the key and value types at compile-time, something like that:

var a = new {
    foo = "bar",
    bar = "baz",
    answer = 42
};
but that wouldn't require reflection or dynamic.
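For comparison (a sketch, not from the original post), the closest construct C# already has is the collection initializer on Dictionary&lt;string, object&gt;: noisier, but with no reflection and no dynamic involved:

```csharp
using System;
using System.Collections.Generic;

// Statically typed dictionary built with a collection initializer;
// the key and value types are spelled out rather than inferred.
var a = new Dictionary<string, object> {
    { "foo", "bar" },
    { "bar", "baz" },
    { "answer", 42 }
};

Console.WriteLine(a["answer"]);
```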

On the other hand the HTML helper case is also a case where the use of dynamic does not bring a lot of value unless I’m missing something…

UPDATE: Phil explains in details why changing the signature requires a recompilation of the client code: http://haacked.com/archive/2010/08/10/versioning-issues-with-optional-arguments.aspx.

Orchard 0.5 is out

Before I joined Microsoft seven years ago, I had spent a couple of years building a Web CMS. It wasn’t open-source unfortunately but the experience convinced me that most public-facing web sites would shortly use some form of CMS. I also forged strong opinions about the right level of component granularity that a CMS must implement.

For the last year and a half, I have been fortunate enough to work with a talented small team within ASP.NET and with a growing community from all around the world on building a new Web CMS on top of ASP.NET MVC.

Today I am very happy to invite my readers to check out some of the results of that work: earlier this week, we released version 0.5 of Orchard.

We are far from being done, but this is an important milestone in a couple of ways.

First, the set of features that we implemented makes it reasonable to build some sites with Orchard.

Second, our developer story is now fairly complete: we have a reasonable and stable story that you can bet on today to build extensions (the UI story is not at that point yet and is what we’ll work on next). This means that from this point on, progress should begin to come more from the community than from our small team.

If we’ve done our jobs right, it should be loads of fun.

I just can’t wait to see the cool stuff that you’re going to build…

Orchard 0.5:


What did you start programming on?

There’s some kind of controversy going on today in our microcosm. I don’t want to enter that controversy because I think nobody’s willing to listen to anybody but themselves.

Instead, I want to propose something different, a trip down memory lane. Most people reading this blog are professional developers who in general care about good practice and good craftsmanship. I do too.

But I also remember how I got started with computers. It wasn’t in a computer science class. I learned by myself on a TI-99/4A, in BASIC first and then Extended BASIC.

It was a magical experience in more than one way: I didn’t really understand what I was doing or what was going on in the computer. I was just doing whatever worked. I had no idea programming computers was hard because it didn’t seem to be.

I was reading books and magazines where I couldn’t understand half the words (I’m French and most of the literature if not all of it was in English) but it didn’t matter. I had silly notions at first, for example, I remember distinctly asking myself if a goto would rewind the cassette tape to go back to that instruction. That is how ignorant I was.

Despite all that, I was able to produce a bundle of spaghetti code that would probably pop my eyes out today but that was a decent video game that I was able to sell to a few people. None of them looked at the code and told me “you’re doing it wrong”. That encouraged me and allowed me to buy new toys. I went on to learn 6502 assembly and dug deeper and deeper into lower layers of my new Atari 800XL. I was starting to make good sense of all this.

Later, I started web programming. Again, silly notions about what was happening behind the scenes. I had no idea what a database really was, it was just a place where I could store stuff and magically get it out later. I had no idea what an object was, just that I could put a bunch of properties on a variable. Inheritance? What’s that?

And here I am, 30 years later, thinking that I’m not too bad at what I’m doing, and occasionally pontificating on the silly things that n00bs can do sometimes.

But there is one thing that I know: I didn’t learn all of that in a week. It took me 30 years to learn all I know today about computers. It was an extremely slow process of doing stuff without understanding it and slowly digging through the silliness to find how it made sense. That’s how I’ve always learned and how I still do it today.

So seriously, try to remember how things were before you became this über-computer geek. How did you start programming? What hardware did you use? What language? What were the silly things you believed?

I can’t wait to see your answers in comments or in trackbacks.
