Tales from the Evil Empire

Bertrand Le Roy's blog

Asking questions is a skill

If you’re going to get into any sort of technical job, you’re going to have to ask questions. A lot of questions. Unfortunately, too few people understand how to ask questions properly. Asking questions is a skill. It has to be learnt.

Here are a few of the most common mistakes I see people make every day on Stack Overflow, or on other technical forums…

1. Not enough information

“I installed Orchard, and it doesn’t work. Please help”

Put yourself in the shoes of someone trying to answer this. Can you honestly imagine anyone could? What do you expect the answer to this could possibly be?

How does it not work? What did you do? What were you expecting, and what happened instead?

Ask yourself: is there enough information for anybody who’s not you to figure out what the problem is?

Notice how asking a question that is too vague will only get you more questions in return. That is a sure sign, and one you should learn to read and interpret.

2. Too much information

“[wall of text explaining your setup with a luxury of irrelevant details…]
Why does the screen flash blue when I click the green button?”

The people who are likely to be able to answer your question can’t invest a lot of their time understanding all the specifics of your business, nor should they have to. It’s nice of you to spend the time to give all those details, but that doesn’t mean you should expect your interlocutors to spend any time reading them.

Focus on the details that are immediately useful to answering the question. Distil the question itself until it is free of any specificities that make it applicable only to your particular case. Not only will you make it more likely that someone reads it to the end and answers it, you also improve the odds that your question and its answer will be useful to others.

3. Aggressive rants

“I CLICKED THE F$#%ING GREEN BUTTON, AND THE SCREEN DIDN’T FLASH BLUE. I CAN’T BELIEVE PEOPLE BUILD PIECES OF S&*T LIKE THIS AND THEN EXPECT US TO USE THEM.”

Don’t insult the people you’re asking answers from, and don’t insult their work either. Stay civil and professional. That’s a simple one to understand. I can’t believe that people can be so thick in the head that they can’t understand something that F$#%ING SIMPLE :p

4. More than one question at a time

“Where is the green button? What will happen if I press it? Is it normal that the screen flashes blue? Should I build my web site with a CMS? What time is it?”

This one is subtle, and is more about human psychology and common cognitive abilities. If you ask more than one question at a time, you’re lowering your chances of getting a good answer to any of them. Human beings are not good at multitasking. Make it easy for your questions to be answered by asking them one by one. Doing so will also avoid creating a messy and frustrating thread. Last but not least, your questions are going to be more googleable if they are isolated and stand on their own.

5. No question at all

“I pressed the green button. The screen flashed blue.”

Be sure that your question actually is one. Sometimes, symptoms like those in #2 can be seen, but no discernible question can be extracted from the verbose mess. Have a clear and concise question that makes it very obvious what you’re asking, and that ends with an actual question mark. We should not have to figure out what your question is. You have to tell us.

6. Expecting people to work for free, or to do your homework

“My boss asked me to evaluate Orchard vs. Umbraco. Please send me a spreadsheet of strengths and weaknesses of each, with a dissertation of why your preferred system would be the right choice for my business.” … [five minutes later]… “Why has nobody answered my question? Is this project dead?”

“Give me teh code”

“What does ‘CMS’ mean?”

There are many people out there who spend a lot of their time answering questions, so it’s easy to get carried away and expect those same people to do, for free, work that you should normally pay them for, or that you should do yourself.

You must do a minimum of basic research before you ask the community to solve your problems. Any question that is likely to be answered with a link to Wikipedia or to LMGTFY is simply not worth asking. The answer is already out there. You are doing yourself a disservice by being lazy, you’ll look like an idiot, and your question is not going to help anyone else.

Do not ask people to do your research or your homework, or if you do, be prepared to pay them for it.

In conclusion…

Ask questions, lots of them: it’s the best way to learn. There are people out there who are eager to give you answers. But for this to happen, you need to treat question asking as a basic technical and human skill. I’m only aiming at the low-hanging fruit here: there is much more to this than a blogger could hope to address in a simple post. Consider your own weaknesses and work on them. You are going to notice the quality of the answers you get rise steadily as you do so. You are also going to learn how formulating the right questions often gets one halfway to solving the problem.

The CMS ecosystem and Microsoft

CMS is extremely important strategically for any web company. About 35% of web sites use a CMS, and the top ones are all PHP (WordPress on its own is more than 20% of all web sites). In other words, if you care about the market share of your web platform, you need a good CMS running on it.

Before I left Microsoft, I was working on Orchard. I’m one of the people who founded and built it, from within the ASP.NET team, with great support from management, support that has continued even after I left the company.

Today, the team has one engineer working full-time on the platform, and that’s marvelous. It’s not just any engineer either, it’s Sébastien Ros, one of the best developers I know. Microsoft is one of the main sponsors of the Harvest conference, and they also use Orchard in several web properties.

For all these reasons, I’m having a really hard time understanding the deafening silence from Microsoft’s web platform team about CMS in general, and Orchard in particular. Updates on what’s new on the platform never include Orchard. For instance, when I asked some of the Scotts (Hanselman and Hunter; and yes, you have to change your name to Scott in recognition of the great Gu if you want to join that team) on Twitter about why there wasn’t a video about Orchard among their great new batch of “Building Modern Web Apps” videos, I got what looks like a knee-jerk reaction from Scott:

@shanselman: @bleroy @coolcsh and the DotNetNuke one and the sharepoint one and the umbraco one and the sitefinity one? You're welcome to make them.

I have several things to say about this.

1. I did make videos, lots of them, on my own time, and others do too. None of them have been relayed in any blog post or tweet that I’ve seen from the team. And of course, we’re talking about Microsoft producing new videos here, not some guy making a YouTube clip…

2. There are videos about WebAPI, but not about ServiceStack; or about MVC, but not Nancy (although to be fair, Nancy does get some official support from Microsoft). Microsoft talks a lot about NuGet (an Outercurve project, like Orchard), but not of OpenWrap. Scott’s argument does not seem to be a valid comparison.

3. Yes, you should absolutely talk about and support the .NET CMS ecosystem. Why exactly shouldn’t you talk about DNN, Umbraco, Orchard, and even Sitefinity? They are all beautiful platforms that help the .NET ecosystem grow. Indeed, why not have Microsoft contribute to DNN and Umbraco as well?

We love C#, we love ASP.NET, and we love Orchard. Does Microsoft?

http://www.asp.net/orchard

UPDATE: I got an answer from Scott Hunter on Twitter: "@coolcsh: @bleroy @shanselman @sebastienros We might have an exciting announcement around Orchard soon. We will try and push more." This makes me quite happy. That's all I'm asking for and I thank Scott for it. Cheers, carry on then.

The Shift: how Orchard painlessly shifted to document storage, and how it’ll affect you

We’ve known it all along. The storage for Orchard content items would be much more efficient using a document database than a relational one. Orchard content items are composed of parts that serialize naturally into infoset kinds of documents. Storing them as relational data like we’ve done so far was unnatural and required the data for a single item to span multiple tables, related through 1-1 relationships. This meant lots of joins in queries, and a great potential for Select N+1 problems.

Document databases, unfortunately, are still a tough sell in many places that prefer the more familiar relational model. Being able to x-copy Orchard to hosters has also been a basic constraint in the design of Orchard. Combine those with the necessity at the time to run in medium trust, and with license compatibility issues, and you’ll find yourself with very few reasonable choices. So we went, a little reluctantly, for relational SQL stores, with the dream of one day transitioning to document storage.

We have played for a while with the idea of building our own document storage on top of SQL databases, and Sébastien implemented something more than decent along those lines, but we had a better way all along that we didn’t notice until recently… In Orchard, there are fields, which are named properties that you can add dynamically to a content part. Because they are so dynamic, we have been storing them as XML into a column on the main content item table. This infoset storage and its associated API are fairly generic, but were only used for fields. The breakthrough was when Sébastien realized how this existing storage could give us the advantages of document storage with minimal changes, while continuing to use relational databases as the substrate.

public bool CommercialPrices {
    get { return this.Retrieve(p => p.CommercialPrices); }
    set { this.Store(p => p.CommercialPrices, value); }
}

This code is very compact and efficient because the API can infer from the expression what the type and name of the property are. It is then able to do the proper conversions for you. For this code to work in a content part, there is no need for a record at all. This is particularly nice for site settings: one query on one table and you get everything you need.

This shows how the existing infoset solves the data storage problem, but you still need to query. Well, for those properties that need to be filtered and sorted on, you can still use the current record-based relational system. This of course continues to work. We do however provide APIs that make it trivial to store into both record properties and the infoset storage in one operation:

public double Price {
    get { return Retrieve(r => r.Price); }
    set { Store(r => r.Price, value); }
}

This code looks strikingly similar to the non-record case above. The difference is that it will manage both the infoset and the record-based storages. The call to the Store method will send the data in both places, keeping them in sync.

The call to the Retrieve method does something even cooler: if the property you’re looking for exists in the infoset, it will return it, but if it doesn’t, it will automatically look into the record for it. And if that wasn’t cool enough, it will take that value from the record and store it into the infoset for the next time it’s required. This means that your data will start automagically migrating to infoset storage just by virtue of using the code above instead of the usual:

public double Price {
    get { return Record.Price; }
    set { Record.Price = value; }
}
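The fallback-and-migrate behavior described above can be pictured with a toy class. This is only an illustrative sketch, not Orchard's actual implementation: a dictionary stands in for the XML infoset, and the delegates stand in for the record accessors.

```csharp
using System;
using System.Collections.Generic;

// Toy model of the read-through migration described above. The dictionary
// stands in for the XML infoset; fromRecord stands in for the record lookup.
class InfosetModel {
    private readonly Dictionary<string, object> _infoset =
        new Dictionary<string, object>();

    public T Retrieve<T>(string name, Func<T> fromRecord) {
        object value;
        if (_infoset.TryGetValue(name, out value))
            return (T)value;              // the infoset wins when it has the value
        var migrated = fromRecord();      // otherwise fall back to the record...
        _infoset[name] = migrated;        // ...and migrate it for the next read
        return migrated;
    }

    public void Store<T>(string name, T value, Action<T> toRecord) {
        _infoset[name] = value;           // write to both places,
        toRecord(value);                  // keeping them in sync
    }
}
```

The second read of any property never touches the record, which is how the Select N+1 issues dissolve over time.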

As your users browse the site, it will get faster and faster as Select N+1 issues optimize themselves away. If you prefer, you can still have explicit migration code, but it really shouldn’t be necessary most of the time. If you already have code using QueryHints to mitigate Select N+1 issues, you might want to reconsider those: with the new system, you’ll want to avoid joins that you don’t need for filtering or sorting, further optimizing your queries.

There are some rare cases where the storage of the property must be handled differently. Check out this string[] property on SearchSettingsPart for example:

public string[] SearchedFields {
    get {
        return (Retrieve<string>("SearchedFields") ?? "")
            .Split(new[] {',', ' '}, StringSplitOptions.RemoveEmptyEntries);
    }
    set { Store("SearchedFields", String.Join(", ", value)); }
}

The array of strings is transformed by the property accessors into and from a comma-separated list stored in a string. The Retrieve and Store overloads used in this case are lower-level versions that explicitly specify the type and name of the attribute to retrieve or store.

You may be wondering what this means for code or operations that look directly at the database tables instead of going through the new infoset APIs. Even if there is a record, the infoset version of the property will win if it exists, so it is necessary to keep the infoset up-to-date. It’s not very complicated, but definitely something to keep in mind. Here is what a product record looks like in Nwazet.Commerce for example:

[Image: A product record]

And here is the same data in the infoset:

[Image: The infoset data]

The infoset is stored in Orchard_Framework_ContentItemRecord or Orchard_Framework_ContentItemVersionRecord, depending on whether the content type is versionable or not. A good way to find what you’re looking for is to inspect the record table first, as it’s usually easier to read, and then get the item record of the same id.

Here is the detailed XML document for this product:

<Data>
  <ProductPart Inventory="40" Price="18" Sku="pi-camera-box"
    OutOfStockMessage="" AllowBackOrder="false"
    Weight="0.2" Size="" ShippingCost="null" IsDigital="false" />
  <ProductAttributesPart Attributes="" />
  <AutoroutePart DisplayAlias="camera-box" />
  <TitlePart Title="Nwazet Pi Camera Box" />
  <BodyPart Text="[...]" />
  <CommonPart CreatedUtc="2013-09-10T00:39:00Z"
    PublishedUtc="2013-09-14T01:07:47Z" />
</Data>

The data is neatly organized under each part. It is easy to see how that document is all you need to know about that content item, all in one table. If you want to modify that data directly in the database, you should be careful to do it in both the record table and the infoset in the content item record.

In this configuration, the record is now nothing more than an index, and will only be used for sorting and filtering.

Of course, it’s perfectly fine to mix record-backed properties and record-less properties on the same part. It really depends on what you think must be sorted and filtered on. In turn, this potentially simplifies migrations considerably.

So here it is, the great shift of Orchard to document storage, something that Orchard has been designed for all along, and that we were able to implement with a satisfying and surprising economy of resources. Expect this code to make its way into the 1.8 version of Orchard when that’s available.

Video: Orchard’s best kept recipes

In this talk that I gave last June for Orchard Harvest in Amsterdam, I showed, in no particular order, my favorite Orchard features, tricks, and modules. Don’t expect a narrative in there, cause there isn’t one, but I’m hoping you’ll learn a thing or two.

What’s your favorite Orchard trick?

My workflow for comment notifications

Workflows in Orchard 1.7 are a damn sweet feature, and in this post I’m going to show you a very simple and useful case: comment moderation and notifications.

Let’s begin by going to the Modules screen and checking that the older Rules module is disabled, and that Workflows is enabled. Once this is done, let’s click on Workflows in the admin menu and click on “Create a new Workflow Definition” at the top-right of the screen.

You’ll be prompted for a name. Let’s choose “Comment notification” and hit Save. You should now have a blank design surface for the workflow. Let’s drag the Content Published activity onto the design surface:

[Image: Dragging the Content Published activity onto the design surface]

Then click on the activity and select the pencil icon to switch to its edit screen:

[Image: Edit the activity]

Find the Comment content type, select it, and save. This sets up the first step of our workflow, to trigger when a comment publication is attempted by a user. If you have comment moderation on, this means that the user submitted the comment, not that it was actually validated and published onto the public web site. If you have spam protection on, this won’t get triggered for failed captchas.

Drag a new Send Email activity onto the surface, next to the first activity. Then grab the blue ball on top of the Content Published activity and drag it onto the new email one:

[Image: Connect the output of the Content Published activity to the Send Email activity]

Click the email activity and select the edit button:

[Image: Edit the email activity]

Fill the edit form as follows:

[Image: The Email activity configuration]

The pattern for the subject is:

A new comment was published on {Site.SiteName}

And the pattern for the body is:

<h3>A new comment was published on {Site.SiteName} by {Content.CommentAuthor}</h3>

<p>{Content.CommentMessage}</p>

<div>
  <a href="{Site.BaseUrl}/Admin/Comments">Moderate comments</a> |
  <a href="{Content.CommentApproveUrl}">Approve</a> |
  <a href="{Content.CommentModerateUrl}">Disapprove</a> |
  <a href="{Content.CommentDeleteUrl}">Delete</a>
</div>

Here we are using a bunch of tokens, which are globally accessible and contextual variables that may be used in Orchard in various places to configure features such as emails dynamically, from the admin UI. All the expressions that are between curly braces will be replaced in the e-mail with actual values.

In this template, we have a notification that a comment was published, then we have the actual text of the comment, and then we have four buttons that will enable the comment moderator to act on that comment in one click, directly from his e-mail client.
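As a mental model (and only that: Orchard's real token engine is considerably richer, with chained and extensible token providers), the curly-brace substitution could be pictured like this:

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Simplified model of token expansion: anything between curly braces is
// looked up in a dictionary and replaced; unknown tokens are left alone.
static class TokenModel {
    public static string Expand(
        string template, IDictionary<string, string> tokens) {
        return Regex.Replace(template, @"\{([^}]+)\}", m => {
            string value;
            return tokens.TryGetValue(m.Groups[1].Value, out value)
                ? value : m.Value;
        });
    }
}
```

With a dictionary mapping "Site.SiteName" to your site's name, the subject pattern above would expand to "A new comment was published on " followed by that name.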

The last thing we have to do (and this one is easy to forget, believe me) is to tell the workflow engine what activity should start the workflow. In our case, that should be the Content Published activity, so let’s select the activity and click its first button:

[Image: Set the Published activity as the start of the workflow]

The outline and background of the activity should have changed to reflect its new status.

We can now save the workflow and test it.

Now every time someone submits a comment, the site administrator will receive an e-mail looking like this:

[Image: The notification e-mail]

Note that for this to work, you need to have properly configured e-mail settings for Orchard. Doing so is out of the scope of this post.

With this, you can always keep an eye on your users’ activity on your site. This example also demonstrates some usage of workflows and of tokens, two must-know features of Orchard. I hope this helps.

UPDATE: it is a good move to add a delay activity between the Content Published activity and the Send Email activity, even if it's short, so that the e-mail sending doesn't happen on the main thread and doesn't block. Thanks to Zoltan for the tip.

How Orchard deals with dependency licenses

Managing dependencies in any project presents challenges, but open source adds its own constraints. In the wake of the release by Microsoft of new and useful libraries that are unfortunately encumbered by unacceptable licensing restrictions, I thought it would be a good time to summarize how we do things here in the Orchard project, both to get feedback and to give ideas to others facing similar challenges.

Early in the development of your project, you’ll have to choose a license. There are roughly four typical categories of licenses, and then some minor variations:

  • Proprietary: no derivative work or redistribution is allowed without authorization. An example of that is Microsoft Office or Windows EULAs.
  • Dual open-source and commercial: usage is allowed in open source projects, but commercial applications fall under a proprietary license. An example of that is RavenDB.
  • Copyleft: derivative work and redistribution are allowed and encouraged, but only under a similarly copyleft license. An example of that is GNU/Linux. The restrictions explain the controversial claim by Ballmer that the GPL is “viral”: if you choose to use a GPL project, any derivative work must be done under the GPL or an equivalent. Copyleft licenses are often used as the open-source half of a dual license, as they are efficient at forcing commercial applications to not use the free version, lest they become free themselves.
  • Permissive: derivative work and redistribution is allowed and encouraged, even for commercial projects. An example of this is Orchard.

Orchard is under BSD, which is a permissive license. This means that you may use Orchard in pretty much any way you want, including to build commercial proprietary platforms. We wanted everyone, except perhaps the firms most obtusely opposed to open source, to be able to use our platform in any possible way, without restrictions.

The main consequence of our license choice is that downstream restrictions are non-existent. Upstream restrictions do exist, however. We’ve basically taken the constraints on ourselves so that you don’t have any as an Orchard user. It’s a very user-friendly strategy.

By upstream restrictions, I mean that if we want the end product to be so permissive, we can’t include any components that would force a change in the final licensing terms. Specifically, we can’t include anything that isn’t under a permissive license, because only permissive licenses lack any “viral” characteristic.

In particular, we rejected RavenDB as a possible data store because of its dual license. It would be no problem at all for us to include it, as we are open-source, but downstream commercial users would then be forced to buy a license.

Similarly, we rejected and replaced several GPL-licensed components because they would have imposed unacceptable constraints on our commercial users, in addition to forcing us into a copyleft license.

Proprietary could be an option, if free redistribution is allowed, even without the source code, because we take dependencies as binaries and are not particularly ideologically driven. That has not been the case so far, however, as proprietary software usually comes with many caveats.

The choice of a license for an open-source project is an important step, not just because it will affect your users, but also, and this is often overlooked, because it will affect the range of acceptable choices of dependencies. Choose wisely, knowing if you want to impose the necessary constraints on yourself or on your users.

Effortlessly resize images in Orchard 1.7

I’ve written several times about image resizing in .NET, but never in the context of Orchard. With the imminent release of Orchard 1.7, it’s time to correct this. The new version comes with an extensible media pipeline that enables you to define complex image processing workflows that can automatically resize, change formats or apply watermarks. This is not the subject of this post however. What I want to show here is one of the underlying APIs that enable that feature, and that comes in the form of a new shape.

Once you have enabled the media processing feature, a new ResizeMediaUrl shape becomes available from your views. All you have to do is feed it a virtual path and size (and, if you need to override defaults, a few other optional parameters), and it will do all the work for you of creating a unique URL for the resized image, and write that image to disk the first time the shape is rendered:

<img src="@Display.ResizeMediaUrl(Path: img, Width: 59)"/>

Notice how I only specified a maximum width. The height could of course be specified, but in this case will be automatically determined so that the aspect ratio is preserved.

The second time the shape is rendered, it will notice that the resized file already exists on disk, and it will serve that directly, so caching is handled automatically and the image can be served almost as fast as the original static one, because it is also a static image. Only the URL generation and the check for the file’s existence take time.

Here is what the generated thumbnails look like on disk:

[Image: The resized images on disk]

In the case of those product images, the product page will download 12kB worth of images instead of 1.87MB. The full-size images will only be downloaded as needed, if the user clicks on one of the thumbnails to get the full-scale version.

This is an extremely useful tool to use in your themes to easily render images of the exact right size and thus limit your bandwidth consumption. Mobile users will thank you for that.

Testing Orchard drivers

If you’ve ever tried to test Orchard part drivers, you may have been blocked by the fact that the methods on drivers are protected. That, fortunately, doesn’t mean they are untestable. Those methods are still accessible through explicit interface implementations. In particular, drivers implement IContentPartDriver, which is defined as follows:

public interface IContentPartDriver : IDependency {
    DriverResult BuildDisplay(BuildDisplayContext context);
    DriverResult BuildEditor(BuildEditorContext context);
    DriverResult UpdateEditor(UpdateEditorContext context);
    void Importing(ImportContentContext context);
    void Imported(ImportContentContext context);
    void Exporting(ExportContentContext context);
    void Exported(ExportContentContext context);
    IEnumerable<ContentPartInfo> GetPartInfo();
    void GetContentItemMetadata(GetContentItemMetadataContext context);
}

By casting your driver to this interface, you get public access to these methods.

For example, here is some code I wrote recently to test the import and export methods of a driver:

[Test]
public void ImportGetAllDefinedProperties() {
    var doc = XElement.Parse(@"
<data>
<UspsShippingMethodPart
    Name=""Foo""
    Size=""L""
    WidthInInches=""10""
    LengthInInches=""11""
    HeightInInches=""12""
    MaximumWeightInOunces=""1.3""
    Priority=""14""
    International=""true""
    RegisteredMail=""true""
    Insurance=""false""
    ReturnReceipt=""true""
    CertificateOfMailing=""true""
    ElectronicConfirmation=""true""/>
</data>
");
    var driver = new UspsShippingMethodPartDriver(null)
        as IContentPartDriver;
    var part = new UspsShippingMethodPart();
    Helpers.PreparePart<UspsShippingMethodPart, UspsShippingMethodPartRecord>(
        part, "UspsShippingMethod");
    var context = new ImportContentContext(
        part.ContentItem, doc, new ImportContentSession(null));
    driver.Importing(context);

    Assert.That(part.Name, Is.EqualTo("Foo"));
    Assert.That(part.Size, Is.EqualTo("L"));
    Assert.That(part.WidthInInches, Is.EqualTo(10));
    Assert.That(part.LengthInInches, Is.EqualTo(11));
    Assert.That(part.HeightInInches, Is.EqualTo(12));
    Assert.That(part.MaximumWeightInOunces, Is.EqualTo(1.3));
    Assert.That(part.Priority, Is.EqualTo(14));
    Assert.That(part.International, Is.True);
    Assert.That(part.RegisteredMail, Is.True);
    Assert.That(part.Insurance, Is.False);
    Assert.That(part.ReturnReceipt, Is.True);
    Assert.That(part.CertificateOfMailing, Is.True);
    Assert.That(part.ElectronicConfirmation, Is.True);
}

[Test]
public void ExportSetsAllAttributes() {
    var driver = new UspsShippingMethodPartDriver(null)
        as IContentPartDriver;
    var part = new UspsShippingMethodPart();
    Helpers.PreparePart<UspsShippingMethodPart, UspsShippingMethodPartRecord>(
        part, "UspsShippingMethod");
    part.Name = "Foo";
    part.Size = "L";
    part.WidthInInches = 10;
    part.LengthInInches = 11;
    part.HeightInInches = 12;
    part.MaximumWeightInOunces = 1.3;
    part.Priority = 14;
    part.International = true;
    part.RegisteredMail = true;
    part.Insurance = false;
    part.ReturnReceipt = true;
    part.CertificateOfMailing = true;
    part.ElectronicConfirmation = true;

    var doc = new XElement("data");
    var context = new ExportContentContext(part.ContentItem, doc);
    driver.Exporting(context);
    var el = doc.Element("UspsShippingMethodPart");

    Assert.That(el, Is.Not.Null);
    Assert.That(el.Attr("Name"), Is.EqualTo("Foo"));
    Assert.That(el.Attr("Size"), Is.EqualTo("L"));
    Assert.That(el.Attr("WidthInInches"), Is.EqualTo("10"));
    Assert.That(el.Attr("LengthInInches"), Is.EqualTo("11"));
    Assert.That(el.Attr("HeightInInches"), Is.EqualTo("12"));
    Assert.That(el.Attr("MaximumWeightInOunces"), Is.EqualTo("1.3"));
    Assert.That(el.Attr("Priority"), Is.EqualTo("14"));
    Assert.That(el.Attr("International"), Is.EqualTo("true"));
    Assert.That(el.Attr("RegisteredMail"), Is.EqualTo("true"));
    Assert.That(el.Attr("Insurance"), Is.EqualTo("false"));
    Assert.That(el.Attr("ReturnReceipt"), Is.EqualTo("true"));
    Assert.That(el.Attr("CertificateOfMailing"), Is.EqualTo("true"));
    Assert.That(el.Attr("ElectronicConfirmation"), Is.EqualTo("true"));
}

The Attr method, in case you're wondering, is an extension method I blogged about yesterday.

The Helper class that I’m using here massages a fake part to behave like a real part. It gives the part a fake record, and adds a fake content item around it. It might not be enough in all situations, but it does make the fake convincing enough in this case.

public static ContentItem PreparePart<TPart, TRecord>(
    TPart part, string contentType, int id = -1)
    where TPart: ContentPart<TRecord>
    where TRecord: ContentPartRecord, new() {

    part.Record = new TRecord();
    var contentItem = part.ContentItem = new ContentItem
    {
        VersionRecord = new ContentItemVersionRecord
        {
            ContentItemRecord = new ContentItemRecord()
        },
        ContentType = contentType
    };
    contentItem.Record.Id = id;
    contentItem.Weld(part);
    return contentItem;
}

A C# helper to read and write XML from and to objects

I really like jQuery’s pattern of attribute getters and setters. They are fluent and work really well with HTML and XML DOMs. If you specify a value in addition to the name, it’s setting; otherwise it’s getting. In C#, we have an OK API for XML, XElement, but it’s not as easy to use as jQuery’s attr methods. It is also missing the flexibility of JavaScript with regard to parameter types. To recreate the simplicity of attr in C#, I built a set of extension methods for the most common simple types:

var el = new XElement("node");
el.Attr("foo", "bar")
  .Attr("baz", 42)
  .Attr("really", true);
var answer = el.Attr("baz");

The element built by this code looks like this:

<node foo="bar" baz="42" really="true"/>

And the answer variable will contain “42”.
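Filling in one plausible implementation (a sketch, not necessarily the exact code behind these methods; it covers only the overloads used above): the setters return the element to allow chaining, and the getter returns the attribute's string value, or null if it is absent.

```csharp
using System.Xml.Linq;

// jQuery-style attribute accessors for XElement: three-argument overloads
// set the attribute and return the element for chaining; the two-argument
// overload gets the attribute's string value.
public static class XmlExtensions {
    public static XElement Attr(this XElement el, string name, string value) {
        el.SetAttributeValue(name, value);
        return el;
    }

    public static XElement Attr(this XElement el, string name, int value) {
        el.SetAttributeValue(name, value);  // serialized as "42" for 42
        return el;
    }

    public static XElement Attr(this XElement el, string name, bool value) {
        el.SetAttributeValue(name, value ? "true" : "false");
        return el;
    }

    public static string Attr(this XElement el, string name) {
        var attribute = el.Attribute(name);
        return attribute == null ? null : attribute.Value;
    }
}
```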

Even with this API, there is still a fair amount of repetition in code that reads and writes XML from and to objects. You could rely on serialization in those cases, of course, but when you need a little more control, and the types are not necessarily serializable, or if you just want to do it manually, you need something more. This is why I also built ToAttr and FromAttr. Both are extension methods that take an object and an expression for what property of the object to get or set from XML attributes. The methods will infer the type and name from the property.

This is especially useful when writing the import and export methods in an Orchard part driver:

protected override void Importing(
    UspsShippingMethodPart part,
    ImportContentContext context) {

    var el = context.Data.Element(typeof(UspsShippingMethodPart).Name);
    if (el == null) return;
    el.FromAttr(part, p => p.Name)
      .FromAttr(part, p => p.Size)
      .FromAttr(part, p => p.WidthInInches)
      .FromAttr(part, p => p.LengthInInches)
      .FromAttr(part, p => p.HeightInInches)
      .FromAttr(part, p => p.MaximumWeightInOunces)
      .FromAttr(part, p => p.Priority)
      .FromAttr(part, p => p.International)
      .FromAttr(part, p => p.RegisteredMail)
      .FromAttr(part, p => p.Insurance)
      .FromAttr(part, p => p.ReturnReceipt)
      .FromAttr(part, p => p.CertificateOfMailing)
      .FromAttr(part, p => p.ElectronicConfirmation);
}

protected override void Exporting(
    UspsShippingMethodPart part, ExportContentContext context) {

    context.Element(typeof(UspsShippingMethodPart).Name)
        .ToAttr(part, p => p.Name)
        .ToAttr(part, p => p.Size)
        .ToAttr(part, p => p.WidthInInches)
        .ToAttr(part, p => p.LengthInInches)
        .ToAttr(part, p => p.HeightInInches)
        .ToAttr(part, p => p.MaximumWeightInOunces)
        .ToAttr(part, p => p.Priority)
        .ToAttr(part, p => p.International)
        .ToAttr(part, p => p.RegisteredMail)
        .ToAttr(part, p => p.Insurance)
        .ToAttr(part, p => p.ReturnReceipt)
        .ToAttr(part, p => p.CertificateOfMailing)
        .ToAttr(part, p => p.ElectronicConfirmation);
}

There is no need to specify attribute names or types here: everything is inferred from the expressions. Both methods manipulate XML that looks like this:

<UspsShippingMethodPart
        Name="Foo"
        Size="L"
        WidthInInches="10"
        LengthInInches="11"
        HeightInInches="12"
        MaximumWeightInOunces="1.3"
        Priority="14"
        International="true"
        RegisteredMail="true"
        Insurance="false"
        ReturnReceipt="true"
        CertificateOfMailing="true"
        ElectronicConfirmation="true"/>

UPDATE: You may notice that there is still quite a bit of repetition of the part parameter in the import/export code above. In order to remove this repetition, I’ve added a small class that aggregates the XML element and the object being imported or exported, and exposes simpler ToAttr and FromAttr methods. With this new helper class, we can rewrite the driver’s import/export code to be even more concise:
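The helper can be sketched along these lines (hypothetical names and a simplified shape, assuming the two-argument FromAttr/ToAttr extension methods described earlier; the real class is in the gist linked below):

```csharp
using System;
using System.Linq.Expressions;
using System.Xml.Linq;

// Captures the element and the target object once, so the lambdas
// no longer need to repeat the part parameter on every call.
public class XmlMappingContext<TTarget> {
    private readonly XElement _el;
    private readonly TTarget _target;

    public XmlMappingContext(XElement el, TTarget target) {
        _el = el;
        _target = target;
    }

    public XmlMappingContext<TTarget> FromAttr<TProperty>(
        Expression<Func<TTarget, TProperty>> expression) {
        _el.FromAttr(_target, expression); // delegate to the two-argument version
        return this;
    }

    public XmlMappingContext<TTarget> ToAttr<TProperty>(
        Expression<Func<TTarget, TProperty>> expression) {
        _el.ToAttr(_target, expression);
        return this;
    }
}

public static class XmlMappingContextExtensions {
    public static XmlMappingContext<TTarget> With<TTarget>(
        this XElement el, TTarget target) {
        return new XmlMappingContext<TTarget>(el, target);
    }
}
```

All the heavy lifting still happens in the extension methods; the wrapper only carries the two objects through the chain.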

protected override void Importing(
    UspsShippingMethodPart part,
    ImportContentContext context) {

    var el = context.Data.Element(typeof(UspsShippingMethodPart).Name);
    if (el == null) return;
    el.With(part)
      .FromAttr(p => p.Name)
      .FromAttr(p => p.Size)
      .FromAttr(p => p.WidthInInches)
      .FromAttr(p => p.LengthInInches)
      .FromAttr(p => p.HeightInInches)
      .FromAttr(p => p.MaximumWeightInOunces)
      .FromAttr(p => p.Priority)
      .FromAttr(p => p.International)
      .FromAttr(p => p.RegisteredMail)
      .FromAttr(p => p.Insurance)
      .FromAttr(p => p.ReturnReceipt)
      .FromAttr(p => p.CertificateOfMailing)
      .FromAttr(p => p.ElectronicConfirmation);
}

protected override void Exporting(
    UspsShippingMethodPart part,
    ExportContentContext context) {

    context.Element(typeof(UspsShippingMethodPart).Name)
        .With(part)
        .ToAttr(p => p.Name)
        .ToAttr(p => p.Size)
        .ToAttr(p => p.WidthInInches)
        .ToAttr(p => p.LengthInInches)
        .ToAttr(p => p.HeightInInches)
        .ToAttr(p => p.MaximumWeightInOunces)
        .ToAttr(p => p.Priority)
        .ToAttr(p => p.International)
        .ToAttr(p => p.RegisteredMail)
        .ToAttr(p => p.Insurance)
        .ToAttr(p => p.ReturnReceipt)
        .ToAttr(p => p.CertificateOfMailing)
        .ToAttr(p => p.ElectronicConfirmation);
}

You can find the code for this helper class here:
https://gist.github.com/bleroy/5384405

And I have a small test suite for the whole thing here:
https://gist.github.com/bleroy/5385284

Getting your Raspberry Pi to output the right resolution

I was setting up a new Raspberry Pi under Raspbian on a Samsung monitor the other day. If you don’t do anything, Raspbian and the Pi will attempt to detect the modes supported by your monitor and pick the one that seems best. Sometimes it gets that very wrong. In those cases, you’ll need to find the right mode and set it up yourself.

It took me quite a few attempts before I succeeded, mostly because I was misled by misinformed forum and blog posts. The right post, the one that has all the correct info, does exist however: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=26&t=5851. Let me distil it into a short set of instructions, in case you don’t want to dive in and assimilate all that information. Here is what worked for me…

From the command line, logged in as root…

1. Get the list of what’s supported by your monitor:

tvservice -d edid
edidparser edid

2. Find the mode that you want in the resulting list (for me it was “HDMI:EDID DMT mode (82) 1920x1080p @ 60 Hz with pixel clock 148 MHz” with a score of 124,416, which wasn’t the highest, explaining why I had a lower resolution by default). The mode number is the one between parentheses: 82.

3. Edit the config file:

nano /boot/config.txt

Find the section about HDMI, uncomment it and set the right group and mode from step 2. If your mode description contains “DMT”, the group should be 2, and if it contains “CEA”, it should be 1, so for me that was:

hdmi_group=2
hdmi_mode=82

Exit the editor with CTRL+X, followed by Y.

4. Reboot:

shutdown -r now
