Tales from the Evil Empire

Bertrand Le Roy's blog


September 2010 - Posts

0x81000037, 0x80070002 and how I finally convinced Windows Backup to do its job

When trying to back up my machine onto a USB drive a few weeks ago, I started getting a very unhelpful 0x81000037 error. Of course, the first thing I did was to bing it, but I didn’t like what I found. There is a “How to troubleshoot Windows Backup and Restore issues when a reparse point folder or its subfolder is added to a user library in Windows 7” KB article that unfortunately does not live up to its title. It does some hand-waving around “reparse points” but does not even bother to explain what a reparse point is, let alone how to discover and remove them.

Other links I found were from distressed users hitting the problem with no clue how to solve it. Responses from support have been as unhelpful and jargon-filled as the KB article. The poor users all seem to have resorted to more or less extreme solutions, ranging from using a third-party backup system to reformatting the machine.

I did not want to do either of those so I spent a disproportionate amount of time troubleshooting this and trying to come up with a solution that worked for me. Hopefully this will help others.

But first things first. What the hell is a reparse point?

Reparse points are to folders, roughly speaking, what shortcuts are to files. A reparse point is an alias for a folder, created so that you can access that folder coming from different places. Technically it allows a few more elaborate scenarios, but that’s the idea. For example, if you open Windows Explorer and go to Favorites, everything you see is reparse points:

[Screenshot: the Favorites reparse points]

If you open any of those folders, you are going to get transported to a completely different part of your hard drive: the real folder that the reparse point points to.

For example, if I open “Downloads”, I end up in “C:\Users\bleroy\Downloads”, which is not under anything resembling “Favorites” (note: the Windows Explorer favorites are different from the IE favorites). Libraries are a similar catalog of reparse points. Reparse points really are everywhere under your user data folder.
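To make the “alias for a folder” idea concrete, here is a small Python sketch. It uses a directory symbolic link as a stand-in, since junctions (the kind of reparse point you see under Favorites) are Windows-only; the folder names are made up for the example:

```python
import os
import tempfile

base = tempfile.mkdtemp()

# A real folder with a file in it.
real = os.path.join(base, "real")
os.mkdir(real)
with open(os.path.join(real, "hello.txt"), "w") as f:
    f.write("hi")

# An alias for that folder: opening it transports you to the real one.
alias = os.path.join(base, "alias")
os.symlink(real, alias, target_is_directory=True)

print(os.listdir(alias))  # -> ['hello.txt'], the real folder's contents
```

Listing the alias shows the contents of the real folder, which is exactly what Explorer does when you open one of those Favorites entries.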

So, now that we understand what reparse points are, we need a way to find them before we can remove them. If you got this error, you know that there are reparse points under some of the folders in your current backup plan.

In order to know what is in your current backup plan, go to the Backup and Restore control panel (Control Panel\System and Security\Backup and Restore) and click “Change settings”. Click “Next” until you reach the screen asking what you want to back up. Write down all the top folders that are checked, then click “Cancel”.

Now you’ll want to open a command line (hit the Windows key, type cmd and press Enter). Then cd into each of those directories and type the following command:

dir /A:L /S *

This is something I found completely fortuitously while digging into the help topics of console commands: the /A:L attribute filter restricts the listing to reparse points, and /S recurses into subdirectories. MSDN was of zero help here and only gave me commands to manage reparse points if you already know where they are, but nothing to actually find them.
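If you would rather script the hunt than use the console, the same idea can be approximated in Python. Note that os.path.islink only catches symbolic links (junctions need os.path.isjunction, which was only added in Python 3.12), so treat this as a sketch rather than a complete reparse-point scanner:

```python
import os

def find_links(root):
    """Walk root and return the paths of symbolic links,
    one kind of reparse point (junctions need extra handling)."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                found.append(path)
    return found
```

On Windows, dir /A:L /S remains the more complete option since it also reports junctions.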

The result should look something like this, with each reparse point appearing as a <JUNCTION>, a junction being one kind of reparse point (see the comments section for more info):

[Screenshot: listing all reparse points]

Well, actually, that is the result on my box once I had cleaned up, because in the end, cleaning up my drive was what put backup back on track.

Here are a few things I did:

  • Remove all the reparse points I found under folders I absolutely had to back up, or at least exclude them from the backup plan.
  • Remove from the backup plan folders that I didn’t really need backed up.
  • Clean up old files that I didn’t need any more.
  • Move and reorganize things around so that my file organization is more rational and makes it easier to actually find stuff.
  • Be careful not to remove reparse points that might still be needed by the system or by applications I use.
  • Simplify, simplify, simplify.

After each step, I tried to back up again. In the course of doing that, I got another error, 0x80070002. Support claims this one comes from user profiles lacking a user profile path. That was not the case for me, and I managed to get rid of the error by doing more cleanup of my file system.

In the end, once my drive was thoroughly cleaned of about 40GB of junk, I was able to achieve a somewhat successful backup:

[Screenshot: backup victory]

The only remaining error here was benign: I was still on my old backup plan, which pointed at some of the stuff I had deleted.

The list of files is easily obtained by going into options and then clicking on “View skipped files”:

[Screenshot: view skipped files]

Going through the backup wizard again enabled me to get rid of those errors, and I’m happy to say that my backup panel now looks like this:

[Screenshot: backup complete victory]

I sincerely hope this helps others get their backup working without them having to fumble for weeks like I did.

Please read if you have public ASP.NET sites

Yesterday, a new crypto oracle-type vulnerability was publicly disclosed. It is an important vulnerability that is likely to be exploitable on a large proportion of ASP.NET sites, even those that are using configuration settings that were previously considered safe.

There is a workaround available already that should be set up right now. You should pay a lot of attention to this and apply the workaround without trying to simplify it, as that may result in your sites still being vulnerable. The issue is rather subtle (like pretty much all oracle attacks are).

Scott published a blog post with all the details that I will not attempt to reproduce here in order to minimize any chance of confusion.

Please go to Scott’s post, read it and do what you have to do.

It’s always a bummer when that sort of thing happens but now is the time to take action so that your sites don’t fall to an automated or manual attack in the next few days.


UPDATE: Scott published a FAQ on this issue:

Building my new blog with Orchard – part 2: importing old contents

[Illustration: building the new house]

In the previous post, I installed Orchard onto my hosted IIS7 instance and created the “about” page.

This time, I’m going to show how I imported existing contents into Orchard.

For my new blog, I didn’t want to start with a completely empty site and a lame “first post” entry. I did already have quite a few posts here and on Facebook that fit the spirit I wanted for the new blog so I decided to use that to seed it.

The science and opinion posts on Tales from the Evil Empire always seemed a little out of place (as some of my readers told me quite plainly), and the Facebook posts were locked behind Facebook’s silo walls even though they were public. You still need a Facebook account to read those posts, and search engines can’t go there as far as I know.

There is a BlogML import feature built by Nick Mayne that’s available from the Module gallery (right from the application as shown in the screenshot below), and I absolutely recommend you use it if you just need to import from an existing blog:

[Screenshot: the BlogML module]

But well, I didn’t use Nick’s module. I wanted to play with commands, and I wanted to see what it would take to do a content import in the worst possible conditions or something close to that. Hopefully it can serve as a sample for other batch imports from all kinds of crazy sources.

Importing from Facebook was the most challenging because of the need to log in before fetching the content. I chose not to even try to mess with APIs and to do screen scraping instead, which is not the cleanest approach but has the advantage of always being an option.

I also wanted to import post by post rather than in a big batch, to have really fine-grained control over what I bring over to the new place: I’m not moving, I’m splitting.

Before we dive into the code, I had to make a change in my local development machine. The production setup remains exactly what I described last time, but for my local dev efforts, the basic 0.5 release setup as downloaded from CodePlex is a little limiting.

To be clear, there is nothing you can’t do in Orchard with Notepad and IIS/WebMatrix, but if you have Visual Studio 2010, you’re going to feel a lot more comfortable.

The changes I applied were to clone the CodePlex repository locally, run the site, go through the setup process, and then copy the Orchard.sdf file from my production site into the local copy, under App_Data/Sites/Default. This way, I got a copy of the production site, data included, but could run, develop and debug from Visual Studio.

The first thing I needed to do was to create my own module. To do this, I like to use scaffolding from the command line. I opened a Windows command line, pointed it at the root of the local web site (src/Orchard.Web from the root of my enlistment) and typed “bin\orchard” to launch the Orchard command line.

After enabling the scaffolding module (“feature enable Scaffolding”), I was able to use it to create the module: “scaffolding create module BlogPost.Import.Commands /IncludeInSolution:true”. The IncludeInSolution flag added the new module project to my local Orchard solution, which is really nice.

That done, I switched to Visual Studio, which prompted me to reload the solution because of the new project. After a few seconds, I was able to edit the manifest:

name: BlogPost.Import.Commands
antiforgery: enabled
author: Bertrand Le Roy
website: http://vulu.net
version: 0.5.0
orchardversion: 0.5.0
description: Import external blog posts
features:
    BlogPost.Import.Commands:
        Description: Import commands
        Category: Content Publishing
        Dependencies: Orchard.Blogs

The only thing I wanted in my module was the set of new commands, which I implemented in a new BlogPostImportCommands.cs file at the root of the new module.

As I said, I’m extracting the data through screen scraping. As I’m not insane, I’m using a library for that: the HTML Agility Pack, a .NET HTML parser that comes with XPath support. Point it at a remote document and query all you like.

In the class that will contain the commands, I needed a few dependencies injected:

private readonly IContentManager _contentManager;
private readonly IMembershipService _membershipService;
private readonly IBlogService _blogService;
private readonly ITagService _tagService;
private readonly ICommentService _commentService;
protected virtual ISite CurrentSite { get; [UsedImplicitly] private set; }

The content manager and the blog, tag, and comment services will enable us to query and add to the site’s content. The membership service will enable us to get hold of user accounts, and the ISite will give us global settings such as the name of the super user.

Our import commands are going to need a URL and, in the case of the Facebook import, a login and a password:

public string Url { get; set; }

public string Owner { get; set; }

public string Login { get; set; }

public string Password { get; set; }

We also have an Owner switch, but that one is optional: the command will use the super user if it is not specified:

var admin = _membershipService.GetUser(
    String.IsNullOrEmpty(Owner) ?
        CurrentSite.SuperUser : Owner);

The commands themselves must be marked with attributes that identify them as commands, provide a command name and help text, and optionally specify what switches they understand:

[CommandName("blogpost import facebook")]
[CommandHelp("blogpost import facebook /Url:<url> /Login:<email> /Password:<password>\r\n\tImports a remote FaceBook note, including comments")]
[OrchardSwitches("Url,Login,Password")]
public string ImportFaceBook() {

When importing from Facebook, the first thing we want to do is authenticate ourselves and store the result of that operation in a cookie container that we can then reuse for the subsequent requests that fetch the actual note contents:

var html = new HtmlDocument();
var cookieContainer = new CookieContainer();
var request = (HttpWebRequest)WebRequest
    .Create("https://m.facebook.com/login.php");
request.Method = "POST";
request.CookieContainer = cookieContainer;
request.ContentType = "text/html";
var postData = new UTF8Encoding()
    .GetBytes("email=" + Login + "&pass=" + Password);
request.ContentLength = postData.Length;
request.GetRequestStream()
    .Write(postData, 0, postData.Length);
request.GetResponse();

We can now fetch the actual content:

request = (HttpWebRequest)WebRequest.Create(Url);
request.CookieContainer = cookieContainer;
html.LoadHtml(
    new StreamReader(
        request.GetResponse().GetResponseStream())
        .ReadToEnd());

Note that we are using the mobile version of the site here, just because its markup is usually considerably simpler and thus easier to query.

We can now create an Orchard blog post:

var post = _contentManager.New("BlogPost");
post.As<ICommonPart>().Owner = admin;
post.As<ICommonPart>().Container = blog;

And then we can start setting properties from what we find in the HTML DOM:

var postText = html.DocumentNode
    .InnerHtml;
post.As<BodyPart>().Text = postText;

Tags and comments are a little more challenging as they are lists. On Facebook, we can select all comment nodes:

var commentNodes = html.DocumentNode
    .SelectNodes("..."); // the comment-node XPath is elided in this excerpt

Then we can treat each comment node as a mini-DOM and create a comment for each:

var commentContext = new CreateCommentContext {
    CommentedOn = post.Id
};
var commentText = commentNode
    .InnerText; // extraction of the comment text is elided in this excerpt
commentContext.CommentText = commentText;

This is a little more complicated than it should be, and eventually the need to go through a comment manager and an intermediary descriptor structure will go away.
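Outside of .NET, the “select the comment nodes, then query each one as a mini-DOM” pattern looks much the same in any XPath-ish API. Here is an illustrative Python sketch using ElementTree on a made-up, well-formed snippet (the real code uses the HTML Agility Pack in C#, and the class names here are invented for the example):

```python
import xml.etree.ElementTree as ET

page = """
<html><body>
  <div class="comment"><span class="author">Alice</span>Nice post!</div>
  <div class="comment"><span class="author">Bob</span>Thanks for sharing.</div>
</body></html>
"""

root = ET.fromstring(page)
# Each comment node acts as a mini-DOM that we can query further:
# the author lives in a child span, the text follows it.
comments = [
    (node.find("span").text, (node.find("span").tail or "").strip())
    for node in root.iter("div")
    if node.get("class") == "comment"
]
print(comments)  # -> [('Alice', 'Nice post!'), ('Bob', 'Thanks for sharing.')]
```

Real Facebook markup is of course messier than this, which is why a forgiving HTML parser is needed in the actual import code.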

Once the commands were done and working on my dev box (note that I had to enable the module by doing “feature enable BlogPost.Import.Commands” from the command line), I brought the database down from the production server and performed imports from the local command line (e.g. “blogpost import weblogs.asp.net /Url:http://weblogs.asp.net/bleroy/archive/2010/06/01/when-failure-is-a-feature.aspx”). I could have used the web version of the command-line directly on the production server but I didn’t because my confidence in the process was not 100% and I expected to encounter little quirks that would require fiddling with the code and the database on the fly. I was right.

Comments presented another challenge: in Orchard they are plain text, but what I had was full of links and HTML escape sequences. I decided to lose the links on old comments, but the escape sequences were really hurting readability, so I built a separate command to sanitize comment text throughout the Orchard content database.
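The sanitizing command itself isn’t shown in the post, but the core transformation is simple. Here is a rough Python sketch of what such a pass might boil down to (strip the tags, then decode the HTML escape sequences); the function name is mine:

```python
import html
import re

def sanitize_comment(text):
    """Reduce HTML-ish comment text to plain text: drop tags, decode entities."""
    no_tags = re.sub(r"<[^>]+>", "", text)   # lose links and other markup
    return html.unescape(no_tags)            # &amp; -> &, &quot; -> ", etc.

print(sanitize_comment('Great <a href="http://example.com">post</a> &amp; thanks!'))
# -> Great post & thanks!
```

The order matters: stripping tags first means any angle brackets produced by entity decoding survive as literal text instead of being mistaken for markup.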

So there you have it: this is the current state of my new site. I did some additional work after all that which is not exactly technical: I am currently in the process of replacing all photos and illustrations with black-and-white engravings and drawings. When I’m done with the design work, I want the site to be black on white, resembling an old book as much as possible. I’m using lots of old illustrations such as those you can find on http://www.oldbookillustrations.com/ as well as in nineteenth-century dictionaries and science books.

[Screenshot: what the site looks like so far]

The code for the import commands can be downloaded from the following link:

Part 1 of this series can be found here:
