July 2006 - Posts - Jon Galloway

[SQL] Scripting backup and restore all databases on a server (Part 2 - Extra Credit)

In the first post of this series, I discussed scripting database restore statements. It seems simple enough, but there's a complication - the restore statement requires the database logical name. The logical name usually follows the default format (a database named Example would have a data logical name of Example_Data and a log logical name of Example_Log), but it doesn't have to. The logical name could be just about anything.

The easiest solution is to change the logical filenames to match the database name. Barring that, we've got a bit of a conundrum on our hands, because there's no one database that holds the logical filenames for all databases.

Normally I'd jump all over the INFORMATION_SCHEMA views. These things are easy to use, they're part of the SQL-92 standard, they've got a great beat, and they're easy to dance to. But these views don't know a thing about the way the data is actually persisted to files. It makes sense when you think about it - SQL-92 is a broad standard written for all kinds of databases and operating systems, which could store the data in all kinds of ways.

So, we'll look in the master database and sort this out, right? Not so fast, Slick! The actual file information in SQL Server isn't stored in the master database, it's stored in each database - in sysfiles, to be exact. No problem if you're only dealing with one database, but tricky if you need to deal with all databases on a server. That's what got me into this mess, remember?
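For any one database, the lookup is trivial - sysfiles has the logical and physical names right there. Here's a quick sketch (Example is a hypothetical database name):

use Example
select rtrim(name) as logical_name, rtrim(filename) as physical_name from sysfiles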

Well, to the rescue comes sp_MSforeachdb, which loops through all databases on a server calling whatever SQL string you feed it. It even subs in the database name if you give it a question mark (?). Maybe we've got a shot at this then...
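If you haven't used sp_MSforeachdb before, here's about the simplest possible demonstration of that question mark substitution - it just prints the name of each database on the server:

exec sp_MSforeachdb 'print ''?'''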

The following script builds a temp table (#fileinfo) which holds the logical and physical names of both the data and log files for every database on a server. No, this probably won't help that guy with multiple data or log files for a single database (I think his name was Raphael), but he stopped reading back at the first paragraph. For everyone else, this script first builds #fileinfo, then it uses it to generate RESTORE statements.

The obvious use of this script is to - wait for it - restore databases. You'd use the backup script generator I wrote about before, and you'd use the following script to crank out the restore statements. I'd encourage you to take a look at the data returned by "select * from #fileinfo", though. Maybe you can think of something even more exciting to do with a table that holds the logical and physical names of every database file on a server. Please promise to use your powers for good...

create table #fileinfo (
[db] varchar(100),
name varchar(100),
filename varchar(100),
logname varchar(100),
logfilename varchar(100))

exec sp_MSforeachdb
'use ?;
insert into #fileinfo ([db],name,filename) select
''?'',
rtrim(name),
rtrim(filename)
from sysfiles
where status & 0x40 != 0x40'


exec sp_MSforeachdb
'use ?;
update #fileinfo set
logname = rtrim(s.name),
logfilename = rtrim(s.filename)
from sysfiles s
where status & 0x40 = 0x40
and [db] = ''?'''

delete from #fileinfo where db in ('model','master','tempdb','pubs','Northwind','msdb')

select
'restore database ' + quotename(db)
+ ' from disk=''' + db + '.bak'' WITH MOVE '''
+ name + ''' TO ''' + filename + ''', MOVE '''
+ logname +''' TO ''' + logfilename + ''''
from #fileinfo

--select * from #fileinfo

drop table #fileinfo
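For a hypothetical database named Example with default logical names, the generated statement would look something like this (your physical paths will vary):

restore database [Example] from disk='Example.bak' WITH MOVE 'Example_Data' TO 'D:\Data\Example_Data.MDF', MOVE 'Example_Log' TO 'D:\Data\Example_Log.LDF'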

 

Posted by Jon Galloway | 2 comment(s)

[SQL] Scripting backup and restore all databases on a server (Part 1 - Simple Case)

We just migrated a group of production sites from one hosting environment to another. The new environment has staging and production servers, so we really completed two migrations. There were a lot of databases, and if you've been following my blog at all you probably know that I love to script repetitive tasks - not (only) from laziness, but from a desire to avoid typographical errors in repetitive manual work.

First, I backed up all databases on the old server. I ran this script, which generated a DOS batch file:

DECLARE @BACKUP_DIRECTORY varchar(100)
SET @BACKUP_DIRECTORY = 'E:\DB_Backups\'

SELECT
'osql -E -d master -Q "BACKUP DATABASE '
+ QUOTENAME(CATALOG_NAME)
+ ' TO DISK = N''' + @BACKUP_DIRECTORY
+ CATALOG_NAME
+ '.bak'' WITH INIT, NOUNLOAD, NAME = N'''
+ CATALOG_NAME
+ 'backup'', NOSKIP , STATS = 10, NOFORMAT"'
FROM INFORMATION_SCHEMA.SCHEMATA
WHERE CATALOG_NAME NOT IN ('master','tempdb','msdb','model','Northwind','pubs')

So I shifted the Query Analyzer output to text mode (ctrl-t), ran the above query, and saved the results to a file - BackupDatabases.bat. I wanted a batch file so I could test the migration up to the cutover day, at which time I'd need to do a final backup of the old sites. Looking good so far.1
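Each line of BackupDatabases.bat wraps a BACKUP statement in an osql call. For a hypothetical database named Example, the T-SQL being executed looks like this:

BACKUP DATABASE [Example] TO DISK = N'E:\DB_Backups\Example.bak' WITH INIT, NOUNLOAD, NAME = N'Examplebackup', NOSKIP, STATS = 10, NOFORMAT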

At this point, I zipped up the backup directory (full of .bak files) and copied it to the new server. Now it's time to restore all those .bak files. The following batch file rips through a directory, restoring every .bak file it finds:

@ECHO OFF
SET DBDIRECTORY=D:\Program Files\Microsoft SQL Server\MSSQL\Data
SET BACKUPDIRECTORY=C:\DB_Backups

PUSHD %BACKUPDIRECTORY%
FOR %%A IN (*.bak) DO CALL :Subroutine %%A
POPD

TITLE Finished restoring database backups
ECHO Finished restoring database backups
PAUSE

GOTO:EOF

:Subroutine
set DBNAME=%~n1

TITLE Restoring %DBNAME% Database
ECHO Restoring %DBNAME% Database

::PUT DATABASE IN SINGLE USER MODE TO ALLOW RESTORE
osql -E -d master -Q "alter database %DBNAME% set single_user with rollback immediate"

::RESTORE DATABASE
osql -E -d master -Q "restore database %DBNAME% from disk='%~dp0%DBNAME%.bak' WITH MOVE '%DBNAME%_Data' TO '%DBDIRECTORY%\%DBNAME%_Data.MDF', MOVE '%DBNAME%_Log' TO '%DBDIRECTORY%\%DBNAME%_Log.LDF'"

::GRANT PERMISSION TO ASPNET USER - UNCOMMENT IF DESIRED
::osql -E -d %DBNAME% -Q "sp_grantdbaccess '%COMPUTERNAME%\ASPNET'"
::osql -E -d %DBNAME% -Q "sp_addrolemember 'db_owner', '%COMPUTERNAME%\ASPNET'"

::RESTORE TO MULTI USER
osql -E -d master -Q "alter database %DBNAME% set multi_user"

GOTO:EOF

Unfortunately, it didn't work for a few of the databases. What had I done wrong? Well, I'd assumed the simple case (hence the title of this post) - I'd assumed that the database logical names matched the database name, so Example.bak would restore to Example_Data (in Example_Data.mdf) and Example_Log (in Example_Log.ldf). That's not always the case, especially if a database has been copied via backup / restore / rename.
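As an aside, if you're not sure what logical names are stored in a backup file, RESTORE FILELISTONLY will list them without actually restoring anything (the path here is hypothetical):

RESTORE FILELISTONLY FROM DISK = 'C:\DB_Backups\Example.bak'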

I'll talk about how to script backup / restore when database logical names don't match the database name next...

1 You can of course set up SQL Server jobs to back up your databases, but there are advantages to having a single batch file which backs up all databases in one go.

Posted by Jon Galloway | 6 comment(s)

[SQL] Change Logical Filenames

A little while ago, we deployed some really great sites built on the DotNetNuke platform. We started with a base DNN install, worked for several months, and ended up with a really nice suite of sites. Along the way, we renamed the databases, but the logical database names remained DotNetNuke_Data and DotNetNuke_Log. We looked into changing the logical names, but came up empty.

The difference between database and logical names was a problem for two reasons:

  1. While it worked well, it's a rough edge on an otherwise highly polished project. Anyone working on the database sees logical filenames that don't match the database name. That has absolutely no impact on how well the application works, but it always bugged me.
  2. It's harder to script backup / restore operations when the database logical names don't match the database name. For example, I previously posted a script to FTP download and restore a MSSQL database backup, and I was frustrated that I needed to specify the database logical name.

Well, it turns out that I just didn't look hard enough. ALTER DATABASE allows you to rename a file using NEWNAME:

USE MASTER
GO
ALTER DATABASE MegaCorp
MODIFY FILE
(NAME = DotNetNuke_Data, NEWNAME='MegaCorp_Data')
GO

ALTER DATABASE MegaCorp
MODIFY FILE
(NAME = DotNetNuke_Log, NEWNAME='MegaCorp_Log')
GO
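To confirm the rename took, you can check the database's own sysfiles table, which is where SQL Server keeps each database's file info:

USE MegaCorp
GO
SELECT name, filename FROM sysfiles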
Posted by Jon Galloway | 4 comment(s)

IE7 Standalone (Updated for IE7 Release)

Update 10/23/06 for the IE7 Release version

Summary

[Screenshot: IE6 and IE7 side by side]

I just released a new version of the IE7 Standalone Launcher. Due to changes in this beta release of IE7, a simple batch file alone won't cut it. You can grab a zip package (two batch files, two reg files) on the new tools.veloc-it.com site.

Finally!

Well, it took me long enough. I've been fighting with IE7 Beta 3 since it was released, armed with FileMon, RegMon, Dependency Walker, and process debugging. This release requires over 1500 interfaces to be added to the registry.

What's New

The previous versions of this standalone launcher were just simple DOS batch scripts that made a couple of registry settings, launched IE7 and waited for it to close, then removed the registry settings. That's no longer practical with the number of registry entries that are now required, so I've broken it out into a launch script and two reg files.

In the past, setting up a directory to run IE7 required a few too many manual steps. I've simplified that with a setup script (also a batch file) which extracts the IE installation and gets things set up for you. Setting up IE7 standalone is really simple now - download and extract my IE7 Standalone package, download the IE7 installation from Microsoft and copy it to the same directory, and run the setup batch file. After that, you can pop up IE7 whenever you want by double clicking the IE7.bat file.

[Screenshot: IE7 directory]

Another change - this time the launcher just waits 15 seconds and closes itself rather than hanging around until you close IE7. IE only reads the keys on startup, so we can clean them up once it's running.

DOS batch scripts? REG files? Is there a simpler way?

Well, yes. Yousif released an IE7 Beta 3 Standalone Installer (IE7s.exe) that downloads the IE7 installer file, does some magic stuff, and up pops IE7 in standalone mode. I tested it out and it worked well for me. I prefer the batch / reg file approach, though, for several reasons:

  • I'd like to make it as easy as possible for expert users to make recommendations and corrections. For instance, Dr. Thomas Meinike previously made a recommendation to update the IE Version Vector to allow testing of conditional comments (like <!--[if lt IE 7.0]-->). Version 1.4 of this script included his recommendation. I'd love to hear suggestions on how to improve these scripts - leave comments on this post or on the download page.
  • DOS batch files and REG files are easy to customize. They're just text files, so you can edit them in notepad. If 15 seconds is too long or short a wait for you, you can change it. If you want to experiment with other settings to enable enhanced IE7 features like RSS Feed support, have at it (and please let me know so I can update the scripts).
  • I try to avoid programs that make bulk changes to my registry but don't tell me what they're doing. It's like a random stranger offering a pill but refusing to say what it is. Random programs could be malicious (malware, spyware, etc.) or just incorrect (bad installers have caused me more headaches than malware over the past few years). In my testing, IE7s worked great, but I'd prefer to know what's going on with my computer when I can, especially when I'm running IE in an unsupported configuration. The one loophole for this rule is .NET EXE's, since I can open them in Reflector to see what they're doing; IE7s is a native EXE, though. I want to be clear that I'm not implying IE7s is doing anything wrong or sneaky - it seems to be a very clean, well written utility, and it's a lot easier to use than what I'm offering.

The usual disclaimers apply.

Lessons from Home Remodeling: One Week Phases

No, I don't buy into The Construction Metaphor

A few times a year, someone will write about how software development is like building construction, and shortly afterwards a bunch of people will write about how software development is very different from building construction, but the two disciplines can inform each other. I'm squarely in the second camp. I think software development is easier in many ways - our only real raw material is time, and we don't suffer from pesky constraints like gravity. I'm always happy to learn from anyone who will teach me, though, and in this case it's a guy who spent decades remodeling homes.

Short iterations all around!

Greg Young just wrote about the impedance mismatch between agile development and the fixed price contract mindset our clients are used to. Businesses like to allocate a chunk of money to a shiny package, but we've realized that good business software solutions usually don't come in shiny packages. We're not alone in this problem, though.

This reminded me of a recent discussion I had with my father in law. Lenny is a very wise man, and I always respect his opinion. This guy's always thinking - he's put his philosophy degree to work. He's done carpentry and construction for decades, and he's an accomplished singer-songwriter-publisher. Both industries (construction and music) require some contract smarts, and Lenny's spent years fine tuning the process of working with others in a way that keeps everyone happy.

I was recently griping to him about the difficulties of keeping clients happy on fixed price web development projects. He just chuckled and told me it's nothing new. He shared several parallels from his custom remodeling work - clients who don't know what they want until they see what you've built, clients who tell you exactly what to build and then are unhappy when you build it, the perils of taking orders from one person and only later finding out that it's the spouse who signs the checks.

Interestingly enough, he shared with me that a lot of the people he knows in the custom remodeling business have moved to one week phases. This has tons of advantages, and while one week may be a bit short for a phase in the software development world, the value of short phases plays well here:

You identify requirements misunderstandings early

In almost 10 years of custom software development experience, I think one of the hardest tasks is to build the software the customer actually wants. Not the software I think they should have, and not the software they think they want, but the software they'll actually use and be happy to pay for. There are a lot of obstacles, but the worst seems to be communication. The only thing that seems to help is short delivery cycles. As with any inherently unstable process, the only thing that can keep us on course is frequent feedback. It's hard - software developers don't like it because we find that the brilliant algorithms we've been crafting have nothing to do with what the clients want, and clients don't like it because they'd prefer to assume we've read their minds and are busy conjuring up the accounting system of their dreams. The problem is that we don't know exactly what they want, and we need frequent checkups to stay on course. Early and often (as Micah often says) is the key to eliminating the impedance mismatch between software developer and software user.

Most software fails because it's a well implemented solution to the wrong problem. We fine tune the aiming process, but we don't spend much time defining the target. Good software is frequently used software, and the only software that gets used is the kind that does what the end users want. Our only hope is to push software as close to the end user as possible, as often as possible. Real feedback is the lifeblood of successful software!

It's hard to get end users to evaluate software. We ask for feedback and they roll their eyes - "Oh, okay, you guys have your little bureaucratic checklists too, huh? Where do I sign so I can get you off my back and get to work? And when is this thing going live again?" The best way to encourage them to do what's in their best interest - evaluate the software at each checkpoint - is to tie it to a payment. People seem to really check things out when they have to pay for them.

You identify cash flow problems early

As Phil often reminds me, cashflow is king in a startup operation. We software developers have put a lot of thought into minimizing the disconnect between what clients say they want and what we build for them. We haven't solved it, but at least we know it's a problem. Those of us who have ventured into the business side of software development know that you can have a happy client and still be waiting around a while for your money. There are a number of possible kinks in the money hose:

  • The client you've been working for has a boss who turns out to be the real client at the last minute. In the remodeling business, this game's called working for the wrong spouse, but it can be trickier in the business world where great-great-grandbosses can suddenly take interest in a project at the last minute.
  • The client doesn't actually have the money. The best credit check is a cancelled check.
  • The client has the money, they just don't put a high priority on paying bills on time.
  • And on and on and on... I'm sure you can think of several more.

You train the client on one of their most important jobs - paying their bills

The small deliverables - with a payment at the end of each phase - help us to train and evaluate the client on one of their most important project related duties - funding development. It's actually easier to (successfully) ask a client for money if it's routine and the checks are small. Instead of waiting for "opening night" to see if we've all got our financial act together, we rehearse it over and over, and then we scale back opening night since we've already delivered just about everything long before the final deliverable. Final signoff is then kind of anti-climactic, since there's little risk of drama, the client's familiar with the product, and they don't really owe very much at that point. The stakes are low, and we can all focus on finishing the job.

You're never more than a week away from a "happy place"

Despite everyone's best intentions, clients may get upset. Maybe they assumed a feature would be included due to some unspoken business rule, or an unexpected technical issue caused the deliverable to crash during a demo. The client is upset: "This whole project is a disaster!"

Small and frequent payments include both implicit (and with software development, usually explicit) acceptance. So you always have something to fall back to: "I understand you're upset. We'll get this fixed up and keep moving. Now, you remember sitting with me just last Tuesday, right? You paid me for the last bit of work, and we both agreed that things were looking good, didn't we? So we've only got at most a week's worth of problems to sort out..."

Client and developer interests are more closely aligned

The goal at the beginning of the project is usually clear to all parties. The client has a business need, wants some software to make life easier, and is happy to pay for it. 25% up front, 25% at midpoint, 50% on completion. Sounds good.

Fast forward several months and the project is stuck in the 90% complete phase. The development team is unhappy because the client keeps complaining. The client's unhappy, but they're not going to pay a huge chunk of money and sign off on this thing until they're happy with it, right? The only thing that often forces the client's hand is a deadline, but that's pretty lousy for all concerned. The development team works like crazy up to the deadline, and the client grudgingly accepts it because they have no choice. Worse, though, is when the deadline keeps getting pushed, and you find yourself in one of Zeno's Projects.

There are two problems in this case, and they both stem from the fact that the contract terms are divisive. In a project where a client pays a chunk of money on completion, the contract pits the client against the development team during that "90% complete" time. The problems here:

  1. The client doesn't really review the work until the last moment, so the development team's been working under the false impression that they've almost completed the software the client wanted. Maybe they were wrong, or maybe the client didn't know exactly what they wanted until they started clicking, but either way the problem is that the development team hasn't built what the client wanted by the time the client gets around to reviewing it. As mentioned above, this is solved by regular review and payment, since it forces regular client review. Now the client has been part of the decision making process early on, which is great - they've had time to correct misunderstandings and fall in love with the product as it is. Any large requirements gaps are obviously partly their responsibility, too. The point is that the client is no longer in an adversarial role in determining requirements gaps as the project enters the "wrapping up" phase.
  2. The client's forced into a position which encourages them to get as much as they can in exchange for a big chunk of money. Once that 50% chunk is paid, the project's done, right? But until then, the development team is held hostage to that final payment, so they may end up having to do what they consider extra work just to finish the project and get paid. The contract has succeeded in breaking up our happy team. Contrast this with the case above - the stakes have been lowered, and everyone keeps focused on the task at hand. The client has much less incentive (or power) to get feature greedy. This is just another weekly iteration, and they've been reviewing the project all along, so major feature requests at this point just make them look foolish. There's no huge ransom for the last little bit of work, just a fair payment for a week's work and the hope of future maintenance work or projects together. The development team feels good knowing that they're about to deploy what the client really wanted, and doesn't get too worked up if the client decides they want a bunch of new features - fine, we'll need to add another two iterations, and we'll be glad to do it for you.

Can it really work in our world?

Well, one week phases are pretty short. Depending on the project, I could see 1 to 3 weeks as a good range. Sam Gentile posted about his recent switch to one week iterations, but did say it's at the low end.

But how about the contract side of it? Our clients still want a big bid for their project - that's how their budgeting process works. They have budgets for office supplies and office parties, and policy says they need a budget for their software projects, too. Well, just as agile teams sometimes have to use a blocker to adapt to byzantine waterfall development policies, we can sometimes benefit from some strategery on the contract side as well. On a recent project for a large client, Micah helped give the client a contract they could understand, but with some solid iteration and re-estimation included. The client has an overall budget estimate, but each iteration starts with a re-estimation for that phase. He explained to the client that in most cases the estimates will decrease, since the early estimate includes some worst case factors which can be eliminated as the product takes shape (see Jeff's post on the Mysterious Cone of Uncertainty). Some clients may benefit from education on just how inaccurate most software estimates are, and how both lowball and high estimates ultimately hurt them.

What do you think?

[SQL] Some of my favorite INFORMATION_SCHEMA utility queries

Phil just posted about using INFORMATION_SCHEMA to bulletproof your SQL change scripts. I've been working up a post on using queries against INFORMATION_SCHEMA views to generate SQL scripts, so this seems like a good time to chime in. The INFORMATION_SCHEMA views are views which describe a database's objects and schema. Phil did a great job of explaining what the INFORMATION_SCHEMA views are, so go take a look at his post if you'd like to know more. As the title of his post indicates, he's using INFORMATION_SCHEMA to make SQL change scripts more robust. I'm going to focus on using ad-hoc queries against INFORMATION_SCHEMA to save time.

The INFORMATION_SCHEMA views allow you to use SQL queries against your database schema, a fact you can exploit by writing queries which generate SQL. I usually shift to text output mode (Ctrl-T in both QA and SSMS), execute the ad-hoc query, copy the output to a new query window, and run it. You can consolidate this to one step; I'll discuss that in a bit.  

---------------------------------------------------------------------------------------------------
--Generate delete statements for all tables in the current database.
--Note that foreign key constraints will throw errors the first time you run this,
--but given the fact that you're deleting from every table in the database you don't care.
--Unless you've got circular FK constraints, you can just keep hitting execute (F5) 
--until the errors go away...
---------------------------------------------------------------------------------------------------
select 'DELETE FROM ' + quotename(table_name) + ';' from information_schema.tables where table_type = 'BASE TABLE'

---------------------------------------------------------------------------------------------------
--Usually when I delete all data, I want to reset autoincrement identity fields.
--This query will generate scripts to do that for you.
---------------------------------------------------------------------------------------------------
select 'DBCC CHECKIDENT ('''+quotename(table_name)+''', RESEED, 0);' from information_schema.tables where table_type = 'BASE TABLE'

---------------------------------------------------------------------------------------------------
--This generates scripts to drop every table in the database. I usually use this to generate
--all the scripts, then modify it to keep static lookup table data (state names, member types, etc.).
---------------------------------------------------------------------------------------------------
select 'DROP TABLE ' + quotename(table_name) + ';' from information_schema.tables where table_type = 'BASE TABLE'
---------------------------------------------------------------------------------------------------
--This generates the change scripts to change ownership to dbo for all objects owned by someone else.
---------------------------------------------------------------------------------------------------
select
'EXEC(''sp_changeobjectowner @objname = ''''' +
(table_schema) + '.' + (table_name) + ''''''
+ ', @newowner = dbo'')'
from information_schema.tables
where table_schema != 'dbo'
order by table_schema, table_name

select
'EXEC(''sp_changeobjectowner @objname = ''''' +
(routine_schema) + '.' + (routine_name) + ''''''
+ ', @newowner = dbo'')'
from information_schema.routines
where routine_schema != 'dbo'
order by routine_schema, routine_name
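If all those nested quotes make your eyes cross, here's roughly what one generated statement looks like - assuming a hypothetical table Orders owned by someuser:

EXEC('sp_changeobjectowner @objname = ''someuser.Orders'', @newowner = dbo')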

---------------------------------------------------------------------------------------------------
--And if you just want a listing of tables and views that aren't owned by dbo:
---------------------------------------------------------------------------------------------------
select
table_name
from information_schema.tables
where table_schema != 'dbo'
order by table_schema, table_name

Oops - that last one didn't generate any SQL, it just ran a report. Well, sure, it's slick to use SQL to generate SQL, but sometimes you just want information. This came in handy the other day - Phil found a SQL Injection vulnerability in some code we'd inherited, and it turned out that a stored procedure was at fault. The ASP.NET code was doing things correctly - using ADO.NET parameters, etc. - but the stored procedure was concatenating them into a SQL string and executing it. Whoops! I wanted to get an idea of how many more procs were following that pattern, and this query did the trick:

---------------------------------------------------------------------------------------------------
--This makes a quick and dirty list of stored procedures which may execute dynamic sql.
---------------------------------------------------------------------------------------------------
select 
'/' + replicate('*',10) + routine_name + replicate('*',10) + '/',
routine_definition
from information_schema.routines
where routine_definition like '%exec %'
or routine_definition like '%execute %'
or routine_definition like '%sp_executesql %'

As I said before, my preference is to generate scripts and execute them in two distinct steps. This allows you to keep the scripts in version control, but more importantly it allows you to review the script before you run it. That's especially important when you're running scripts which could potentially affect every object in your database. However, if you want to merge these two steps, you can use a technique I previously wrote about which executes a dynamic query once for each row in a temporary table. Notice in the query below that I've just inserted one of the above scripts between the "--Begin statement" and "--End statement" comments:

---------------------------------------------------------------------------------------------------
--Create and execute the SQL in one step...
---------------------------------------------------------------------------------------------------
USE [DBNAME]
GO

declare @RowCnt int
declare @MaxRows int
declare @ExecSql nvarchar(255)

select @RowCnt = 1

declare @statements table (rownum int IDENTITY (1, 1) Primary key NOT NULL , statement varchar(255))
insert into @statements (statement)
--Begin statement
select
'EXEC(''sp_changeobjectowner @objname = '''''+
ltrim(table_schema) + '.' + ltrim(table_name) + ''''''
+ ', @newowner = dbo'')' from information_schema.tables
where table_schema!='dbo' order by table_schema,table_name
--End statement

select @MaxRows = count(1) from @statements

while @RowCnt <= @MaxRows
begin
select @ExecSql = statement from @statements where rownum = @RowCnt
print @ExecSql
execute sp_executesql @ExecSql
Select @RowCnt = @RowCnt + 1
end

Once you start using the INFORMATION_SCHEMA views, you'll find more and more uses for them. Some people go off the deep end and use these things to generate ASP.NET code; I think that's just going a bit too far. If you've got some other favorite uses of the INFORMATION_SCHEMA views, please leave them in the comments below.

And of course, these views are only one of many ways to get meta with your data - there's tons of fun to be had with SMO, and some day I might get around to posting about using Excel to generate SQL, XML, and HTML...

Posted by Jon Galloway | 4 comment(s)

Motorcycle blogging from San Diego to the Arctic Circle

As Phil mentioned earlier, my brother Brian is blogging his motorcycle trip from San Diego, California to Deadhorse, Alaska. Blogging a long trip is getting routine, but what's interesting is that Brian's actually blogging from a computer / webcam / GPS setup that's mounted on the motorcycle. Brian just finished a 10 year stint in the Navy and has been planning this trip for a long time. He's in northern Washington now, and will go over 17,000 miles by the time he's done. You can read more detailed specs on his bike, computer gear, and software on his blog, but here are the highlights:

Bike

  • Yamaha FJR-1300  "Super-Sport Touring"
  • 1300cc
  • 145hp

Computer

  • Mini ITX board
  • 1GHz CPU
  • 1GB RAM
  • 7" Newision touch screen
  • Holux Slim 236 GPS
  • Windows XP Motorcycle Home Edition
  • Logitech Fusion webcam
  • Cingular 8125 (aka HTC wizard) and WiFi for internet access

Software / Services

  • SubText
  • PostXing
  • Flickr
  • YouTube
  • GPSVisualizer
  • RSSFWD
  • Primary Navigation: iGuidance (modified for easier use with iGmod)
  • Trip Planning: Microsoft Streets & Trips 2006
  • Frontend: Centrafuse
  • Video Capture: Capture! (look on mp3car.com for more info)

I wrote some custom .NET Winform apps for him (the coolest was a transparent scalable version of osk.exe, the Onscreen Keyboard provided in XP) but they ended up being unnecessary due to the front end software he found. Still, with SubText and PostXing he's got his own version of the DotNet Rocks Road Show. He had to fight with touch screen drivers and power while he was putting it together, but his system is holding up well on the road. He built an extra fan onto the case to make sure the system stays cool despite being mounted in a motorcycle gear bag.

So far it's been a very entertaining read. A normal daily post includes pictures, video, funny travel commentary, and quite often GPS tracks. The video clips have been really cool - he films via webcam while driving, then edits the video in Windows Movie Maker and speeds it up so it looks a bit more like that freeway chase scene in Matrix 2.

And now I read that there may be an on10 episode in the works about his trip...

Interesting side note: I first tried to map the route from San Diego to Prudhoe Bay in maps.google.com, but they said they couldn't compute the route. local.live.com had no problem with it, though. Maybe it's time for me to fight my habit.google.com and check out these live.com sites a bit more...
