For years, there's been talk of tomorrow's technology - user agents, and how they'll make our lives better. How we'd be able to sit there waiting for information to come to us, with agents going away collecting it, collating it, and presenting it back. How this would change our lives. Like many, I've been of the opinion "I'll believe it when I see it" for a long time, and it kind of got written off and pushed to the back of my mind. And then I realised, user agents are here and have been for quite some time. It's just that the reality's not quite how I (or I guess others) envisaged it to be.
From the very first e-mail client, we've had user agents; desktop applications that communicate with remote servers and bring down data ready for when we want it, and notify us of its arrival. The addition of out-of-office replies, then Rules Wizards, then Junk mail filters are all moving the likes of Outlook in the direction of the archetypal "user agent". But the presentation is still largely standard desktop client. Other examples of such applications are the MSN Messenger Hotmail notifier and RSS readers.
To me, the two true prerequisites for a real user agent world are:
- Web services/XML - a common means of requesting, retrieving, and interpreting data.
- A client-side infrastructure for displaying this information.
Whilst the first of these is now available, standardisation of schemas is currently one of the biggest problems. Even RSS, one of the most widely used standards, has several different versions (0.90, 0.91, 2.0), and competition from the likes of Atom. What the world really needs is standardised ways of representing everything else - personal information, stock quotes, and so on.
A larger problem exists in the client-side infrastructure. Support for platforms besides Windows aside, there is currently no simple way built into the platform of writing such an agent that "integrates" with the environment and other such agents whilst having adequate usability. If you look at all the agents so far - e-mail clients, RSS readers, MSN Messenger - there are attempts to integrate all of these things, but the result is neither controlled nor elegant. In the future, this problem will be solved by the Longhorn Sidebar, along with MSN-esque popups. In the meantime, the closest thing I've seen so far is www.desktopsidebar.com; a true framework for integrating various agents that borrows more than a few ideas from the Longhorn Sidebar. It supports Messenger, performance monitors, weather feeds, RSS feeds, the obligatory stock quotes, and so on, as well as acting as a framework for hosting further agents and giving them a UI with which to interact with the user.
So, agent technologies are here, they just crept up on us quietly - evolving over time rather than arriving with a fanfare fitting the amount of hype around them.
Long time since my last post - I've been getting ready for and going on a nice bit of pan-European road racing. The car was getting a fair bit of stick, so didn't make it the full distance, but that's another one to cross off the list of lifelong ambitions.
I've now updated both the NUnit GUI and MSI Command Launcher applications. The former has an extra try-catch around an event handler for showing details of errors, because of some weird boundary condition. The latter now changes directory to the folder containing the executable it's been asked to run, allowing batch files and the like to use relative paths.
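For anyone curious what that second change amounts to, here's a minimal sketch of the idea (names are illustrative, not the launcher's actual code) - the working directory is pointed at the folder the command lives in before the process is started:

```csharp
using System.Diagnostics;
using System.IO;

class Launcher
{
    // Illustrative only: run a command with its own folder as the working directory,
    // so any relative paths inside a batch file resolve against that folder.
    static Process Launch(string commandPath, string arguments)
    {
        ProcessStartInfo startInfo = new ProcessStartInfo(commandPath, arguments);
        startInfo.WorkingDirectory = Path.GetDirectoryName(Path.GetFullPath(commandPath));
        return Process.Start(startInfo);
    }
}
```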
Download URLs are the same as ever:
Having spent a few posts on .NET specific tools, I'll get back to more process and architecture oriented topics for a bit now...
As a believer in Agile and XP for delivering software (where appropriate), I subscribe to the "simplest thing that can possibly work" ethos - not overengineering a solution based upon the assumption that as yet unknown requirements will change the defined implementation. An application of YAGNI, if you will. Whilst numerous people in the office have taken this mantra on, I've spotted a lurking anti-pattern that's reared its head a couple of times:
"The simplest thing that could possibly get me out of work"
One of the great things about Agile/XP is that it delivers some control back to the developer, removing the need for business analysts that are basically translator-patterns from business -> technology. This is also one of the approach's problems (which I will discuss in another blog entry soon): if developers cling to a statement such as "the simplest thing that could possibly work", it means that they are using none of their intelligence, experience, and insight to make decisions. Like most maxims, "The simplest thing..." is a principle that cuts to the heart of the majority of cases. It is also a rod for your own back if applied blindly rather than judging the implications of all solutions. In my opinion, a developer is a "professional" who shouldn't mechanically apply rules to derive solutions - a modicum of intelligence is required at each juncture, not a parrot-like ability to repeat a phrase.
I've seen this anti-pattern manifest itself most recently where 2 solid man days were wasted on a task that would've taken 10 minutes if there'd been an understanding of what "simple" implied, and the more elegant solution hadn't been discounted for not involving the simplest steps.
This leads, for me, to two rules:
- Whilst in the vast majority of cases, the simplest approach is the best one, there are numerous other cases where something that is only marginally more complicated will clearly give many times the potential going forwards.
- The "simplest thing" and the "quickest thing" are quite often very different. "Simple" could mean going through 100 files by hand, manually searching for similar text and overtyping it where needed. "Quick" could mean hitting Ctrl-R in a decent editor and typing in a simple regular expression. I know which of the two options I'd choose. And, unfortunately, I know people that would choose the other option, protecting themselves with the "shield of simple". The "sum of simple" should actually be: simplicity-of-each-step x number-of-steps.
The problem with simplicity is that it exhibits no intelligence.
I finished the initial release of my latest (and possibly last for a while) MSI plugin tonight. This one, MSI Command Launcher, extends the ability of custom actions to run user-defined code by:
- Supporting the execution of batch files (standard Setup & Deploy packages don't allow this)
- Allowing execution to be rolled back based upon the exit code returned from the script/executable (see the sketch after this list)
- Providing a standardised Windows GUI for showing command-line input/output, rather than "shelling out" to a DOS box
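To give a flavour of how the roll-back part can work (this is a rough sketch of the general technique, not the MSI Command Launcher source), a .NET installer custom action can turn a non-zero exit code into a rolled-back installation by throwing an InstallException:

```csharp
using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;
using System.Diagnostics;

[RunInstaller(true)]
public class CommandCustomAction : Installer
{
    public override void Install(IDictionary stateSaver)
    {
        base.Install(stateSaver);

        // Hypothetical command - in practice this would come from the custom action's data.
        Process process = Process.Start("cmd.exe", "/c deploy.bat");
        process.WaitForExit();

        // Throwing InstallException from a custom action causes the MSI to roll back.
        if (process.ExitCode != 0)
        {
            throw new InstallException("deploy.bat failed with exit code " + process.ExitCode);
        }
    }
}
```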
MSI Command Launcher runs as a proper ProjectInstaller and can be run (i.e. tested) standalone from the command line. Documentation on usage is included in the archive that can be downloaded from: http://www.altervisitor.com/software/MSICommandLauncher.msi
When running, the application appears as follows:
This is a first release of this application, and I've not tested it as much as the others, so feedback would be great. The only problem I know about occurs when writing huge amounts of text to the console in an interactive script (one that requires user input) - the RichTextBox it writes to is pretty slow to update, meaning that input may be expected (and output blocked) before all of the preceding text has been written to the screen. In applications that are more judicious in their usage of Console.WriteLine, and applications that don't read from the console, this problem won't exist.
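For anyone wondering where the bottleneck comes from, the output path looks roughly like the sketch below (hypothetical names - the launcher's real plumbing differs): every line read from the child process has to be marshalled onto the UI thread and appended to the RichTextBox, which gets expensive when output arrives in bulk.

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Windows.Forms;

class OutputPump
{
    // Assumes the process was started with RedirectStandardOutput = true and UseShellExecute = false.
    public static void Pump(Process process, RichTextBox outputBox)
    {
        Thread reader = new Thread(delegate()
        {
            string line;
            while ((line = process.StandardOutput.ReadLine()) != null)
            {
                string text = line;   // copy so each Invoke sees its own line
                // Every append is marshalled to the UI thread - slow when thousands of lines arrive.
                outputBox.Invoke((MethodInvoker)delegate
                {
                    outputBox.AppendText(text + Environment.NewLine);
                });
            }
        });
        reader.IsBackground = true;
        reader.Start();
    }
}
```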
If you do use this in your projects, please drop me a mail to let me know - I'd like to track the uptake of all the tools I put out on the Net, so I know which ones to support/extend.
One of the things that's irked me for quite some time is how, even in a mature TDD environment, the tests that are written never seem to make it past the integration server. Yet vast amounts of time get spent tracing and fixing the differences between environments that cause glitches in execution. The time it takes to spot these errors, and the fact that the installation of bad software has already occurred, make it a bit like shutting the stable door after the horse has bolted. To me, it became apparent quite some time ago that tests shouldn't just be a means of validating refactorings and amends locally; they should also be a first line of defence when it comes to deploying to and troubleshooting environments (security factors aside).
So, after a few late nights over the last week, the project I've been working on has been finished - NUnit MSI. This application does pretty much what it says on the tin - it sits in MSI installers as a custom action and runs the tests as part of the installation package, deciding whether or not to continue installation based upon success (and the user's input if so desired). It can be downloaded from: http://www.altervisitor.com/software/NUnitMSI.msi, and here's a screenshot to give you an idea:
Screenshot of NUnit MSI
As well as running from within installers, I personally now use it as a custom build action to give me a "green light" that I've not broken anything, rather than having the heavier-weight NUnit GUI running all the time (although it is still useful for persistent errors and larger projects).
The code for NUnit MSI is based directly on NUnit v2.2 (2.2.0), and makes use of all the behind-the-scenes logic from that, just re-implementing the UI portion of NUnit. Full instructions are included in the archive, and are summarised (in brief) below. As a parting comment, if you use this, please let me know and give me feedback - I'm all for extending it, polishing it, making it work as a VS.NET plugin, etc.
Running the NUnitMSI.exe application without any command-line parameters will look in the folder it resides in for the first file with a .nunit file extension. This will automatically be loaded and the tests run. If all the tests succeed, the application will pause for one second then exit successfully. This is the exact same behaviour when it is added to a Setup and Deploy project.
More advanced command-line usage takes the form:
NUnitMSI.exe [config.nunit] [/queryerrors]
Giving a relative filepath for "config.nunit" will cause the specified NUnit project file to be loaded, instead of one being probed for.
The "/queryerrors" parameter will cause a dialog to be shown should errors
be detected, asking the user if they wish to continue with installation
regardless. The default is to NOT show this dialog, and for a failed test
to result in the installation rolling back. However, whenever tests fail,
whether this dialog is shown or not, the GUI does NOT automatically exit -
it remains open to allow the user to inspect the errors in more detail.
Usage example: "NUnitMSI.exe ../tests.nunit /queryerrors"
This basic-usage mode allows NUnit MSI to be integrated with a local build process as a "post build action", allowing for a succinct graphical display of the test results, rather than requiring a heavyweight interface such as the NUnit GUI, or having to parse the output of the command-line version.
NOTE: once the tests have completed, pressing "F10" on the form will cause them to run again. This is not documented in the GUI, but is useful when running the application as an alternative to the standard NUnit GUI.
NOTE: If the application does not have a ".nunit" file specified, and it cannot locate one in the execution directory, it will exit automatically with a status code of "-3". Other status codes are:
-1: Tests failed
-2: Invalid ".nunit" file specified
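The status codes above are what a wrapping build script or installer step has to interpret, so here's a small sketch of my own (not part of NUnit MSI itself, and assuming a zero exit code indicates success) that launches the tool and maps the documented codes:

```csharp
using System;
using System.Diagnostics;

class RunTests
{
    static int Main()
    {
        // Paths are illustrative - point these at wherever NUnitMSI.exe and the project file live.
        Process process = Process.Start("NUnitMSI.exe", "../tests.nunit");
        process.WaitForExit();

        switch (process.ExitCode)
        {
            case 0:  Console.WriteLine("All tests passed."); break;   // assumed success code
            case -1: Console.WriteLine("Tests failed."); break;
            case -2: Console.WriteLine("Invalid .nunit file specified."); break;
            case -3: Console.WriteLine("No .nunit file specified or found."); break;
            default: Console.WriteLine("Unexpected exit code: " + process.ExitCode); break;
        }

        return process.ExitCode;   // propagate the result to the calling build script
    }
}
```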
I wrote a bit of a monologue in an e-mail to some of the developers at work the other day about documenting code. I've never been a great fan of huge swathes of inline comments splitting up what's only 4 or 5 lines of self-explanatory code, so when the mantra of "code should be self documenting" came along, I didn't really have much of a reaction to it. But I've been thinking about that, and I came up with the following conclusion, which was the opening line of my e-mail:
When have you ever bought a professional product that didn't come with a manual?
By this, I'm not just talking software products; I'm including hi-fis, coffee makers, and so on. I'm still of the opinion that code within the body of a method should be self-documenting. Personally, I probably write 1 line of comments for every 10-15 lines of code, and it's usually just a "section heading". However, there are occasions where we all have to write obtuse code: using logical bit-shifts in some CPU-cycle-critical operation rather than doing standard calculations, marshalling some parameter differently because of a bug in a system being integrated with, and so on. Such things clearly need descriptions of why a choice has been made. This still only amounts to one single-line comment every few methods or so, but it means that self-documenting code is simply something to aim for that will never quite be achieved.
My real bug-bear has been with XML documentation of code - the fact that developers seem to use the "code should be self documenting" mantra to protect themselves from having to document the public interfaces that others have to program against, which really isn't code-commenting at all. When was the last day working in Visual Studio .NET that you didn't make some use of MSDN, whether it was Intellisense, hitting F1 on a method, or searching for a class in the MSDN Library? That documentation's all there because of XML comments. To me, putting a system into a live environment in a financial institution with no technical documentation for it is unprofessional - plain amateur, if you will. Yes, the code should be self documenting, but what about the purpose of each method? Details on where it's called from? The permissible values of parameters? Details on why something's changed from a previous version? What the code is specifically not supposed to support? How it deals with concurrency and locking issues?
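For anyone who's not in the habit, this is the kind of thing I mean - a contrived example (the class and method are invented) showing the sort of XML comments that Intellisense, F1, and the MSDN-style output all feed off:

```csharp
using System;

public class AccountService
{
    /// <summary>
    /// Transfers funds between two accounts, locking both for the duration of the operation.
    /// </summary>
    /// <param name="fromAccountId">The account to debit. Must refer to an open account.</param>
    /// <param name="toAccountId">The account to credit. Must differ from <paramref name="fromAccountId"/>.</param>
    /// <param name="amount">The amount to transfer. Must be greater than zero; negative
    /// transfers are deliberately not supported.</param>
    /// <returns>The reference number of the resulting transaction.</returns>
    /// <exception cref="ArgumentOutOfRangeException">Thrown when <paramref name="amount"/> is zero or negative.</exception>
    /// <remarks>Called from the overnight batch run as well as the web front end, so it must remain thread-safe.</remarks>
    public string Transfer(int fromAccountId, int toAccountId, decimal amount)
    {
        if (amount <= 0)
        {
            throw new ArgumentOutOfRangeException("amount");
        }

        // ... actual transfer logic omitted for brevity ...
        return Guid.NewGuid().ToString();
    }
}
```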
This isn't about "commenting the code" - it's about when teams in an enterprise are moving between projects on a regular cycle, or when support is outsourced, or when your code is externally exposed for others to consume; in such circumstances it is absolutely critical that this documentation is produced. It just happens that one of the best tools for doing this, and for allowing integration with other documentation and the development environment, is NDoc. And it just happens that this works by having you insert the documentation into the code. Is this a coincidence? Not likely. If code and documentation are maintained separately, they will no doubt diverge, but having missing comments raise errors in the build process when doing continuous integration means there are no excuses for a system becoming unsupportable/opaque.
Thinking about it, it's in a developer's best interests, despite the extra time, effort, and clutter in the class (I really wish there was a neater way of handling this in .NET, such as a comment-behind file, as suggested). How many developers complain that they've become tied to a project, limiting their progression, all because of their entrenched knowledge of its intricacies, or the speed with which they can develop on it? Whenever I get down to developing code nowadays, I religiously document every class and public property/method, along with any non-obvious private members (i.e. ones that are hacked and I need reminding to come back to) - it's common courtesy both to myself and anyone else who ever comes across it.
Final thought: How long would it have taken you to learn .NET if none of the MSDN documentation had been present? What would the cost of that have been?
I use NUnit. A lot. Almost every time I hit compile, in fact. And I'm not a big fan of the command line version; there's a certain feel-good factor from seeing all of the orbs turn green. But this is where my problem lies with NUnit - although it's technically great, the interface just looks really clunky. Half of the problem is that it doesn't support Windows Themes - the controls neither have their FlatStyle set to System nor has the call to EnableVisualStyles been made. The rest of the problem is just to do with small tweaks around changing text to images, replacing the red/amber/green orbs with prettier ones, and so on.
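For reference, the theme support I'm talking about boils down to something like this stripped-down WinForms sketch (obviously the real NUnit GUI has rather more than one button):

```csharp
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Without this call, controls render in the old pre-XP style regardless of the user's theme.
        Application.EnableVisualStyles();

        Form form = new Form();
        form.Text = "Themed form";

        Button runButton = new Button();
        runButton.Text = "Run";
        // FlatStyle.System hands rendering over to the OS, so the button picks up the current theme.
        runButton.FlatStyle = FlatStyle.System;
        form.Controls.Add(runButton);

        Application.Run(form);
    }
}
```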
So, as I've got an idea for another NUnit related project anyway, I decided to get to grips with the source code for it by refreshing the UI. Below are before and after pictures, and the updated files for running it yourself are here:
As I've not changed any real code, only the UI of the EXE (and a UI DLL), this archive simply contains two files to overwrite those that already exist in your NUnit folder. All your existing projects, links, etc. should continue to work. Note that this is an update to the latest version at the time of writing - v2.2 (v2.2.0) - and will only work with this version.
If you like/don't like this update, please get in touch, as I'll be mailing the NUnit guys themselves over the next couple of days to see if we can't get the proper project updated.
Within the next couple of days I should have finished the NUnit-based tool I'm writing, too, which should be well worth the download...
Original NUnit 2.2 GUI
Updated NUnit 2.2 GUI
One problem that's always been apparent in .NET projects is applying changes in application configuration during deployment. There's a longer article I'm going to write shortly on managing differing configurations between environments in the enterprise. But for now, I've come up with a small utility that overcomes one glaring omission in .NET Setup & Deploy projects - taking settings entered within the installation wizard and applying them to the application being deployed. Whilst you can capture settings from the "User Interface Editor" screens quite nicely, and web.config/app.config is a pretty good way of storing settings in smaller applications, there's no built-in way of combining the two. Obviously, in a larger application, a more structured way of storing data is needed than the .config files, and another deployment product such as those by Wise and InstallShield is more appropriate, but that still leaves the majority of applications where the built-in VS.NET support falls short of requirements.
So, as I had a small project to do for a colleague that needed a semi-professional installer, I came up with a WinForms application that takes configuration changes captured by the installer as command-line arguments (name=value pairs), applies them, then closes down, allowing it to be added as a custom action in the installer. It supports a couple of other things, too - merging two config files, running without a UI, etc. It's available for download at http://www.altervisitor.com/software/NetConfigUpdater.msi, and is freely usable/distributable as long as the author credits remain intact. Usage instructions are included as part of the MSI. Feel free to post feature requests in the comments to this entry, and use the mailing form to send any bugs found back to me.
.NET Configuration Updater screenshot
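The core of what the utility does is less exotic than it might sound. As a simplified sketch (not the shipped code), applying name=value pairs to an app.config's appSettings section amounts to a little XML manipulation:

```csharp
using System.Xml;

class ConfigPatcher
{
    // Illustrative helper: apply "name=value" pairs (e.g. captured from installer screens)
    // to the appSettings section of a .config file.
    static void Main(string[] args)
    {
        string configPath = args[0];                    // e.g. "MyApp.exe.config"
        XmlDocument config = new XmlDocument();
        config.Load(configPath);

        for (int i = 1; i < args.Length; i++)
        {
            string[] pair = args[i].Split(new char[] { '=' }, 2);

            XmlElement setting = (XmlElement)config.SelectSingleNode(
                "/configuration/appSettings/add[@key='" + pair[0] + "']");
            if (setting != null)
            {
                setting.SetAttribute("value", pair[1]); // overwrite the existing entry
            }
        }

        config.Save(configPath);
    }
}
```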
My previous solution to spam, detailed in another entry, was based upon how I, as a service-oriented technical architect, naturally approach such problems. Having had a bit of a think about it, and a bit of enlightenment about how money isn't the only currency (CPU time, for instance, is another), I've come up with a new zero-infrastructure solution:
- A plugin is developed for mail clients such as Outlook, Eudora, etc. This is installed on all client machines
- This plugin is activated whenever an item is sent. It takes a subset of:
- The sender's e-mail address
- The recipient's e-mail address
- The subject line
- The date-time stamp
- Using this data, it computes some mathematically intensive function that results in a value (see the sketch after this list). This should take a few seconds (of background processing) to compute. This function doesn't involve public/private keys - it would be a freely available algorithm that simply takes in the order of 2 seconds to come up with a value.
- This value is appended to the outgoing mail as a header - this is the signature
- --- The mail is transmitted and received by the client ---
- A plugin exists in the client's mail-reader that intercepts this header
- The function chosen for the computation must allow the "correctness" of the value to be determined within a fraction of a second rather than several seconds (there are formulae like this - I just can't remember them). Again, this algorithm would be freely available
- The validity of the header determines the validity of the e-mail
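What's being described here is essentially a proof-of-work stamp (Hashcash works along very similar lines, and is the kind of "quick to verify, slow to produce" formula referred to above). As a rough sketch of the idea - not a proposal for the exact function - the sender hunts for a nonce that makes a hash of the message details start with a run of zero bits; finding one costs seconds of CPU, while checking it costs a single hash:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class MailStamp
{
    const int DifficultyBits = 20;   // illustrative - tuned so that signing takes a couple of seconds

    // Sender side: the expensive search for a nonce (the "signature" header value).
    public static string CreateStamp(string from, string to, string subject, DateTime sent)
    {
        string basis = from + "|" + to + "|" + subject + "|" + sent.ToString("u");
        for (long nonce = 0; ; nonce++)
        {
            if (HasLeadingZeroBits(Hash(basis + "|" + nonce), DifficultyBits))
            {
                return nonce.ToString();
            }
        }
    }

    // Recipient side: a single hash verifies the stamp in a fraction of a second.
    public static bool IsValidStamp(string from, string to, string subject, DateTime sent, string stamp)
    {
        string basis = from + "|" + to + "|" + subject + "|" + sent.ToString("u");
        return HasLeadingZeroBits(Hash(basis + "|" + stamp), DifficultyBits);
    }

    static byte[] Hash(string text)
    {
        using (SHA256 sha = SHA256.Create())
        {
            return sha.ComputeHash(Encoding.UTF8.GetBytes(text));
        }
    }

    static bool HasLeadingZeroBits(byte[] hash, int bits)
    {
        for (int i = 0; i < bits; i++)
        {
            if ((hash[i / 8] & (0x80 >> (i % 8))) != 0)
            {
                return false;
            }
        }
        return true;
    }
}
```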
Rather than creating the signature/validating at the client, certain mail-servers could do this - both inbound and potentially outbound (from certain trusted servers). Individual users could set up rules as to whether or not they accept unsigned mails.
Why's this solution good?
Basically, it's realistically free for everyone and requires no infrastructure. If there's one thing that all the file-sharing applications have proven, it's that de-centralised peer-to-peer systems can thrive. In terms of implementation, this solution would take minimal time to develop as a plugin for mail clients - the triviality will lend itself to freeware implementations, leading to mail clients including it in the long term. The fact that an e-mail takes an extra few seconds to send in the background wouldn't affect a normal user, but it would make sending signed bulk-mailings prohibitively expensive. Companies that send genuine bulk mail-shots could just be added to an allow-list on an ad-hoc basis (i.e. when you sign up to a mailing list).
In an ideal world, the effort being expended at any point in a project will be the same as at any other point - the team involved would never be overstretched or under-utilised, always working at an optimal pace. One goal of any software development methodology should be to achieve this burn-rate equilibrium.
In projects run with non-agile methodologies this very rarely happens. The reason behind this is the same reason the projects fail - traditional methodologies involve big up-front designs that will be wrong, requirements will change, and so on. What's interesting is that the energy expenditure signature of these projects is the inverse of what the methodology advocates. The effort *should* be expended at the beginning, ensuring all requirements are captured, that implementation and infrastructure designs are produced, that SLAs are defined, that resources are allocated and equipment acquired. Basically, the aim is to mitigate all risk downstream by spotting it up-front. In reality, you get the inverse signature - rather than energy expenditure tailing off towards the end of the project, it picks up: bugs get found, requirements change, assumptions are proven incorrect, etc. So, you have a rising curve (and this isn't a linear gradient), rather than a falling one. Judging the exact expenditure graphs you're going to get is really tricky, as you never know until the end of a project whether or not you've got all the requirements.
These signatures are far more interesting in an Agile project, however. Whilst the methodology allows for, and is even predicated on, work being executed at a near-constant ongoing pace, the reality is likely to be a tendency towards higher expenditure at one end of the project than the other, and this can be measured quite accurately against stories completed (in XP). My supposition is that the signature this generates may be able to be used to determine what state of health the team/project is in.
If we assume that "0" is a centre-point for energy expenditure that is equivalent to optimal working pace (one that's not inducing burn-out), and that a line is drawn against time on the x axis, then we can represent this signature as a standard 2D graph. I would predict that the individual signatures will vary from team to team depending on the dynamics of the individuals within them. Based upon experience, below is an initial attempt at breaking down a couple of signatures I've seen with projects/teams, and what they imply (note that these are per-iteration graphs as well as per-project).
- If the line starts at zero and rises over the duration, then internal facets may be underdeveloped. One of several things may be happening:
- Assumptions are being made that aren't holding true
- The team members may not know each other's capabilities/responsibilities
- Basically, the learning curve is quite high
- If the line starts off positive and drops to zero over the duration, then the team has spotted a deficiency and is trying to compensate. To me, this implies a mature team in an unknown situation - this may well be down to a lack of supporting infrastructure, environmental defects, or a known lack of information. Either way, the foundations for the team to work upon aren't ideal.
One common signature should hold across all projects/teams, however - a line that remains static at zero implies that the team and project are mature and running healthily... This is something to aim for.
Feedback on these and other signatures would be received with interest...