August 2003 - Posts
This morning, on the drive to work I was thinking about some code I had to write today that had to shut down the PC after it had finished. Aware that I could use ExitWindowsEx to do the job, I wasn't relishing - as I never do - declaring said method so that I could call it from managed code.
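For the record, the interop declaration is only a couple of lines. Here's a rough sketch (the flag values are the standard Win32 constants; bear in mind the calling process also needs the shutdown privilege enabling, which I've left out):

using System.Runtime.InteropServices;

[DllImport("user32.dll", SetLastError=true)]
static extern bool ExitWindowsEx(uint uFlags, uint dwReason);

const uint EWX_SHUTDOWN = 0x00000001;  // shut the system down
const uint EWX_POWEROFF = 0x00000008;  // ...and power off, if the hardware supports it

// ExitWindowsEx(EWX_SHUTDOWN | EWX_POWEROFF, 0);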
What do I discover when I refresh SharpReader this morning? Rob McLaws points me in the direction of a Windows command line command that can restart, shutdown and otherwise screw with any computer on the network. So, now I have an option - either call ExitWindowsEx, or give Process.Start an instruction to run “shutdown”.
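The Process.Start route really is a one-liner. A quick sketch, using the XP/Win2K3 shutdown syntax (-s to shut down, -t for a timeout in seconds, -m \\machine to aim it at another box on the network):

using System.Diagnostics;

// Shut down the local machine in 30 seconds.
Process.Start("shutdown", "-s -t 30");

// Or point it at another machine on the network:
// Process.Start("shutdown", @"-s -t 30 -m \\SOMESERVER");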
What's amazing to me about this is it brings home just how damn useful blogs actually are as a learning experience, and how timely information that comes over this media can be. I was thinking about a problem, and lo and behold there I am instantly in touch with someone who's thinking about a solution to my problem from an entirely different perspective.
Updated: As far as I can tell, the shutdown command isn't supported on anything other than XP and Win2K3.
Tomas Restrepo is reporting that Virtual PC is now available for download from MSDN subscriptions. It is indeed, and it can be found under Platforms\Connectix Virtual PC 5.2 if you sign in.
For those without MSDN subscriptions, a 45 day eval can be downloaded from the Microsoft site here.
This is a bit of a shame for me, because I've been a long time fan of VMware, which is an amazing product written by very clever geeks. I wonder how they'll fare now given that Microsoft have effectively come out with a competing product with a much, much lower price tag. For me, given that I've been using VMware for so long and that I trust it, I'd rather advise the particular client that needs a tool like this to use VMware, but seeing as their developers already have MSDN subscriptions, it's a bit of a hard sell getting them to fork out an additional $300 per seat.
Interestingly, this whole problem came about for my client because they suddenly discovered that their application would not talk to Oracle when running on anything other than XP Pro. All the developers were using XP Pro (including myself), and had been running interactive tests and unit tests on just XP Pro. So, one day before release, when an in-house tester mentions that “it's coming up with a funny error about distributed transactions”, the sh*t well and truly hits the fan because we discover that it ain't going to work on anything other than XP Pro! The moral of the story - test on different platforms!
This is actually a damn hard problem for small/medium software houses to deal with, because the more requirements you have, the more testing combinations you have. Imagine you want to support three platforms: XP, Win 2K and NT4. OK, so you have a database application that runs against SQL Server - that means that every test you do has to be done three times. Now imagine you have to support Oracle - every test now has to be run six times. Then they decide they want to support Win 2K3 - that's eight times. A big customer is using Sybase, so it needs to be checked against that - 12 times.
Two things can help us out here - one of them is some VM solution, like VMware or Virtual PC, because it makes it very cost effective to set up the different images that we want to test, e.g. XP Home with Oracle 8 client, XP Home with Oracle 9 client, XP Home with SQL Server and Oracle 8 client, and so on and so forth. This means that testers can run their interactive test scripts against a variety of platforms without incurring a massive hardware investment.
Test driven development also really helps us out, because we can execute the unit tests against all of these images automatically - or at least partly automatically. Simply set up a machine with the images on it, loop through each image and run the entire unit test suite. Then, when a developer makes a change that breaks an obscure image - some bizarre NT 4 dialect with an old-ish service pack running Oracle 8 - you actually know about it, hopefully well before the point where you only have two hours left to fix it.
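The harness end of that doesn't have to be clever. Booting each image is down to whatever scripting your VM product gives you, so I've left that as a comment, but the loop itself is about this dumb (the image names and test assembly name here are made up for illustration):

using System;
using System.Diagnostics;

class TestAllImages
{
    static void Main()
    {
        // Hypothetical list of configurations - one per virtual machine image.
        string[] images = { "XPHome-Oracle8", "XPHome-Oracle9", "XPHome-SQLServer" };

        foreach (string image in images)
        {
            // Booting the image and copying the freshly built assemblies onto it
            // is down to your VM product's scripting - not shown here.
            Console.WriteLine("Running suite against " + image);

            // NUnit's console runner; which database the tests talk to would be
            // driven by the test assembly's config for this image.
            Process runner = Process.Start("nunit-console.exe", "MyApp.Tests.dll");
            runner.WaitForExit();
        }
    }
}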
A note from Jeff Key points us all in the direction of displaying a cool validation balloon on text boxes...
The article can be found here.
Oh, so that's why I need to be at PDC!
Developers to get first taste of 'Longhorn'
After months of speculation, Microsoft plans to give developers their first hard look at the next version of Windows in October.
The Redmond, Wash., company expects to release a "developers preview" of the new operating system, code-named Longhorn, at its professional developers conference in Los Angeles. Although it won't be a full beta, or test, version, Microsoft executives have promised it will be more than just "slideware," software that companies haven't been using and don't know when it will be coming.
The company is expected to hand out a development kit that will give developers their first look at the inner workings of the much-heralded new operating system. Longhorn will usher in a raft of changes from previous versions of Windows.
In his Web log, Brad Abrams, a lead program manager involved with Microsoft's .NET initiative, said that developers will walk away with Longhorn code.
"If you are like me, you will not believe it until you see it in the bits," Abrams wrote. "Not only will you get to see some of the new look and feel (of the) stuff we are doing, but you will also get (a software development kit) and tools support for programming to the huge, new managed APIs that Longhorn offers."
....mmmm... new toys!!!
Couple of thoughts spring to mind on this Monday, which is a “public” holiday over here in the UK (OK, it's a “bank” holiday to us Brits)...
After about eighteen months of a frankly useless e-mail system that's continually getting jammed up with spam, I think I've finally found a solution. I don't want to come across as too salesmanlike about this, but SpamArrest really is very good. It works on the principle that it continually polls your POP3 mailbox, and any senders that aren't on your “approved” list get an e-mail directing them to the SpamArrest site, where they're asked to type in a word rendered as a GIF image. In the past 20 or so hours, it's stopped 100 pieces of spam, and delivered... three legitimate pieces of e-mail.
I've tried most of the other anti-spam software and this is definitely the only one that hasn't scared the crap out of me that legit e-mails are going to be lost.
Second thought for the day is a phrase from a book called Extreme Programming Explained: Embrace Change by Kent Beck. In Chapter 4 he talks about the four control variables of a software project. Under “Quality” he says that “Quality is a terrible control variable.” Well said - when I'm feeling the need to rush something because I'm under pressure, I remind myself of this, breathe deep and take stock of the situation. It's never worth it - and Kent's other thoughts on the subject bear this out: “You can make very short-term gains by deliberately sacrificing quality, but the cost - human, business and technical - is enormous.”
Continuing my brain dump, if you haven't checked out http://www.extremeprogramming.org, give it a go. Really nice Web site that contains a wealth of information in TARDIS-like proportions.
Lutz may be famous for Reflector, but for those of you needing a menu and toolbar library, his CommandBar for .NET is worth a look.
It's not so hot for dynamic menus and toolbars (lots of flickering, which I for one can't seem to fix without breaking it), but if you want a good looking set of menus and toolbars it's certainly easy to use, is reliable and looks good.
One thing - Lutz actually does a rather good job of creating a neat API for his toolbars that follows the standards Microsoft put out there for .NET developers. Those not looking for menus and toolbars may still want to take a peek, by way of a best practice example.
I've been meaning to blog for a while on mind mapping, but I see that Tim Sneath has blogged about it here.
If you have the right sort of mind, mind mapping is an amazing tool for helping you think around the problem. Luckily for us, developers tend to have this sort of mind. It's similar in a way to blogging - as you have a thought you write it down, then that cues you into other thoughts, which you write down, then you have another thought related to something you were thinking about ten minutes ago, so you find the original thought and add your new thought to that. At the end of the process (if there is an end!) is a far more concrete understanding of what the problem is all about.
Rather than reading about it, you can download a tool and play with it! MindJet (http://www.mindjet.com/) has a mindmapping tool that is pretty damn good. I use it virtually every day.
I've long been an advocate of hosting Remoting objects in an application's own Windows Service. However, I always knew that there was this "other way" of doing it in IIS (some would argue that I have this the wrong way round!). A new MSDN article ".NET Remoting Architectural Assessment" has plenty to say on Remoting best practice.
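For anyone who hasn't seen the Windows Service approach, here's roughly what it boils down to - a minimal sketch, with the type name, URI and port number made up for illustration:

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
using System.ServiceProcess;

// A hypothetical remotable type - it has to derive from MarshalByRefObject.
public class OrderService : MarshalByRefObject
{
    public string Ping() { return "alive"; }
}

public class RemotingHost : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        // Listen on TCP with the default binary formatter.
        ChannelServices.RegisterChannel(new TcpChannel(8085));

        // Publish the type as a server-activated singleton at tcp://host:8085/OrderService.rem
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(OrderService), "OrderService.rem", WellKnownObjectMode.Singleton);
    }

    protected override void OnStop()
    {
        // Nothing to do - the channel goes away with the process.
    }

    public static void Main()
    {
        ServiceBase.Run(new RemotingHost());
    }
}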
Of particular interest to those of you who tend to build their own services for hosting Remoting objects is this:
Hosting inside a system service
This possibility is more interesting, as the functionality is not so much provided by the Remoting infrastructure as by the notion of a system service itself. System services can be configured to start when the machine is started and to stay around until you tell them to go away, which is ideal for remote hosting. Note that IIS applications can also be configured to behave similarly by setting "High Isolation Mode" for the Virtual Application. This has a number of implications, however, which are not discussed in this article. Customers have asked some hard questions about this mechanism, which calls its usefulness into question. First, some advantages: We have already mentioned the benefits of a service itself. Furthermore, we do have full control over the activation of the host process—for example, we might elect to use dynamic publication or client activation. We don't need IIS, since we can get our user profile loaded and we get good performance with binary-encoded messages over TCP.
The disadvantages weigh pretty heavily, however. To start with, you need to build your own authentication and authorization mechanisms, should you require them. The article .NET Remoting Security Solution, Part 1: Microsoft.Samples.Security.SSPI Assembly provides a very full and detailed description of a security solution for .NET Remoting that " ... implements a managed wrapper around SSPI, providing the core functionality needed to authenticate a client and server as well as sign and encrypt messages sent between the two." This is definitely a great asset and provides a "façade" for adding this functionality in a useful way. The problem is that it is not a supported product but an "informal" attempt to supply missing functionality. Also, it's a little intimidating for a developer, as the solution relies on the extensible nature of formatters and channels. All this needs to be hidden and the functionality surfaced by just adding an entry to the Remoting configuration, which depicts, for example, the use of Windows NT Challenge/Response (NTLM). It is likely, however, that such security mechanisms will be incorporated into a future version of .NET Remoting.
A System Service would also need to be scalable and reentrant to be useful as a Remoting server, as these features will be required in a multitiered distributed application. Without IIS, for example, the hosting service would have to manage its own auditing and authorization, both of which come standard with IIS.
For these reasons the System Service hosting mechanism is of limited use, perhaps in a constrained environment where messages are queued to a single exchange, security is not an issue, or IPSec is available over TCP.
(I am trying not to rant and rave about this - this just causes me a serious headache!)
This is one of those weird debugging issues - hopefully it will save some of you out there some time. About an hour ago, any changes I made to one of my application assemblies just didn't seem to be “sticking”. I was making changes and nothing appeared to be happening! Moreover, when I tried debugging the thing in VS .NET, any breakpoints I set would appear to take (no question mark in the little red dot), but VS .NET would just ignore them and skip over any calls into the assembly.
The solution? For some reason, the assembly had managed to get itself stored in the shadow copy cache. I am clueless as to why (maybe someone can enlighten me), but here's the diagnosis and solution for those who've experienced this problem.
First off, take a look in the Output window as VS .NET is running your app. Trace information is written out on assembly load, so you can see what the debugger is attaching to. Here are some of the lines:
'Bs.Core.Host.exe': Loaded 'c:\work\xxx\bs.core\bs.core.host\bin\debug\bs.core.dll', Symbols loaded.
'Bs.Core.Client': Loaded 'C:\Work\xxx\Bs.Core\Bs.Core.Client\bin\Debug\Bs.Core.Client.exe', Symbols loaded.
'Bs.Core.Client.exe': Loaded 'c:\work\xxx\bs.core\bs.core.client\bin\debug\bs.core.dll', Symbols loaded.
'Bs.Core.Host.exe': Loaded 'c:\windows\assembly\gac\system\1.0.5000.0__b77a5c561934e089\system.dll', No symbols loaded.
'Bs.Core.Client.exe': Loaded 'c:\documents and settings\matthew\local settings\application data\assembly\dl2\a21peep8.8yn\d2rhz2m1.a4c\abec0abb\2043775d_2060c301\bs.core.desktop.dll', No symbols
The first three lines make sense - the assemblies are coming out of the bin\debug folders and have “Symbols loaded”. The last one doesn't - it's been copied into the SCC somehow.
My solution (and I say *my* solution because this may cause problems) was to delete the entire shadow copy cache at c:\documents and settings\matthew\local settings\application data\assembly.
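Incidentally, if you'd rather not blow the folder away by hand, I believe the Framework SDK's gacutil will clear the download cache for you:

gacutil /cdl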
Update: I've just remembered that I was using Reflector to examine the resources in the assembly immediately before having this problem. This might have been what did it...