May 2004 - Posts
My friend Darrell writes another Test Driven Development (TDD) success story about how he used NUnit to test some new code. The beautiful thing is that he didn't have to step through a debugger to do it. In my own development I rarely use the debugger, and as someone mentioned in Darrell's comments, when it gets to be time for the debugger, something is seriously wrong.
A couple of days ago I implemented a more secure way to deal with passwords in the application and the database. I was pretty sure I had set up some code previously to make this as painless as possible, and using the existing unit tests (plus some new ones) I was able to make sure nothing else was broken as I completed my changes. It still took a few iterations to be sure, but in the end all lights were Green. After the changes were implemented in the library code, everyone else who used the new libraries reported no errors or problems!
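To make that concrete, here is a minimal sketch of the kind of NUnit tests that guard a change like this. PasswordHelper and its salted SHA-1 scheme are invented stand-ins; the actual library code isn't shown here:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using NUnit.Framework;

// Hypothetical stand-in for the real library code: a salted SHA-1 hash.
public class PasswordHelper
{
    public static string Hash(string password, string salt)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(salt + password);
        byte[] digest = new SHA1Managed().ComputeHash(bytes);
        return Convert.ToBase64String(digest);
    }

    public static bool Verify(string candidate, string salt, string hash)
    {
        return Hash(candidate, salt) == hash;
    }
}

[TestFixture]
public class PasswordHelperTests
{
    [Test]
    public void CorrectPasswordVerifies()
    {
        string hash = PasswordHelper.Hash("s3cret", "salt");
        Assert.IsTrue(PasswordHelper.Verify("s3cret", "salt", hash));
    }

    [Test]
    public void WrongPasswordFails()
    {
        string hash = PasswordHelper.Hash("s3cret", "salt");
        Assert.IsFalse(PasswordHelper.Verify("wr0ng", "salt", hash));
    }

    [Test]
    public void HashIsNotThePlainTextPassword()
    {
        // The point of the change: the raw password is never stored.
        Assert.IsFalse(PasswordHelper.Hash("s3cret", "salt") == "s3cret");
    }
}
```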
Using unit tests and careful, deliberate coding, our team has seen fewer than a handful of bugs reported back from QA, even after a major release of code. If you haven't done so already, please introduce TDD into your daily coding routine. You'll be glad you did!
Benjamin blogs about Jim Newkirk's excellent talk on Test Driven Development this past week at TechEd. In particular, Jim talked about his use of the new unit testing features of the Visual Studio 2005 Team System.
The most interesting items, which Benjamin summarizes, are these:
The Two Tenets of Test Driven Development:
- Never write a single line of code unless you have a failing unit test. The goal is to take requirements and express them as tests
- Eliminate duplication
How to do TDD
Jim starts by blocking out 4 - 8 hour sessions of development. He spends 15 - 20 minutes at the start of each session thinking about what he is going to do and brainstorming a list of unit tests.
A key part is not to get hung up on completeness; you can always add more tests later. The purpose of the tests is to describe the requirements for completion.
The flow of a TDD session: Red, Green, Refactor
The process is:
- Start by writing a test for a new capability
- Fix any compile errors
- Run the test and see it fail
- Write the code to make the test pass
- Refactor as needed (clean up any duplication)
The purpose is about how to use the functionality, not how to implement it! The process allows you to build confidence through having a set of tests that pass.
The most successful way to test is to do it before development. Writing the tests first forces you to think up front about how the code will be used and how it can be tested.
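Here is a minimal sketch of one turn of that loop with NUnit. SimpleStack and its test are invented for illustration:

```csharp
using NUnit.Framework;

// Step 1 (Red): write the test for the new capability first. When
// SimpleStack doesn't exist yet, this won't even compile; add just
// enough code to compile, run the test, and watch it fail.
[TestFixture]
public class SimpleStackTests
{
    [Test]
    public void PushThenPopReturnsSameItem()
    {
        SimpleStack stack = new SimpleStack();
        stack.Push(42);
        Assert.AreEqual(42, stack.Pop());
    }
}

// Step 2 (Green): write the simplest code that makes the test pass.
// Step 3 (Refactor): clean up any duplication while staying green.
public class SimpleStack
{
    private System.Collections.ArrayList items = new System.Collections.ArrayList();

    public void Push(object item)
    {
        items.Add(item);
    }

    public object Pop()
    {
        object top = items[items.Count - 1];
        items.RemoveAt(items.Count - 1);
        return top;
    }
}
```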
It's always good to have a quick refresher of basic TDD operation and thinking.
One of the coolest things about the new unit testing system is this:
... writing a class name followed by a method name that didn't exist yet. After compiling, Jim used a 'smart tag' to choose to create the method stub inside the target class. It wrote this stub and had a 'NotImplementedException' inside it. This is functionality similar to Eclipse and is good to see.
That's great! This can only help .Net developers use TDD more by following the “write the tests first” rule. Very exciting!
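Something like this, to reconstruct the idea (my example, not the code from Jim's demo):

```csharp
using System;

public class Calculator
{
    // Stub generated by the smart tag after a test referenced
    // Calculator.Add before it existed.
    public int Add(int x, int y)
    {
        throw new NotImplementedException();
    }
}
```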
A pointer to a great early DM post by Brian Harry (who has started blogging!) explaining resource management, why deterministic finalization is missing, and the beginnings of the CLR. Brian helped lots of people in the early CLR days (as Sam points out) and is now the creator of the Visual Studio Team System source control.
Some more changes! John Lam mentioned today that he has left Wintellect and joined ObjectSharp in Toronto (along with Marcie Robillard aka DataGrid Girl). Best wishes to all of you in your new endeavors!
Anil has a great summary of an article by David Wheeler on the principle of least privilege. I am listing some of the main points below.
The examples in the article are *nix/Linux focused but the concepts are relevant whatever OS you are running.
"One of the most important ways to secure programs, in spite of these bugs, is to minimize privileges. A privilege is simply permission to do something that not everyone is allowed to do. On a UNIX-like system, having the privileges of the "root" user, of another user, or being a member of a group are some of the most common kinds of privileges. Some systems let you give privileges to read or write a specific file. But no matter what, to minimize privileges:
- Give a privilege to only the parts of the program needing it
- Grant only the specific privileges that part absolutely requires
- Limit the time those privileges are active or can be activated to the absolute minimum
These are really goals, not hard absolutes. Your infrastructure (such as your operating system or virtual machine) may not make this easy to do precisely, or the effort to do it precisely may be so complicated that you'll introduce more bugs trying to do it precisely. But the closer you get to these goals, the less likely it will be that bugs will cause a security problem. Even if a bug causes a security problem, the problems it causes are likely to be less severe. And if you can ensure that only a tiny part of the program has special privileges, you can spend a lot of extra time making sure that one part resists attacks."
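One small way to act on those goals in a .NET program is to make it notice what it is running as. A sketch of mine (the article's own examples are *nix-focused):

```csharp
using System;
using System.Security.Principal;

public class PrivilegeCheck
{
    public static void Main()
    {
        // The goal is for the program to run correctly *without*
        // membership in the Administrators group.
        WindowsPrincipal user =
            new WindowsPrincipal(WindowsIdentity.GetCurrent());

        if (user.IsInRole(WindowsBuiltInRole.Administrator))
        {
            Console.WriteLine("Warning: running as Administrator; " +
                "most of this program should not need that privilege.");
        }
        else
        {
            Console.WriteLine("Running as a normal user.");
        }
    }
}
```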
This topic is particularly relevant to me this week, partly because of my upcoming WinDev talk on this very subject. I have also been helping a Microsoft Research project that asks these questions: Why is the administrative privilege necessary in a Windows user's daily activities? What can be done to identify unnecessary dependencies on the Administrator privilege?
What I have seen is that software promoting these dependencies does so primarily in one or both of these ways:
1. Registry updates that require administrator privileges
2. File updates in areas that require administrator privileges
The key, I believe, rests with the developer. Most developers run as Administrator on their machines and typically never notice these problems while developing and testing. By learning to run as non-Administrators, like their users, they could find and solve these problems much more quickly.
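Fixing both kinds of dependency is often a small change: write settings under HKEY_CURRENT_USER instead of HKEY_LOCAL_MACHINE, and write files under the user's profile instead of Program Files. A sketch, with "ExampleApp" as a made-up name:

```csharp
using System;
using Microsoft.Win32;

public class PerUserSettings
{
    public static void Main()
    {
        // HKEY_CURRENT_USER is writable by a normal user;
        // HKEY_LOCAL_MACHINE generally is not.
        RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\ExampleApp");
        key.SetValue("LastRun", DateTime.Now.ToString());
        key.Close();

        // Likewise, put data files under the user's profile
        // (e.g. Application Data) instead of Program Files.
        string dataDir = Environment.GetFolderPath(
            Environment.SpecialFolder.ApplicationData);
        Console.WriteLine("Per-user data belongs under: " + dataDir);
    }
}
```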
One "complaint" I hear, of course, is that the tools themselves force developers to run as Administrators, but I have seen better tools coming out that no longer have this requirement. Even if they do, there are ways to work around this to continue to be productive.
More to follow ...
Benjamin Mitchell has a great post summarizing Clemens Vasters' talk today on managing state across layers.
I found this item most interesting:
Services shouldn't share databases
One of the gems I picked up from the talk was that we shouldn't necessarily tightly couple everything at the database layer by putting it all in one place. Sometimes this is done for speed, but that benefit may disappear once you put the database in a cluster.
Benjamin quoted Martin Fowler, who posted about this today:
The recent rise of Service Oriented Architecture seems to mean very different things to different people, but one plausible thread is a rise of autonomous applications with their own ApplicationDatabase that communicate through service interfaces - effectively replacing shared database integration with rpc or messaging based integration. I'm very sympathetic to this view, particularly favoring integration through messaging - which is why I encouraged the development of EIP. In this view of the world the integration database is no longer the default assumption.
Since I work daily on the development of middleware applications, all using central databases and all more-or-less tightly coupled, I found this very intriguing. You could say it was my “Aha” moment! I have been reading the SO literature for a little while, and I have been thinking of it as Web Services, even though I “know” it's not just Web Services. But this made a lot of sense to me, along with the rest of Benjamin's post (and Clemens' helpful background reading for the talk).
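To make the contrast concrete, here's a rough sketch of replacing a shared-database write with a message send via System.Messaging. The queue name and payload are invented, and MSMQ must be installed for this to run:

```csharp
using System;
using System.Messaging;

public class OrderPublisher
{
    public static void Main()
    {
        // Instead of writing directly into another service's database,
        // hand it a message and let it update its own ApplicationDatabase.
        string path = @".\private$\orders";
        if (!MessageQueue.Exists(path))
            MessageQueue.Create(path);

        MessageQueue queue = new MessageQueue(path);
        queue.Send("order 1234 accepted", "OrderAccepted");
        queue.Close();
    }
}
```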
I don't know yet where this will be used, at least with my own clients, but I can see the possibilities for the changes that will come in the future.
One of the announcements today was the release of the SQL Server 2000 Best Practices Analyzer:
Microsoft SQL Server Best Practices Analyzer is a database management tool that lets you verify the implementation of common Best Practices. These best practices typically relate to the usage and administration aspects of SQL Server databases and ensure that your SQL Servers are managed and operated well.
Download it here.
Notice the requirements:
Microsoft Internet Explorer 6.0 or later
Microsoft .Net Framework 1.1
Nice to see .Net as a requirement for SQL tools ...
Michele Leroux Bustamante also has a great post summarizing Don Box's and Doug Purdy's presentation “Service Orientation and the Developer”.
Aaron posts his notes from Richard Turner's talk on Service Orientation (SO -- notice the “A” has been dropped now) Prescriptive Guidance.
Some key points I noticed (comments in red are mine):
Why is SO important?
- Services are meant to last. Microsoft is betting the farm on services being everywhere. Indigo is one of the most fundamental rewrites in a long time.
- A common tongue is needed for services to interact: boundary, schema, contract, and policy.
- An SO environment extends only as far as we agree on the expression of the boundary.
- SO systems that want the broadest possible interop will build on the WS-I protocol family.
Which technology should I use and where?
At the service boundary:
- Build services using ASMX (default). Use ASMX at boundaries; a minimal sketch follows this list. (No surprises here)
- Components should stay within your service boundaries.
- Closely aligned to SO tenets.
- Closest alignment with Indigo.
- Great interop support.
- Use WSE for advanced Web services (WS-* protocols)
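For reference, the ASMX default is about this simple -- a minimal boundary-service sketch with invented names, not code from the talk:

```csharp
// OrderService.asmx code-behind; the .asmx page points ASP.NET at this class.
using System.Web.Services;

[WebService(Namespace = "http://example.org/orders")]
public class OrderService : WebService
{
    // Each [WebMethod] becomes an operation at the service boundary,
    // described by the WSDL that ASMX generates.
    [WebMethod]
    public string GetOrderStatus(int orderId)
    {
        return "pending"; // stub
    }
}
```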
Inside the service boundary:
- Consider using ASMX within the service boundary too! (Interesting!)
- Use ES if you need its rich services, want to re-use/extend existing ES/COM+ components, or want a path to "Indigo" from ES.
- (Notice: Remoting not mentioned ...)
- Use System.Messaging if you need asynchronous messaging, reliable messaging and queuing, or "fire and forget" messaging. MSMQ is not going away. It's going to be the underlying engine in Indigo.
- The System.Messaging API and namespace does not move forward to Indigo. Indigo natively supports queuing semantics.
- Use remoting where it's absolutely appropriate. Only use it within the service. Great for getting close to the wire, in-proc cross-AppDomain communication, and handling custom wire protocols.
- Remoting is not the fastest for cross machine access, DCOM is the absolute fastest. (This is definitely true. Yet, ES can be difficult (today), so there may need to be trade-offs until Indigo)
- Avoid exposing remoted components at service boundaries.
- Remoting is an object technology, not aligned with SO principles.
- Limited interop (e.g., only does SOAP rpc/encoded style).
- Limited future migration to Indigo.
- Remoting is not going away. It is moving forward, but there will be better solutions in Indigo.
- ASMX - avoid using low-level extensibility such as the HTTP context model.
- ES - avoid passing objrefs inside of ES (Yes!)
- Native COM+ and MSMQ - use System.EnterpriseServices and System.Messaging; do not use the native COM+ and MSMQ APIs (see the sketch after this list)
- Remoting - avoid low-level extensibility such as sinks and channels
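To illustrate the managed-over-native guidance, here's a minimal System.EnterpriseServices sketch (invented names; registration details such as strong-naming the assembly are omitted):

```csharp
using System.EnterpriseServices;

// Deriving from ServicedComponent and declaring COM+ services with
// attributes replaces calls to the native COM+ APIs.
[Transaction(TransactionOption.Required)]
public class OrderComponent : ServicedComponent
{
    // [AutoComplete] votes to commit the declarative transaction if
    // the method returns normally, and to abort if it throws.
    [AutoComplete]
    public void SaveOrder(int orderId)
    {
        // Database work would run inside the COM+ transaction here.
    }
}
```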
From the RSS feed:
Welcome to Pluralsight - founded by some of the top technical talents in .NET today, Pluralsight focuses on providing in-depth, clear, and thoroughly researched technical content. We are available to develop custom technical content in the form of presentations, training material, or whitepapers. For more information see our content, training, and community pages.
As mentioned before, Pluralsight is Keith Brown, Fritz Onion, and Aaron Skonnard. Best wishes to all for a very successful endeavor!