November 2004 - Posts

I learned my lesson with Fable, so I'll try desperately not to start a flame war of any sort here. Up front, I'm giving the game a definite thumbs up. If you are the kind of person that likes to flame, then leave now knowing that I've given your favorite thing my personal approval.

Let's start with the good. The campaign and story are pretty nice. The cinematic effect is definitely there, something I don't approve of in games most of the time. In this case the cinematics were rather short, and they appear to have answered all of the questions from the first Halo about what in the hell is actually going on in this universe. Don't expect a major story though; in all there is about 30 minutes of video (maybe someone will time that eventually). It appears in most cases that the actual game engine was used to produce the cinematic sequences. I'm a huge advocate of this process, since it generally reduces the size of the game even if it doesn't allow for as much eye candy through complex, non-real-time shaders.

Playing from both sides of the story is another great feature, even if the movement features are identical between the Arbiter and the Master Chief. Of course you get to use all of the weapons no matter which side you're on. A couple of the new weapons are even pretty nice, and if you add dual wielding then you can really do some drastic damage. Getting used to the new weapons is a short process, but for the most part just realize everything is going to take a good number of shots to take down. Nearly every enemy has energy shields now, so making use of a good pairing of weapons is almost always required (for a good run-down of the weapons, head over to GameFAQs where someone has posted a huge review of all of the weapons, relative damage, recommended threat ranges, etc...).

Movement has been sped up a bit in terms of basic land speed. The jump has been increased as well. Most of the same movement considerations from the first Halo are in place and the game still has the same feel, while at the same time having an increased level of agility. I noticed that my look sensitivity of 10 from Halo 1 has been dropped to 8, and the new 10 is fairly insane. I've managed to work my way back up to 10 and I have to say it is much closer to the look sensitivity in UT now. That's definitely a good thing, since I'm tired of getting punked in the back by a lamer while I'm trying to turn around. Now my more precise shooting abilities will take them out while they wing half their shots by my head.

We'll do 3 good paragraphs and 3 bad ;-) Not all of these are bad, just things I'm not all that happy with. While all of the new weapons are great, they feel drastically underpowered most of the time. I think this was a balancing issue, and I definitely agree that some weapons in Halo were far too powerful in multiplayer. Losing my pistol is hard though, since that was definitely my primary weapon. I loved picking off people with that weapon. It is now a closer-quarters weapon and a few weapons have stepped in to take its place. They didn't add any more grenade types, something that would have been extremely nice. The usage of the flashlight has been minimized drastically and it now lasts indefinitely. I'm not sure why they kept it in, unless it was just to add a parity feature alongside the temporary invisibility you get as the Arbiter.

I think the explanation of the story is great, but I'm not all that happy with the wrap-up. Halo in itself was an epic FPS which is something I'm not getting from Halo 2. Maybe this is the curse of the sequel. More importantly, most of Halo's intrigue after the initial month was driven by its multiplayer. I haven't gotten a good chance to play a few thousand hours of multiplayer yet, so I can't judge whether or not this game is equivalently interesting. The epic value may still be there, especially as the tournaments and ladders start to form. All in all, the basic campaign was a bit of a let-down for me.

Now for the flat-out bugs. The physics engine is better, but in many cases broken. There are cases where the environment is moving and in turn the movement drastically impairs your ability to aim and fire. I'm not sure if that was intentional or a side effect of a real physics engine in play without the proper controls to ensure realism. In general though, only accelerating bodies should apply forces that might throw off your aim. All of the moving platforms in Halo 2 are massive enough, and travel at a constant enough speed, that the aim issues shouldn't come into play. Even more odd is that it only happened to me in one location. In general, I think many of the vehicles fall prey to some poor physics as well. Apparently getting run over by a Ghost now just pushes you out of the way, and many of the flying vehicles are cumbersome even with the new boost tricks.

One more good paragraph. I've written quite a bit on AI, and I have to say that the AI in this game is pretty good. The allied unit code works well most of the time, something you don't see in many games, even if they do shoot you in the back. I'm supposed to run in damnit, I'm the Master Chief, so quit shooting me in the back! The mission guys should get shot in a few instances where they provide challenges that are nearly impossible if you've lost most of your allied group. In some cases the allies just disappear or fail to follow you, something else that could dearly stand to be fixed. The path-finding, beast aggro, cross-side fighting, and tactics make up for everything wrong with the allies. I'm still thinking about the best way to handle the battle between the brutes and elites where the two hunters come out. I played just that spot 8 times, beating it different ways and trying to work out the appropriate weapons to minimize my ammo usage and leave me with the fewest enemies to fight when the battle was over.

All in all the game is a graphical beauty, definitely a tribute to the amount of time it took in production (this is how a 4 year development process SHOULD end). I have some goodies to go along with it. I managed to produce a series of custom controls that mimic portions of the Halo 2 UI. I'll try and get them up on Project Distributor. I know that I made a Form, ListBox, and GroupBox, but I'm not sure if I finished any others. They don't allow much customization, so I'm adding the ability to change colors and simplifying the asset production code (I currently precompute the images used by the controls and need to change that over to dynamic creation at run-time based on properties). Give me a heads up if this type of control is interesting to you.

Enjoy your Halo 2 and feel free to invite me over to any gaming parties. Address and telephone number are in the resume link ;-)

Posted by Justin Rogers | 135 comment(s)

Well, apparently you aren't allowed to have an opinion on the web anymore. I got flamed by an author after posting a personal review of his book. It wasn't an objective review, and I didn't mark it as such, but I wasted a good deal of my life reading the book and then turned around and spent an extra hour writing the review, so I figured I'd put my real thoughts in there. Anyway, it seems the author had some comments.

Guess what? Authors need to learn that not everyone can write a book. I don't care how technically able you are, how smart, or how much of an industry professional. I don't care if you've been writing X for Y years where Y > Z and Z is my age... Just because you've been working on technology since before I was born doesn't mean you have the ability to produce a book that is able to capture a wide audience and instruct them in a given area. I'll throw some points to back this up.

Microsoft Windows is a great piece of software and some insanely talented developers wrote the OS. But guess who wrote the documentation? Sure as hell wasn't the people that wrote the OS. What about the CLR? Super smart people doing super smart things over there. But how many of them dare write a book about it? Adam Nathan did a great job, but I think he took more than a year writing his. What about Brad Abrams and the annotated CLR? Well, that isn't a book of explanation but rather a book of comments that was very tactfully edited. The people that really write about the CLR are the tech writers that produced the oh so complained about .NET Framework SDK Documentation. If you think it's bad now, you wouldn't want to know what it would look like if there wasn't a dedicated team of technical writers with English degrees working on it.

You see, just being an expert isn't a license to write a book. You have to take many considerations into account. You have to design content around your audience, get down off of your soap-box, and explain things at a level of detail that your readership will comprehend and gain value from. It appears Edward doesn't agree with me. I pointed out that I got nothing from the text of his book, and he pointed me to the free source code download. I already knew about the download and had perused the source before and after posting the original review, but I don't think that is important or relevant. When you buy a book, you are buying the material that you can read while you are on a bus, in your car, on a plane, while you are walking down the hall, or if nature calls on the toilet. You really aren't paying for the source code. The source code is an extra in the world of publishing. It is nice if the readers make use of it, but you want to provide everything in the text if you can. Popping between book and source is annoying, and even worse, nearly impossible when the book and source aren't logically connected.

Show of hands: if I gave you a 65k file whose name was Form1.vb and I told you the compiled program would be a rather complex regular-expression-validation GUI called ReLab, what would you do? How easy would it be to quickly find the information you needed in that file? Would you even bother trying to understand the behemoth? What if the text of the book didn't tell you about the code itself, but rather about the program and how it worked? What if they just gave you a bunch of pictures of the UI and some walk-throughs of how it would work? What would you say the target audience is when the book is filled with pictures and there is a huge backing source repository that contains almost no explanation?

You don't have to answer all of that if you don't want to, but I'm interested in what you have to say. Good or bad, wrong or right, I don't care, because this is MY opinion, but I'm interested in everyone else's opinion. I'm tired of paying 40-60 bucks for a book that doesn't stand on its own merit. If the source is really what I'm buying, then why give it away for free here: http://www.apress.com/book/supplementDownload.html?bID=213&sID=1895? Why in the hell would I buy the book if everything important is in that source code? Go ahead, download it and check it out. It isn't easy to digest by any means, and the book itself won't help you at all.

Edward is taking this as a personal attack, but everyone that knows me knows better. I buy a book a week at least. Some are great, some are mediocre, but I never, ever buy the bad books. I invalidate them during my initial review process and I rely on my professional insight to quickly spot and discredit the bad ones. I don't always take the time to give those I've spotted a shining review on my blog, but there are certain things that really get my goat and this was obviously one of them. You can't ask for only good reviews as an author. When was the last time a movie released without a single bad review somewhere on the web or published in some newspaper? But, "Oh," the actor says, "You'd like the movie better if you understood how many shots it took for that scene you didn't like and the technical difficulties behind it"... In reality, I don't care if it took them 1 shot or 50 shots, and I don't care if the author produces 1 line of code or 50 thousand lines of code. I see the end result, I see what I read, and I'm going to rely on my perusal process in the bookstore before deciding to buy. If you aren't going to give me the material in your book to enable that process, then I'm not going to buy your book, AND I'll post an honestly bad review.

Anyway, I responded to Edward's comments and put my own right after. I'm sure the comment space will get heated if you are into that. In conclusion, don't put your heart and soul into a book and then get all parental when someone doesn't like it. If you can't take the criticisms, then you shouldn't be publishing. Build on it, forget about it, discount it, do whatever you must, but don't whine and use political bullshit to try and get me to take my criticisms down.

The comments in the code-only article are fairly decent, but I dislike being extremely verbose in my commenting because then I can't see my code. Since the comments aren't extremely verbose, a little explanation of the problem is probably in order here. First, what is base N encoding, or alphabet encoding?

Most people assume that encoding into any base in some way equates to mapping a number to some digits, plus some additional characters to represent values we don't have digits for. This isn't always the case. An integer encoded as Alphabet{0,1} = 1001 = 9 decimal is identical to Alphabet{+,-} = -++- = 9 decimal. I've just changed the representation, or alphabet, but the base is still the same (aka base 2).

Explaining bases could take a few years of college courses, as you take the concepts and create increasingly abstract versions of them. In fact, bases are strange things in some theoretical maths, where concepts of groups, colors, stripes, and other words are used to describe how they work. A very simplistic view of the base is available over on MathWorld. In general though, the concept is that any base has a number of digits equal to the base number b (aka the radix), where the digits represent the values 0 through b-1. That is easy enough, and it gives us a very generic method for converting a number to any alphabet and back.

To start, we'll denote an alphabet as a char[] of digits. A digit in this sense is any character that represents the array index at which it is placed. The base of the alphabet is the length of the character array. The element at offset {0} has a value of 0, and in general the digit at index n has the value n. That's all there is to it. Any alphabet of characters can now be translated to and from an integer using this mapping table and the base.
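If you'd rather not dig through the code-only post, here's a minimal C# sketch of the conversion (my own throwaway version, not the article's code): the alphabet's length is the base, and each character's value is simply its index.

using System;
using System.Text;

public static class AlphabetEncoder {
    // Encode a non-negative value using the given alphabet. The alphabet's
    // length is the radix; the character at index n has the value n.
    public static string Encode(long value, char[] alphabet) {
        if (value == 0) return alphabet[0].ToString();
        int radix = alphabet.Length;
        StringBuilder digits = new StringBuilder();
        while (value > 0) {
            digits.Insert(0, alphabet[(int)(value % radix)]); // peel off the least significant digit
            value /= radix;
        }
        return digits.ToString();
    }

    // Reverse the process: each character's index in the alphabet is its value.
    public static long Decode(string encoded, char[] alphabet) {
        int radix = alphabet.Length;
        long value = 0;
        foreach (char digit in encoded) {
            int digitValue = Array.IndexOf(alphabet, digit);
            if (digitValue < 0) throw new ArgumentException("Character is not in the alphabet: " + digit);
            value = value * radix + digitValue;
        }
        return value;
    }
}

With the alphabet new char[] { '+', '-' }, Encode(9, alphabet) produces "-++-" and Decode hands the 9 right back, which matches the example above.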

Code-Only: Arbitrary alphabet encoding (aka BaseN encoding) for base2 through base36.

Posted by Justin Rogers | 23 comment(s)

Unfortunately I'm not talking about a Wiki that actually is artificially intelligent, summarily filling itself out and saving me gobs of time by learning off of the Google-Sphere. What I am talking about is a site focused on covering the algorithms that a first year AI student might be faced with during their coursework. Hopefully they'll get some additional material in there as well, but the initial focus is just that first year.

http://ai.squeakydolphin.com/wiki.php?pagename=AIAWiki.HomePage

If you are a .NET supporter like me, maybe you'll try and throw in your hat by providing alternate versions of some of the programs seen on the site. I have my eyes on a few of them already.

Project Distributor: Introduction to our distributed web service model
So Darren and I have put in about a month now on the Project Distributor website. We are starting to reach that critical point where the site is pretty cool, we have plenty of users, we are thinking about running out of the allowable bandwidth for the demo site, and all sorts of other things that tend to happen all at once. Now, there are some problems you can design yourself out of, and others that you really have to throw some money at. Our latest enhancements can be summed up in a short list.

  • Buy a domain name and start hosting in two places. Project Distributor.com should be up fairly soon to accompany MarkItUp.ASPXConnection.com
  • Have people host their own versions of the application. That means a big source release is in the future, and at this juncture we risk fragmentation.
  • Design away fragmentation with a series of ingenious features that will make everyone want to use the application at hand.

I'm here to talk about the last two, since Darren already bought some additional hosting for us. The concept will be to release a fairly stable version of the application so that groups can host tools, code snippets, and other source/binary releases for their teams to share. The application is very lightweight and easy to set up, so it won't require a bunch of hand holding and configuration to get up and running initially. From our standpoint we solve a number of issues at this juncture. The most obvious problem is what we classify as the Lutz Roeder use case. .NET Reflector is the key type of application we'd love to get hosted because it makes it a bit easier to find. Not that Google does a bad job; we'd just like to get a bunch of tools in one place, with some features for feedback, new releases, and some cool client tools for publishing.

Now, Lutz would put his application up and he'd whack our bandwidth. He is the prime example of someone that should be hosting their own tools, but possibly using our interface. He doesn't have to, we haven't even asked him yet in fact, but if he decides to do so, then all the better for the web application moving forward. Users such as Lutz probably want a certain level of control over their own sites as well in terms of branding and controlling access. This will only come from hosting the application yourself (and maybe some other features we'll see later).

From a security standpoint many teams will also want to host their own servers. In this manner they get control over the hardware their sources and binaries are stored on. They can accept tools up to any maximum size (instead of our imposed limits) and provide unlimited download bandwidth if they choose. Or they can take advantage of our gating mechanisms to make sure their server doesn't get overloaded with downloads and open their tools up to the public.

The only major problem with this source release is that the initial problem we were trying to solve, promoting the visibility of tools, starts to erode. You see, the more sites that host their own tools, the harder it is to find the right site with the right tools. We are trying to solve this in a number of ways. The first is allowing users of a site to store bookmarks to other projects and external resources. This is only a temporary fix, because it still doesn't provide the mass search and categorization infrastructure required to truly promote the visibility of the tools being hosted. We have to come up with a solution that brings all of the sites together, but we don't want to create just another portal or gateway site. That is boring. Now you have the background, so how will we solve the fragmentation issue?

Designing away Fragmentation
I won't lie to you, I've implemented this model several times, but I have never had a project that was capable of really showing off the feature set we are about to talk about. The concept is to unify all of the sites by allowing them to easily manage views of data from all of the sites combined. Each site owns its own content and maintains its own users, but in turn peers with other sites to obtain additional content.

Web services provide a dual feature set in this model. At the current level they allow us to generate really great client-side tools for managing, well, your tools! We have a drop-client target right now so you can drag and drop new releases to existing projects in just a few seconds. Some new tools for working with build systems to promote the source code up to the server are in the works. We natively integrate with your RSS reader and will have our own alert services in the drop client just in case you don't have one. There aren't any search or local caching features, but those are also planned for the drop client so you can background download new releases, just like Windows Update.

That doesn't solve fragmentation though, that just makes me realize how much work I have left to do. The second feature of web services lies in the ability for each site to aggregate data from the many other sites that are out there hosting the application. Remember, everything we make available at the service layer can also now be remoted. The more caching we put into the data layer, the more performant the entire process will be, and we can even tune the caching depending on whether the data layer is merging off-site contents or database contents.
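To make the aggregation idea a bit more concrete, here's a rough sketch of what the merge might look like. The ProjectInfo and IProjectPeer types below are stand-ins I made up for this posting, not the actual Project Distributor classes, and the caching is just the simplest thing that could work.

using System;
using System.Collections.Generic;

public class ProjectInfo {
    public string Name;
    public string HomeSiteUrl;   // which peer actually owns the project
}

public interface IProjectPeer {
    string Url { get; }
    ProjectInfo[] GetProjects(); // a remote web service call in the real thing
}

public class PeerAggregator {
    private class CachedList {
        public List<ProjectInfo> Projects;
        public DateTime FetchedAt;
    }

    private readonly Dictionary<string, CachedList> cache = new Dictionary<string, CachedList>();
    private readonly TimeSpan cacheWindow = TimeSpan.FromMinutes(30);

    // Merge the local project list with everything the configured peers expose.
    public List<ProjectInfo> GetAllProjects(List<ProjectInfo> local, IEnumerable<IProjectPeer> peers) {
        List<ProjectInfo> merged = new List<ProjectInfo>(local);
        foreach (IProjectPeer peer in peers) {
            CachedList entry;
            bool stale = !cache.TryGetValue(peer.Url, out entry)
                || DateTime.UtcNow - entry.FetchedAt > cacheWindow;
            if (stale) {
                // The more we cache here, the fewer cross-site calls per page view.
                entry = new CachedList();
                entry.Projects = new List<ProjectInfo>(peer.GetProjects());
                entry.FetchedAt = DateTime.UtcNow;
                cache[peer.Url] = entry;
            }
            merged.AddRange(entry.Projects);
        }
        return merged;
    }
}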

Peer Sites
I'm sure there is another name for these out there somewhere, but for the past 2 years I've called them peer sites. Each instance of Project Distributor will have a number of options for adding peers that will be aggregated and added to the local collection while users traverse the site. The first step is to get the peer sites running in a read-only mode and set up some really great options so the entire process can be controlled. This solves a number of use case scenarios for us, including the following.

  • Fragmentation can be mitigated through proper configuration. If everyone aggregates 5 or 6 sites into their peers, then we have a huge network of interconnected peers, and users can pick and choose which one they use for purposes of searching the tool network.
  • Peer connections are unidirectional or bidirectional. Access is configurable. Teams can include tools from external sites while keeping their own tools completely private. They can exist behind a DMZ or a private network.
  • Users can host their own personal tool sites in the same manner as the team sites. They can even statically configure which projects to make available. In this way you can build a collection of personal tools that you love, and have the latest information automatically updated on your machine for your perusal.

Peer sites solve plenty of visibility issues, but that is pretty much all they solve for now. We still want to enable all of the features available to the client tools. After all, the web service methods and proxy infrastructure are in place to do so much more.

Master Sites
Well, we want to solve another problem: where you edit your data. A master site is where the users, groups, projects, etc... are all hosted, but thankfully, you'll be able to log in through any site (assuming it is peered with your master site) and then edit your own projects and such. This is a remote principal context and is actually one of the cooler features associated with the peering functionality of Project Distributor. We'll be fully secure in our login and credentials handling, but unfortunately we'll still be transferring data in clear text in the short term. Maybe we'll fix that with enough pushback.

Clone Sites
A clone site is where we empower a site to act on behalf of a master site. For me, my local Project Distributor is currently cloned to the main Project Distributor site. What does this mean? Right now it means I get all of the data from PD, and that users who trust my site can log in to their Project Distributor accounts and cross-edit data. Pretty nice if you ask me. It basically means you can fully host a Project Distributor installation and never, ever have to install a database server. Users can just act on behalf of a remote server.

Configuration
This isn't a super reusable model like some of those you read about in the popular software architecture books, and that probably accounts for why master/peer/clone sites don't exist very often. The considerations for every option are heavily customized to the problem being solved, and I'm sure we'll be making modifications or updating the configuration context for a while. Right now you can independently configure your primary server type (master or clone), whether or not users can use you as a pass-through authentication and edit server, whether or not web services are enabled so peers can be limited to unidirectional-only communication, the asymmetric security credentials... man, you name it and it is in there.

For the peer section we have full and selective modes. A full peer pulls all of the data on the remote peer locally for display (in a delayed caching manner, just like you'd expect, unless you set up a scheduled pull, which is also possible). I expect most people to configure full peers because they really are easy to set up and maintain. A selective peer is where you specify the groups/projects that you want to display. This is best for a user setting up their own personal toolbox who wants to select a couple of items from many different peers.

We have an exhaustive configuration module already and we'll be continuously adding more to it. The concept is to easily modify your toolbox to your own designs without having to touch the code. If we haven't given you enough options to satisfy your needs then we'll have to make something up, because I'm just about running out ;-)

These are the basics of the model ideas I have for Project Distributor. That doesn't mean Darren doesn't have other great ideas happening as well. He has some pretty extensive UI enhancements, but I'll let him talk about those. We even have another product idea that is kind of a bolt-on for Project Distributor, but that is probably a couple of months out, putting it into next year. Unfortunately we have too many ideas for our own good right now. Better than not having any ideas, I guess. I'll try to drop some code for some of the ideas above so you can get a look at how the entire system is implemented. I have some diagrams as well, but I'm far too tired right now to add the img tags to the HTML view.

How'd you like that for an opening title? Did it grab your attention? Hell, you're reading this far so I guess it did. The book I'm focusing on here is Build Your Own .NET Language and Compiler, and please, don't click the link and then go buy it. I don't care about the 50 cents worth of referral money I'll get if you do. I wouldn't even recommend the book if I got 50 bucks of referral money (well, money talks, so maybe I would).

The book starts out with the basics of parsing and regular expressions and all that jazz. But the extent of the code is a bunch of screen shots. We are writing a parser/compiler dang it, we aren't WYSIWYGing our way through life at this point, you have to show some real frigin code. What you end up with is a bunch of screen shots of many tools for writing a compiler, but not really the code, unless of course you go grab the CD and break through all of the code without a lick of explanation from the book. God I hope the code is well documented with comments, or you just bought an issue of Compiler's Illustrated and this isn't the Swimsuit edition. I'll include some of my own links at the bottom, where I give actual code for many of these processes.

OK, so you get to see a bunch of tools, and what do you get? Well, you get a bunch of half-assed tools (sorry for the language if your kid is reading my highly technical blog... in fact, if he/she is, I could use some interns; must type 50+ WPM and be proficient in C, C++, or C#). A mathematical expression evaluator is the first. I think it is always the first. People always trivialize math. So make sure you look at all the pretty pictures and try to glean some wisdom from the text. I have a mathematical expression evaluator by the way; it's called calc.exe, and from what I can tell it has shipped since 16-bit Windows. He also makes an attempt at a regular expression workbench. You can't have enough of those (actually I'm not being sarcastic here, I always appreciate a new regex tool), but then he never writes anything or demonstrates compiler technology that uses regular expressions. Does he go into NFA/DFA technology? Well, he does talk about it for a few sentences. BNF format? Again, a few sentences here and there. But wait, another tool is what you get, and this time it is a picture of a drop-down menu with all sorts of really tantalizing names (convert from BNF to XML, display a BNF parse tree, display formatted docs, etc...). At this point use one of the pages to catch the drool coming off your lip, because that is as close as you'll get in this book to anything cool.

OK, so forget the tools. At some point he actually starts talking about real compiler technology. I think around chapter 7 maybe? I really should dig up the TOC on Amazon, but I'm only going to waste enough time on this book to finish this posting. Anyway, they start talking about the various parsing techniques. Recursive descent (RD), Top-Down, Bottom-Up... I think there are some other odd names they throw in there to mystify the reader. After reading all of the major compiler design books I shouldn't be mystified by something that could classify as a 4 Dummies book (unless it is something like Cross Dressing 4 Dummies, I could probably use that after my Halloween party)...  Anyway, they really don't do the entire process justice, and I think at some point some more tools are used, Yacc might be mentioned, and bam, back to the pictures.

At this point I want to identify the worst problem I found throughout the entire book. Apparently the author didn't have time to finish the code, so they left a bunch of exercises for the reader. Nah, nah... You don't leave the compiler as an exercise in a book on how to write a compiler. You leave bits and pieces, but not the important stuff. Going through my Knuth books, I'm actually surprised when he leaves problems as exercises that require more know-how than what has been provided in the chapter. I don't mind exercises for the reader, but there is a limit, people. Imagine getting back from Home Depot with a 300 page picture book on building a house that had a bunch of pictures of completed homes and some text noting that building the house will be left as an exercise for the reader. Doh!

At the end of the book it is apparent I'm not going to get anything of use, and then it starts talking about code generation. Oooh, something with some meat. In reality, they've been naming their nodes for the calculator in such a way that the name of the node was pretty much the name of the op code that was going to be called. They may have some QuickBasic implementation code spits as well, but I'm confused at this point (and mystified) because I've been thumbing this book for an hour. In reality the act of spitting IL is probably worth an entire book of its own (oh wait, it is: Inside Microsoft .NET IL Assembler, and you really should buy this one so I get 50 cents). That isn't fair, because that book is actually about how IL functions and not how to spit it. But I'd think one does precede the other, since eventually you're going to run out of node names to match to IL op-codes, and when opComplexOperation isn't mirrored by OpCodes.ComplexOperation I just don't know what you'll do.

How fair of a review is this? Well, I've read actual compiler books, quite a few of them. I've implemented my own parsers and compilers many times for many different circumstances. I don't think it is a hard process and I think extending the process to a more general development audience is important. There should be a relatively accessible book on writing your own .NET languages, but this book is certainly not it. I'll keep looking around; I hear there is another book focused on .NET language generation and I'll have to search it out. Maybe an O'Reilly publication? Can you get an accurate review from something in about an hour's time? Well, I read fast, the words were quite large, most of the content was entirely familiar, and only about 30% of the page material was text, so I'd hope so. Take this for what it is worth, but if I see any referral money for that book, I'll know someone is going to be laughing hysterically when they get that book in 2-3 days from Amazon. PS: I didn't and won't buy the book. I spent a couple of hours at Borders today running through two books that caught my eye when I was really looking for a great .NET Localization book. I need to dig up Michael Kaplan, since I'm sure he has written something somewhere.

Lexer/Parser/Compiler - Code and articles for different types of parsers
Lexer, Parser, Compiler, Oh My! - Postings, with code, on even more lexer/parser stuff
ftp://ftp.cs.vu.nl/pub/dick/PTAPG/BookBody.pdf - A more hard-core text on parser technologies

Think really hard for a second about how layout logic works... Oh, not that hard, you're starting to smoke. Here, how about I think about it for you and then you can point out where you have a difference of opinion... That works better for me anyway. To start, layouts come in several different formats that will help us discuss the issues at hand. Here are several different formats that I can think of off the top of my head.

  • Fixed or Explicit Layouts - This is the most rigid layout type and it involves placing the controls precisely where you want them on the form and precisely sizing them. The main problem with fixed layouts is that you start to run into trouble as soon as you allow the form to resize. Either you have to augment the explicit layout with some additional code, or simply allow for the controls to stay in their place and forget the fact that your application looks ugly now.
  • Flow Layouts - Flow layouts are a bit more flexible. This is most similar to the way text wraps or flows in either your text editor or an html page. Think of each word as a control, and pretend you are simply trying to fit as many words per line as possible. Flow layouts change the position, but not the size, of an element. In this manner, if an element fits on the line it goes there, but if it would have to be resized, even if it could be, it won't be placed there. When forms are made extremely wide, flow layouts tend to look terrible. Even worse, if the elements are all different sizes then you wind up with a jagged right hand edge. The answer here is justification, where the appropriate amount of border space is placed between elements so that all controls are flush (a minimal sketch of a basic flow pass follows this list).
  • Tabular Layouts - These are probably the most often used. They can either be versatile and allow a variable number of columns depending on the row, or fixed, where each row has the same number of columns. Some columns are given a specific size, a percentage size, or allowed to expand to fill the remaining space. The closest equivalent is the HTML table element. Tables don't allow for complex layouts without having a series of place-holder columns. Each place-holder column added allows more and more freedom in placement, but at the same time, the more you add, the closer you are to simply using pixels (pixels are indeed a tabular layout mechanism).
  • Composite Layouts - Composite layouts are based around a hierarchy of nested layout types. At the top level of an application you often have a very specific fixed layout, with perhaps some resizing occurring between the few top-level elements. Outlook is a great example with its various configurable panes or windows. Within the panes more layout occurs. You can custom configure this lower level layout to be whatever you'd like, but it is easiest to support a two or three column tabular layout. Flow layouts are also popular.
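Here's the minimal flow pass I promised above, just to pin the concept down: walk the children left to right and wrap to a new line when the next control won't fit. No justification and no resizing; a throwaway sketch rather than anything out of a real layout engine.

using System.Drawing;
using System.Windows.Forms;

public static class FlowLayoutHelper {
    public static void Flow(Control container, int padding) {
        int x = padding;
        int y = padding;
        int lineHeight = 0;
        foreach (Control child in container.Controls) {
            // Wrap if this control would run past the right edge.
            if (x > padding && x + child.Width + padding > container.ClientSize.Width) {
                x = padding;
                y += lineHeight + padding;
                lineHeight = 0;
            }
            child.Location = new Point(x, y); // position changes, size never does
            x += child.Width + padding;
            if (child.Height > lineHeight) lineHeight = child.Height;
        }
    }
}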

When does layout go from fun to work? Well, laying out things in the designer is fun, but God forbid you have to do it in code. Code requires that numerous properties be set on each control, that the control z-order be set properly, and that the programmer have a great grasp on exactly what the project looks like from a UI stand-point. Many programmers don't have this spatial visualization capability, hence the reason for sticking with the designer and forgetting about code based layouts.

A good mixture is required in my opinion. Certain things are easier to lay out in code and others much easier in the designer. Run-time UI generation is also a requirement of many applications that change dynamically based on configuration or data.

So how does that come into play during a resize? Well, someone has to write code. Explicit layouts look terrible when you resize them. They are based on precision, and changing the precise sizes and locations of elements programmatically results in some nasty issues. Round-off errors also make for poor aggregate layout decisions, and many times you can reset the form size and see a slightly different layout. Normally it is just good enough, but that doesn't fly with me. Flow and tabular layouts are a bit better; they are based on more configurable and dynamic information and so they tend to resize much better. Composite layouts are the best because they localize the layout process to specific regions, and they allow you to configure each area most appropriately so that it looks best when things change.

Layout isn't just about the big scene; there are also some small scene items that have to be taken care of. Most layout systems don't control their container; instead, their container controls them. However, controlling minimum sizes at the container level is foolhardy. You don't know how small an area can be until you understand how small the control itself can be. Each layout manager should be able to determine what a safe minimum size is, and a higher level component at the form level should aggregate this data in order to size the form appropriately. This disables scenarios where the user can screw the system.
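A quick sketch of what I mean (the interface is just something I made up for illustration): each layout region reports the smallest size it can live with, and the form aggregates those values instead of guessing at a MinimumSize.

using System.Drawing;
using System.Windows.Forms;

public interface ILayoutRegion {
    Size GetMinimumSize(); // computed from the controls the region manages
}

public static class FormSizer {
    public static void ApplyMinimum(Form form, params ILayoutRegion[] regions) {
        int width = 0, height = 0;
        foreach (ILayoutRegion region in regions) {
            Size min = region.GetMinimumSize();
            // Assuming the regions stack vertically: widths compete, heights add up.
            if (min.Width > width) width = min.Width;
            height += min.Height;
        }
        form.MinimumSize = new Size(width, height);
    }
}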

I'm going to hit one more type of layout though. I call this the bump layout and it is a very programmer-oriented approach. At the lowest level a bump layout is based around the first control placed on the form. As controls are added using bump commands, a global bounding region is updated that can then be used to set the size of the form. This is a mixture of fixed layout programming, but it is code oriented rather than designer oriented. A sample bump layout might be as follows.

Layout(firstControl, left, top, width, height); // places and sizes the anchor control in one call instead of setting each property
RightAlign(RelatedSize(firstControl, secondControl)); // sizes the second control relative to the first, returns it, then bumps it to the right
BottomAlign(RelatedSize(firstControl, thirdControl)); // same idea; you can wrap all of the sizings into a single call if you'd like

I call the above a bump layout because you bump the location of a control to an offset based on an existing control... Bump layouts have the ability to be turned easily into resizable layouts because everything is based around the first control. The first control could be sized based on the current form size, and then the rest of the layout just works correctly. You can also support FlowLayout and TableLayout quite easily by using the results of bumps. An overflow bump would put a control partially off the screen and so you can change the bump from a RightAlign to a BottomAlign. For tabular layouts you can align your columns based on the top row and then use those in bumps to generate the rest of the layout information.
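I haven't dropped the real code yet, so take this as a guess at what the helpers above could look like rather than the actual implementation. Layout registers the anchor control, the alignment calls bump relative to it, and a bounding region grows as controls are added so the form can be sized afterwards.

using System.Drawing;
using System.Windows.Forms;

public static class BumpLayout {
    private const int Gap = 8;           // spacing between bumped controls (arbitrary)
    private static Control anchor;       // the first control placed on the form
    private static Rectangle bounds;     // global bounding region of everything bumped so far

    public static Control Layout(Control control, int left, int top, int width, int height) {
        control.SetBounds(left, top, width, height); // one call instead of four property sets
        anchor = control;
        bounds = control.Bounds;
        return control;
    }

    // Make the control the same size as the reference and return it so the
    // call can nest inside an alignment bump.
    public static Control RelatedSize(Control reference, Control control) {
        control.Size = reference.Size;
        return control;
    }

    // Bump the control to the right of everything placed so far, top-aligned with the anchor.
    public static Control RightAlign(Control control) {
        control.Location = new Point(bounds.Right + Gap, anchor.Top);
        return Track(control);
    }

    // Bump the control below everything placed so far, left-aligned with the anchor.
    public static Control BottomAlign(Control control) {
        control.Location = new Point(anchor.Left, bounds.Bottom + Gap);
        return Track(control);
    }

    private static Control Track(Control control) {
        bounds = Rectangle.Union(bounds, control.Bounds);
        return control;
    }

    // The aggregate bounding region, useful for setting the form's client size.
    public static Rectangle Bounds { get { return bounds; } }
}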

I plan on putting some code behind this. There is some interest in programmatic layouts and I'm sure I could add some of my code to Whidbey and augment the existing layouts provided by the Windows Forms team. One of my more interesting layout patterns has been the radial layout, of which I'm very proud. It not only gives each control location information based on a radius and angle, but also allows the control access to the layout information so that it can properly rotate and possibly resize itself to make sure its rotated contents fit within its area. A second layout pattern, the markup layout pattern, is based around the concepts of using an HTML table format and an ID that matches the Name of specific controls. Using this layout mechanism is different from the more extensive XAML layout because it only focuses on a control's layout and as such can be easily added to an existing system.
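The radial layout code isn't posted either, but the placement math is simple enough to sketch (just the positioning, none of the rotation or resize hooks): space the children evenly around a circle centered in the container.

using System;
using System.Drawing;
using System.Windows.Forms;

public static class RadialLayoutHelper {
    public static void Arrange(Control container, int radius) {
        int count = container.Controls.Count;
        if (count == 0) return;
        Point center = new Point(container.ClientSize.Width / 2,
                                 container.ClientSize.Height / 2);
        for (int i = 0; i < count; i++) {
            double angle = 2 * Math.PI * i / count; // each control's slot around the circle
            Control child = container.Controls[i];
            // Center the control on its point along the circle.
            int x = center.X + (int)(radius * Math.Cos(angle)) - child.Width / 2;
            int y = center.Y + (int)(radius * Math.Sin(angle)) - child.Height / 2;
            child.Location = new Point(x, y);
        }
    }
}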

Posted by Justin Rogers | 3 comment(s)