## Multiplying numbers in the Middle Ages was considered difficult. The solution to this may surprise you.

Number theory is a near and dear subject to me, primarily due to its purity of formulation.  At the end of the day, you can clearly and easily test your results (in most cases), and once something is proven it stays proven indefinitely (famous conjectures like the Riemann Hypothesis remain unproven, and claimed proofs don't count until they survive verification; even Fermat's Last Theorem wasn't settled until Wiles's proof in the 1990s).

That said, enter Nicomachus.  This guy attempted to sum up all of Greek mathematics in a single volume, and he actually did a pretty good job.  However, some confusing assertions come to light.  Take the Regula Nicomachi as a prime example.  It is noted that during the Middle Ages, multiplying two numbers was considered a very difficult process.  I guess they didn't have the methods we teach our 2nd graders back then, so they had to come up with little processes to help them.  Here is one such process.

If three terms in progression are a - k, a, a + k, then (a - k)(a + k) + k^2 = a^2.

The example given asks for the value of 98 squared.  Okay, plug in using k = 2, and you get:

(98 - 2)(98 + 2) + (2*2) = (98*98)

Now, I'll be damned, but that new equation has TWO multiplications and the old one only had a single multiplication.  Seems to me the Middle Ages had some difficult times with numbers because they did too MUCH work.  But wait, let's see how this pans out:

96*100 + 4 = 98^2, and wow, multiplying by 100 is easy because of the 0's
98^2 = 9604

Hopefully you start to see how they used this process.  They simplified the multiplication by simplifying the numbers involved.  There may be more multiplications, but they are now easier to work with.  I mean, any squire back then should have been able to multiply 96 * 100 and get 9600...  I was trying to think of how to generalize this to any situation.  Basically, given any number you need to square, when does the math become simpler using the progression, and when does it become more difficult?  Take something really large, like 3456, and square that behemoth.  For us, this would take 4 single-digit multiplications and 1 large addition.  I'm not sure which processes they had back then and which they didn't, so to them the process was probably far more complex.  In our case the natural choice is k = 456, and that is still huge, so maybe we can expand the problem out and apply the rule again:

(3456*3456) = (3456 - 456)(3456+456) + (456*456), (3000)(3912) + (456*456)
(456*456) = (456 - 44)(456+44) + (44*44), (412)(500) + (44*44)
(44*44) = (44-4)(44+4) + (4*4) = 1936

(456*456) = 206000 + 1936 = 207936
(3456*3456) = 11736000 + 207936 = 11943936

Note that we reduce each multiplication to one easy term at each step, then solve the remaining k-squared term by applying another progression.  By back substitution, we eventually work out the full answer.  It may seem complicated to us because of our modern tools for solving these types of problems, but you can bet they found these equations much easier to use.
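The back-substitution process above generalizes nicely into a recursive procedure.  Here's a sketch in Python (a convenience the medieval reckoners obviously lacked): pick k so that n rounds to the nearest multiple of a power of ten, do one easy multiply, and recurse on k squared.

```python
def square(n):
    """Square n via the Regula Nicomachi: n^2 = (n - k)(n + k) + k^2,
    where k rounds n to the nearest multiple of a power of ten."""
    if n < 10:
        return n * n  # single-digit squares are table lookups
    magnitude = 10 ** (len(str(n)) - 1)
    k = n % magnitude
    if k > magnitude // 2:
        k -= magnitude  # rounding up is closer, so k goes negative
    # (n - k)(n + k) is now an "easy" multiply by a round number
    return (n - k) * (n + k) + square(abs(k))
```

For 98 this picks k = -2 and computes 100 * 96 + 2^2, and for 3456 it follows exactly the chain worked out above: 3000 * 3912, then 500 * 412, then 40 * 48.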

Posted by Justin Rogers | 2 comment(s)

## Reality Check: Chunked operations take a lot of code and are hard to get right (a thread safe chunked file writer)

I've been spending a good portion of the day writing the necessary facilities for doing a multi-threaded chunked download off the web.  I often run into situations where I want to grab a file, but I'm on Wi-Fi or the remote site is throttling the bandwidth.  The best way to overcome both of these problems is multiple connections using content ranges in HTTP 1.1.

Deciding how to overcome the bandwidth problem is the easy part.  Implementing the code required to make this a reality is a different story.  Basically, you have a nasty situation where 10-20 threads are all trying to write to a file at the same time, all at different locations.  The IO classes handle this scenario pretty well; they just aren't optimized for it.  They are optimized for file access from a single thread, so we'll have to handle all of the queuing ourselves.  I'm going to go backwards through most of the code and start with the Chunk.  This is the logical unit passed from the thread to the file writer.  We store a buffer to write out, an offset and length into the buffer, and a target location in the resulting file.

struct Chunk {
    public byte[] ChunkData;
    public int Offset;
    public int Length;
    public long FileOffset;

    public Chunk( byte[] chunkData, int offset, int length, long fileOffset ) {
        this.ChunkData = chunkData;
        this.Offset = offset;
        this.Length = length;
        this.FileOffset = fileOffset;
    }
}

Once we actually have these prepared, we just shell them off to the file writer class and use some locking to access a queue.  This is a Whidbey Queue&lt;T&gt;, so we are strongly typed and optimized.  I'm making use of a locking object instead of locking the collection itself.  Lately the MS guys have been kind of up in the air about how the lock statement should be used and what the best usage is going to be.  For that reason, I used the tried and true method of having a special object whose job in life is to be the target of a lock statement.  I actually do this a lot, since we have lots of methods to protect.

public Queue<Chunk> chunks = new Queue<Chunk>( 25 );
public object chunkLock = new object();

public void QueueFileChunk( Chunk chunk ) {
    lock ( chunkLock ) {
        chunks.Enqueue( chunk );
    }
}

The big work goes on in the Start method.  We need to tell the file writer how long the final file is going to be so we can set the length as appropriate.  If the final file will be 8 megs, then we need to allocate the 8 megs up front.  If we couldn't do this, the process wouldn't work at all, we'd instead have to write out chunks as multiple files and combine them later.  That just sucks.

public void Start( long length ) {
    lock ( writeLock ) {
        if ( writeStarted ) { return; }

        writeStarted = true;
    }

    this.finalLength = length;
    this.currentComplete = 0;

    try {
        fileStream = new FileStream( output, FileMode.CreateNew );
        fileStream.SetLength( this.finalLength );
    } catch {
        if ( fileStream != null ) {
            fileStream.Close();
            fileStream = null;
        }
        throw;
    }

    // Kick the queue-draining code over to the thread pool so Start returns.
    ThreadPool.QueueUserWorkItem( new WaitCallback( WriteFileChunks ) );
}

The method should return quickly, so we use the thread pool to handle the queue checking code.  If anything goes wrong making the file, we toss the exception back out.  If anything ruins the start process, this object becomes useless, because we set a bool flag to prevent the Start method from being called multiple times.  This is just my attempt at not designing the proper API and code-path set for handling error-like conditions ;-)  WriteFileChunks is another lesson in thread safety, and we use a couple more of those lock variables.  I wound up with a cancel lock for terminating the file writing process (not used here), we re-use the chunk lock for accessing our queue, and a progress lock so we always have a count of how much of the file has been written.

private void WriteFileChunks( object state ) {
    while ( currentComplete < finalLength ) {
        lock ( cancelLock ) {
            if ( cancel ) {
                break;
            }
        }

        Nullable<Chunk> chunk = null;
        lock ( chunkLock ) {
            if ( chunks.Count > 0 ) {
                chunk = chunks.Dequeue();
            }
        }

        if ( chunk.HasValue ) {
            fileStream.Position = chunk.Value.FileOffset;
            fileStream.Write( chunk.Value.ChunkData, chunk.Value.Offset, chunk.Value.Length );
            lock ( progressLock ) {
                this.currentComplete += chunk.Value.Length;
            }
        } else {
            // Don't spin at 100% CPU while waiting for more chunks.
            Thread.Sleep( 10 );
        }
    }

    fileStream.Flush();
    fileStream.Close();
}

Last thing we'll do is adorn the class with some properties so we can get that progress data I was talking about.

public long TotalBytes {
    get {
        return this.finalLength;
    }
}

public long WrittenBytes {
    get {
        lock ( progressLock ) {
            return this.currentComplete;
        }
    }
}

Note the code above doesn't make a complete class, nor would I even want to put this into the hands of anyone else until I've gotten through a proper API design pass.  It also doesn't help that you need another 100 lines of code or so to implement the HTTP range mechanism and bind it to this file chunking class.
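For comparison, the same pattern can be sketched in Python rather than C#: preallocate the file, let worker threads queue (data, offset) pairs, and have a single writer thread drain them.  The class and method names here are my own invention for the sketch, and a blocking queue stands in for the lock-and-poll loop above.

```python
import threading
import queue

class ChunkedFileWriter:
    """One writer thread draining chunks queued by many download threads."""

    def __init__(self, path, total_length):
        self.path = path
        self.total = total_length
        self.written = 0
        self.chunks = queue.Queue()          # thread-safe, blocking queue
        self.progress_lock = threading.Lock()

    def start(self):
        # Preallocate the file so chunks can land at arbitrary offsets.
        with open(self.path, "wb") as f:
            f.truncate(self.total)
        self.writer = threading.Thread(target=self._drain)
        self.writer.start()

    def queue_chunk(self, data, file_offset):
        self.chunks.put((data, file_offset))

    def written_bytes(self):
        with self.progress_lock:
            return self.written

    def _drain(self):
        with open(self.path, "r+b") as f:
            while self.written < self.total:
                data, offset = self.chunks.get()  # blocks; no spinning
                f.seek(offset)
                f.write(data)
                with self.progress_lock:
                    self.written += len(data)
```

Queue.get() blocks until a chunk arrives, which sidesteps the busy-wait question entirely.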

## Packing single assembly apps that have resources and some notes about the delay loader...

If you haven't read my entry on packing assemblies into a single assembly for one file deployment, then you won't get much out of this posting.  So go back and read it if you haven't, refresh your memory by reading it again if you haven't read it in a while, and if you don't care about packing everything in one assembly, then move on to the next aggregated feed you've subscribed to ;-)

I got a question from a user today about using this technique for assemblies that have resources of their own, and how resolution would work.  The user was concerned because the CLR was probing for culture-specific resource libraries.  This is completely normal, but you never realize it until you hook the AssemblyResolve event.  I mean, who would think that every time your app loaded, files you never created would actually be searched for?  Well, it happens, and as you may have noticed, a bunch of people get confused by it (read the common thread asking why loading a WinForms app over the web is so slow, while the probing logic makes requests to the web server for files that don't even exist).

To point out, the delay runner can fully handle this situation.  In general, you'll have culture-neutral resources in your own assembly.  Let the probing logic go ahead and fail, and then your dynamically loaded assembly will become the resource provider, just as would happen if you had run your assembly normally.  If you really wanted to pack all of your culture-specific resources into a single assembly as well, you could easily get away with that: simply use the techniques shown to read out those DLLs, instantiate them as dynamic assemblies, and return them from the resolver.

Personally, I never even saw this issue, since I wasn't using resources in my packaged apps.  I had always assumed that ResourceResolve would have been called instead of AssemblyResolve though.  Well, just goes to show that you never know.

As for the second note, I got some information that you might not even need a delay loader class.  I created a delay loader because the types of applications I normally package aren't my own and I don't have the original source code.  However, if you do own the source code, it might be prudent to skip the delay runner and instead embed the assembly resolving code right into the application you plan on packaging.  This has two side effects.  First, you can't instantiate any types from dependent assemblies that are packaged as resources inside the Main method itself.  If you do, the assemblies will be resolved and loaded before you get a chance to hook the AssemblyResolve event during the JIT process for Main.  If you move all of your normal Main logic into a separate function that you call after you've hooked AssemblyResolve, then it's business as usual.  The second side effect is that you now have possibly two different types of resources in your executable.  The first type will be embedded libraries, while the second type will be the actual resources used by your application.  The delay runner gets by this because the delay runner assembly ONLY contains embedded libraries and nothing more.

Anyway, figured I'd post these two useful snippets about single assembly packaging.  Enjoy!

## A quick note on security and anti-spam tactics that take advantage of human pattern matching abilities...

Okay, so the BlogX engine now has a security word.  Well, that is fine, I guess.  These tests take advantage of the human mind's ability to process patterns and make out words in a distorted image.  So I'll start by saying I've failed the recognition test 5 or 6 times on the same form before.  The whole darn process becomes guesswork as the images become more and more distorted and the spammers get more highly qualified processing software.  Eventually we won't be able to process the images ourselves, and the test will be that if the correct answer is given, the user must be a spammer.

Some issues I have with the pattern matching anti-spam measures:

• False sense of security - I remember a few years ago, while I was working at Microsoft, one of the employees there had actually written a bot that was able to process the images and submit entries.  If I recall, the entries were somehow linked to getting a small payout (possibly PayPal?), and the security mechanism was in place simply to prevent users from submitting thousands of entries and turning the small money into an actual pay-day.  Well, the false sense of security the company had in their system would have cost them dearly.
• I can't read them half the time - I actually wrote a small processing application that I will be using to post comments to Chris's blog from now on, since I couldn't read the image supplied to me.  Maybe this won't always be the case, but in the case of the word I was given, I simply couldn't read it.
• They suck for international users - These features require not only the human ability to pattern match, but also the human ability to understand a written language.  That means they suck for children who are capable of reading well-formed text but not obfuscated text, they suck for international users who might not even understand English, and they must really be a kick in the groin for users who spend 5 years learning English only to find out they can't make out the words.  So much for all that money you spent on English classes.

Anyway, in the interest of getting rid of these devices, I'll give the spammers a little start.  If they weren't using .NET and GDI+ before, they should be.  After running the code below, you still need an OCR program to pull out the words.  However, I have another piece of code that I use for non-transformed fonts (hence the wavy lines that a lot of the sites are starting to use) that involves caching a bunch of font data and super-imposing it over the text I get from something like the algorithm below.  It takes about 15 seconds unoptimized and gives you an 80% chance of getting the word right.  If you hook it up to a dictionary, it'll add a dictionary look-up to see if the word is real; the problem there is they are starting to use random letters and numbers.  The key is that they always use the same letter/number formatting, so you know where to look for numbers and where to pattern match for letters.  These, in my opinion, are completely inferior, as they let me cut my sample matching down to just numbers or letters.

using System;
using System.Drawing;
using System.Drawing.Imaging;

public class FilterWord {
    private static void Main(string[] args) {
        Image img = Image.FromFile(args[0]);

        Bitmap b = new Bitmap(img.Width, img.Height);
        b.SetResolution(img.HorizontalResolution, img.VerticalResolution);
        using ( Graphics gfx = Graphics.FromImage(b) ) {
            gfx.DrawImage(img, 0, 0);
        }

        // Clear space: anything outside the middle band of the image,
        // or not a dark grayscale pixel, becomes white.
        for ( int i = 0; i < b.Height; i++ ) {
            for ( int j = 0; j < b.Width; j++ ) {
                // Middle-band check
                if ( i > (b.Height * .35) && i < (b.Height * .7) ) {
                    Color check = b.GetPixel(j, i);
                    // Grayscale check
                    if ( check.R == check.G && check.G == check.B ) {
                        // Color range check
                        if ( check.R > 10 && check.R < 100 ) {
                            continue;
                        }
                    }
                }

                b.SetPixel(j, i, Color.White);
            }
        }

        // Clear dots: remove isolated dark pixels and fill single-pixel
        // vertical gaps, using the four orthogonal neighbors.
        for ( int i = 1; i < b.Height - 1; i++ ) {
            for ( int j = 1; j < b.Width - 1; j++ ) {
                Color up    = b.GetPixel(j, i - 1);
                Color left  = b.GetPixel(j - 1, i);
                Color mid   = b.GetPixel(j, i);
                Color right = b.GetPixel(j + 1, i);
                Color down  = b.GetPixel(j, i + 1);

                if ( mid.R < 255 ) {
                    // A dark pixel with no dark orthogonal neighbors is noise.
                    if ( up.R == 255 && left.R == 255 && right.R == 255 && down.R == 255 ) {
                        b.SetPixel(j, i, Color.White);
                    }
                } else {
                    // A white pixel with dark pixels above and below is a gap.
                    // (A left/right gap check was tried and left disabled.)
                    if ( up.R < 255 && down.R < 255 ) {
                        b.SetPixel(j, i, Color.Black);
                    }
                }
            }
        }

        b.Save(args[1], ImageFormat.Bmp);
    }
}

Posted by Justin Rogers | 11 comment(s)

## Commenting on ChrisAn's reliability posting because he BlogX'ed himself into a no comment corner

So, what is the platform to do?  If you need a five-nines reliability platform, then you allow for that type of platform to exist.  Mark enough pages to gracefully shut down an application when in a particular mode, or use the extra pages to gracefully exit a soft OOM (by soft, I mean not entirely real, because we actually have pages we are sparing).  As the platform, you have the power to allocate a special region just for this type of usage.

I think as the developer, I should also get an opportunity to mark pages for this reason.  I might mark a single page, so that I can open a new file handle and write out some save data while I'm gracefully crashing.

I guess large platform changes are out of the question though, even though CER regions were created.  Wiping your butt with tree leaves is going out of style though, so maybe we'll get some toilet paper soon enough ;-)

I think what I hit on above is actually pretty interesting.  Within the realm of the CLR, all memory allocations are controlled by the CLR itself.  It has a managed heap, its own private place to store information.  It normally controls this region, but others can walk over it using API calls.  Barring users munging the managed heap, I think the ability to mark pages for use in special circumstances would be a great idea.  Mark, say, 4K of memory that can be used to allocate enough objects to walk out of an OOM.  I pose this in two parts.

Part 1 - The CLR tear-down mode: The CLR tear-down mode is when a really solid OOM is on the table and the CLR has to gracefully shut down or recover.  In a normal OOM, the CLR can't allocate any more memory.  With the new concept of a reserved region, the CLR can allocate memory as it steps out of the OOM, possibly granting more abilities to the GC because the GC can allocate special trees for storing compaction data, or whatever it might need.

Part 2 - The Developer tear-down mode: If they give the ability to the CLR, I want it as well.  Word currently has a feature where it saves off your file every now and then in case your computer tanks on you somehow.  This could be a Word crash, the power going out, or you forgetting to save before walking away from your laptop while on battery.  The developer tear-down mode is similar, but it says the following: you are being torn down and the process is exiting; use the 4K of memory you specially allocated so you can save your settings and come back up fighting.

I'd assume the types of things you can do in this mode would be highly restricted, so that you don't just throw the machine into another OOM by calling, say, some property whose state in turn allocates a huge representation tree.  I'd be happy if the entire process were just a service of the CLR, maybe with me marking some special objects that are guaranteed to be serialized in the case of such an event.  Later, as the CLR invades the OS more and more, it could start to take advantage of the various reliability hardware that exists: taking events and signals from power devices to ensure state gets saved, or maybe even storing state in a persistent memory device if one exists on the machine.  Starts to give you more options.

Posted by Justin Rogers | 6 comment(s)

## Adding some notes on the string reversal and examining unsafe code (even though the original rules didn't allow for it)...

Austin Ehlers made the point of submitting an unsafe code block to do the string reverse.  Since he took the time, and even managed to argue a good point with me, I figured I'd give him some verbiage on my blog.  His point was that when perf matters, forget who you stomp on.  I tend to agree with this most of the time, but with the concept of playing fair still in mind, I made the following changes to his initial algorithm:

• Instead of partying on the passed in copy, I now do a string.Copy and get a private copy of the string to party on.  This makes the algorithm thread safe.
• I took out a bunch of pre-work he had done and changed it to inline.  This sped his algorithm up enough to thwart the pointer arithmetic version I'll be using as a second example.
• There are many ways to do unsafe code, so I'm throwing in a second example that uses pointer indirection.

Why wouldn't you want to use the unsafe version?  Well, it is unsafe, so you might not be able to do it within your security settings.  It is also very unsafe to party on top of an allocated string, since they are supposed to be immutable and all.  I'm going to argue that by making a private local copy of the string one problem with string immutability is solved and that is the multiple access problem.  The second problem is the hidden bits issue, and I propose we get around this because we never CHANGE the contents of the string only the ordering.  In other words, any bits that are set before we party on the string, will be set after, and the effects of us partying on the string, wouldn't have any effect to change the bits into a different state.  Brian, if you want to call bullshit on this one I'll be more than happy to hear about it.

Austin proposed the original algorithm in some comments.  I'm posting it here in a revised form where I've sped his algorithm up by about 5-6%, because he was being extra careful and I figured, what the hell, why be careful.  It isn't really less careful; we just don't do nearly as much pre-work and we get rid of a nasty div statement.  I've kept his naming conventions, as you'll see.

private unsafe static string ReverseUnsafe2( string input ) {
    string output = string.Copy( input );
    fixed ( char* sfixed = output ) {
        char* s = sfixed;
        char t;
        for ( int x = 0, y = output.Length - 1; x < y; x++, y-- ) {
            t = s[x];
            s[x] = s[y];
            s[y] = t;
        }
    }
    return output;
}

For some strange reason that I'm still trying to figure out, the results of the two algorithms are spurious in terms of performance.  Before the optimizations to the first algorithm, the pointer version always won, while the element access version always lost.  After the updates, the element access version actually wins more often than not, which defies conventional wisdom.  I'm hot to find out why this is, and I'm sure it'll be something bloggable.  Anyway, these methods are insanely fast compared to the other methods for obvious reasons (unsafe code and pointer arithmetic).  I'd labor to say, though, that Array.Reverse would probably be just as fast if we were able to avoid the extra copy when creating the string from the character array (aka, if we had direct access to the character buffer of an existing string, Array.Reverse would perform at the same speed as our pointer shuffle above, because it is doing the same work).

## Is it just me or does the TableLayoutPanel not really help all that much?

First, let me start by telling you what I thought I was going to get.  You know when you hit a nice well-formed document on an HTML page and all of the labels are off to the left and all of the textboxes are off to the right?  Well, I really, really like that.  Everything looks clean.  There are so many places you'd like the same thing.  I guess I'd call it the FormLayoutPanel: basically a two-column, multiple-row table, with the possibility of a draggable sizer bar in the middle, or at least having the right-side column auto-size while the left side stays the same (usually your labels don't need more space).

That is what I thought the tabular layout system was going to do.  Turns out it doesn't quite do that.  So what does it do?  Well, you throw it down on the form and it lets you set a number of rows and a number of columns.  These show up as *snap-lines* so you can see the bounds of the panel.  Now, the panel is just that, a panel, so it shows up as a normal control and you can set the Dock so that it fills your form.  Then you drag and drop your controls onto the form.  I started with labels.  What do they do?  Well, they AutoSize = true and snap to the upper left of their container.  Damn, I figured they'd auto-dock to fill and turn that crappy auto-size off.  So now I have to AutoSize = false all of them and Dock = DockStyle.Fill.  Jeez, I have to actually WORK!

The same goes for the stupid text-boxes, which don't default to Multiline and Dock = DockStyle.Fill.  Jeez, more WORK!  Okay, so I'm done doing that, and I run the thing and it doesn't look that bad.  However, I don't get my drag-lines to reformat the table when the form is running.  I guess that isn't too awfully bad, but it would be nice to allow the user to better customize their experience.  Setting my column styles up so the right hand side auto-resizes when the form does actually works, so I'm pretty stoked about that.  They seem to be so close to something really cool, just not quite there.  What I'd like to see:

• Intelligent control defaults when they get put into the different layout controls
• If possible, give me a FormLayoutPanel, else I'll have to make one, that allows for only two columns, and have it pre-prop the label/textboxes for me even.  That would be stellar, because it solves a common programming task.
• Give the Tabular Layout the ability to resize while the application is running.
• Allow me to drag/drop controls between layout positions.  Right now, once you lay out your control, you have to set properties to move it around.  It isn't letting me drag, so maybe this is an early revision of the designer.  Just give me the drag/drop support and I'll be much happier.

Overall a great job on this control, and I love the new LayoutEngine design.  I'll definitely be making my own layout engines for various common layouts that I tend to play with.  I like the new extender property feature set for judging where the controls go, rather than relying on the order they are added to the controls collection.  IMHO, relying on the order in which controls were added was extremely flaky and one of the primary failures in the first Windows Forms release.

Posted by Justin Rogers | 1 comment(s)

## I applaud the NetworkInformation namespace, however, where is my IsNetworkAvailable property?

With regard to the recent postings from Microsoft employees about productivity versus performance and other such nonsense, I find it strange that they would implement such a complex set of classes and interfaces and not provide an easy-to-use API for doing what everyone wanted the NetworkInformation stuff for anyway: determining if the network is available.  Now, I found some code that does what I want; it involves enumerating adapters and finding out if there is a non-loopback adapter with OperationalStatus.Up set on it.  However, I'm thinking the real process is actually a bit more complicated.

class NetworkInterfaceTools {
    public static void PrintInterfaces() {
        NetworkInterface[] interfaces = NetworkInterface.GetAllNetworkInterfaces();
        foreach ( NetworkInterface iface in interfaces ) {
            Console.WriteLine( "Adapter Name: {0}", iface.Name );
            Console.WriteLine( "Description:  {0}", iface.Description );
            Console.WriteLine( "Status:       {0}", iface.OperationalStatus.ToString() );
            Console.WriteLine( "Type:         {0}", iface.Type.ToString() );
            Console.WriteLine();
        }
    }
}

The above is just an example of some code you'll need to determine if the network is up.  You could argue that I should just open a web request to a known site, but I argue that isn't a valid test of whether or not the network is up, but rather a test of another much more specific status.  To drive home why I want a simple property I'll state the following:

• KISS - There should be a simple property that does the work for me of enumerating adapters and determining if the network is up.  I shouldn't have to determine network availability using a bunch of heuristics.  This is a piece of information that a lot of programmers want to use in their applications.
• Security - Currently, you have to have a sufficient security level to get network information.  There is Full access, Read access, and No access.  That isn't a great deal of granularity.  I'm not inclined to give an application running on the web even Read access to my network information, since that could potentially open me up to a great deal of possible attacks (aka, they know enough information to pick the right time to launch an attack against my network layer).  But I still want that application to be able to determine network availability.  And just the fact that the program was loaded from a web page is not enough to determine network availability, since the user may have launched the app on a spurious connection (Wi-Fi) and gone off and on.
• Network Availability Changes - NetworkInformation has a new NetworkChange class that allows you to hook an event and determine when interfaces change their availability modes.  If this is protected by the security settings, then it doesn't really do me all that much good.  However, having the property, I could set up a system to check the property on intervals and not need the instant availability updates provided by the event model.  This would enable handy little status indicators in apps that use the web, but don't have NetworkInformation permissions.

It is still early enough in the product cycle for one of the Net Classes guys to pick this up and add it into the API, and that is why I'm tossing this out there.  Network availability is a piece of information that in today's world is classified as integral to the proper functionality of many applications.  Hiding it behind a bunch of other adapter information isn't very cool in my book, nor is giving an application access to my network information, just so they can find out if the network is available.
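For what it's worth, the property I'm asking for reduces to a one-line filter once you have the adapter list.  A sketch in Python over plain (name, type, status) tuples, which stand in for whatever adapter enumeration API the platform actually hands you:

```python
def is_network_available(interfaces):
    """True if any non-loopback interface reports an Up status.

    interfaces: iterable of (name, interface_type, status) tuples,
    a stand-in for a real adapter enumeration API.
    """
    return any(status == "Up" and interface_type != "Loopback"
               for name, interface_type, status in interfaces)
```

The point is precisely that nobody should have to write even this much, or hold the permissions the enumeration itself requires, just to get a yes/no answer.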

Posted by Justin Rogers | 8 comment(s)

## MTU Investigator using Whidbey and the Ping class

I previously posted a TraceRoute utility; this time I figured I'd do a little MTU investigator.  In most cases your MTU is going to be throttled by your local box or your DSL/router unit, so adding this to the TraceRoute utility really isn't that good of an idea, since it takes a good deal of time to run an MTU ping.  I guess you could have the algorithm start at a high number (say 1500), and then make revisions up or down depending on the result until it finds the *threshold*.  This type of dual-direction algorithm would let you find the MTU very quickly.  I went the easy route though.
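That dual-direction idea is really just a binary search over payload sizes.  A quick sketch, where ping_fits is a stand-in for sending a don't-fragment ping and reporting whether a payload of the given size made it through:

```python
def find_mtu(ping_fits, low=68, high=1500):
    """Binary-search the largest size for which ping_fits(size) is True.

    ping_fits: callable taking a payload size and returning True if a
    don't-fragment probe of that size succeeds (a stand-in for a real
    ICMP echo with the DF bit set).
    """
    while low < high:
        mid = (low + high + 1) // 2  # bias upward so the loop terminates
        if ping_fits(mid):
            low = mid       # mid fits; the threshold is at or above it
        else:
            high = mid - 1  # too big; the threshold is below mid
    return low
```

Against a link that drops anything over, say, 1472 bytes of payload, this converges in about ten probes instead of hundreds of one-byte steps.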

Strangely enough, Whidbey still doesn't have any cool graphing controls built into Windows Forms, else I'd toss something graphical out.  I do have one more Ping utility planned though, and it focuses on a common ping scenario, so stay tuned.

class MTUPing {
    private static Ping pinger = new Ping();

    public static int PingForMTU( int startSize, int incSize, IPAddress endPoint ) {
        if ( incSize < 0 ) { incSize = -incSize; }
        if ( incSize == 0 ) { incSize = 10; }

        while ( true ) {
            PingReply reply = pinger.Send( endPoint, new byte[startSize], 5000, new PingOptions( 128, true ) );
            if ( reply.Status == IPStatus.PacketTooBig ) {
                // Back off one increment, then creep up a byte at a time.
                int reviseSize = startSize;
                startSize -= incSize;

                while ( startSize < reviseSize ) {
                    reply = pinger.Send( endPoint, new byte[startSize + 1], 5000, new PingOptions( 128, true ) );
                    if ( reply.Status == IPStatus.PacketTooBig ) { break; }

                    startSize++;
                }
                break;
            }
            startSize += incSize;
        }
        return startSize;
    }
}

Posted by Justin Rogers | 9 comment(s)

## [Terrarium] What impact will the up-coming source release have on the community?

I was asked this question which of course was followed by an hour long discussion of the impact of a source release on just about any game.  I want to try and keep things short (less than an hour), so I'll focus on a couple of topics:

• Can a user play with a modified client?
• Are you safe from users using modified clients?
• Is there any global game-play value remaining?
• What can we do as a community once we get our hands on the source?

First things first: yes, a user can modify a client and play.  To tell you the truth, I've often connected and played on the global EcoSystem in order to test out new game features that I've been adding to the source.  We have some marginal protections against this type of game-play hack; most make use of a statistical equation to guarantee that at least some large portion of the EcoSystem is not being played on corrupt clients.  The source release ups this number considerably, as people will want to make small changes.  It will take some time to overcome various other protections that were built into the game to ensure a safe client, but an astute programmer won't have much trouble identifying the code blocks.

Are you still safe, even with modified clients?  Heck yeah, we did a great job making sure you were safe.  The user on the other end may be hacked, and his EcoSystem can be all screwed up, but you still have the experience available to you on your machine.  This is further protected by having your own personal IL verifier running over all incoming assemblies, network level anti-flood protection, and communications verifiers for all P2P interactions.  You are definitely safe.

There may still be some gameplay exploits available for the hacker to take advantage of.  In fact, the user now has full control over teleportation, whether or not they allow specific creatures to run on their machines, and a large number of other items.  They can effectively become incubation chambers capable of only spitting out more and more teleported versions of their animals.  We have some constraints in place to prevent too much of this, but they are generous towards normal playability, making them vulnerable to general attack.  I think this spells the general death of global EcoSystem play (don't shoot me, this is a general observation), since the possibility of an equal playing ground won't exist.

That means once the source hits the shelves, we have to make some interesting changes as a community.  Some of them I've already gotten prepared for you guys (check out my Terrarium category for more information), but you are going to have to keep apprised of the community movements in order to take full advantage of them.  The main changes are going to be with the new community edition Terrarium server.  I've detailed many of its features, but it overcomes hacked clients through security channels, allowing friends to play with friends (aka, a true trust relationship).  There is also the ability of the server to interact with clients through a control channel so competitions can be organized more easily.  In prior versions of the Terrarium, running pre-canned comps was tough, and we had to install a lot of creatures onto a lot of different machines, while the new server allows us to set up competitions and connect to a farm of clients.  Again, a true trust relationship will have to be created here, and someone will need to supply a farm of machines.  Time for the community to step up, or the Terrarium will just be another Starter Kit demo app with limited playability.

Posted by Justin Rogers | 1 comment(s)