
Tobler.SoftwareArchitecture()

John Tobler's somewhat ordered collection of thoughts and resources mostly related to software architecture and software engineering.

  • [Tools] Four nice screen magnifiers and a screen saver

    If, like me, you sometimes set your screen resolution so high that you can't quite read the smallest print, you have a problem when you really need to read it accurately.  In the real world, you'd go find a magnifying glass.  In BitWorld, you need a screen magnifier.

    Three utilities upon which I have come to depend are Virtual Magnifying Glass, Dragnifier, and Screen Loupe 2000.  As you may deduce from its URL, Virtual Magnifying Glass is both free and Open Source.  Dragnifier is free (but not Open Source).  Screen Loupe 2000 is low-cost shareware.  It was one of the first decent screen magnifiers I encountered and was worth the shareware cost then (with Virtual Magnifying Glass and Dragnifier around now, though, I'm not sure I would be quite so quick to pay for it).  If you want to play with some .NET source code for a simple screen magnifier, you might like my fourth recommendation, Magnifier.NET; a minimal sketch of the basic capture-and-stretch idea appears at the end of this post.

    Many other screen magnifiers are available.  In fact, from your [Start] menu, check out [Programs|Accessories|Accessibility|Magnify] and you may discover you have the free Microsoft Magnify that is distributed with Windows XP (and some other versions of Windows).  Microsoft Magnify tells you straight out, in a dialog box, that it doesn't quite cut it.  If you want to dig deeper for other screen magnifiers, just type "screen magnifier" into Google (or be lazy and use this link).

    This article is not a comprehensive exposition about screen magnifiers; it's really just a way to say "Thank you!" for four of the ones I have personally found most useful. 

    If you like magnifying effects, though, I just can't help but recommend one of my absolute favorite screen savers, Remco de Korte's Bubbloids!  Endlessly fascinating!
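
    For the curious, here is a minimal, hypothetical C# sketch of the capture-and-stretch idea these tools share (it is not code from Magnifier.NET or any of the other programs above): grab the pixels around the mouse cursor with Graphics.CopyFromScreen and repaint them, scaled up, in a small always-on-top window.

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    // Hypothetical screen-magnifier sketch: every 50 ms, copy the pixels
    // around the mouse cursor and repaint them stretched to 3x size.
    public class MagnifierForm : Form
    {
        const int Zoom = 3;
        readonly Bitmap capture = new Bitmap(100, 100);
        readonly Timer timer = new Timer();

        public MagnifierForm()
        {
            ClientSize = new Size(100 * Zoom, 100 * Zoom);
            TopMost = true;          // keep the loupe above other windows
            DoubleBuffered = true;   // avoid flicker while repainting
            timer.Interval = 50;
            timer.Tick += delegate { Invalidate(); };
            timer.Start();
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            // Grab a 100x100 block of the screen centered on the cursor...
            Point p = Cursor.Position;
            using (Graphics g = Graphics.FromImage(capture))
                g.CopyFromScreen(p.X - 50, p.Y - 50, 0, 0, capture.Size);

            // ...and stretch it to fill the window's client area.
            e.Graphics.DrawImage(capture, ClientRectangle);
        }

        [STAThread]
        static void Main()
        {
            Application.Run(new MagnifierForm());
        }
    }

    Compile it as a Windows application referencing System.Drawing and System.Windows.Forms and you have a very poor man's loupe; the real programs earn their keep with smooth zooming, hot keys, and better rendering.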


  • [Tools] My Favorite Sticky Note Program: Stickies

    After using it for quite a while now, it is time to declare Stickies my absolute favorite free sticky-note program.  This is one light-weight utility that I would sorely miss, were it not available.  It does not hog my resources and pretty much stays out of my way when I do not need it.  I find that I use Stickies many times a day.  Tom Revell is an absolutely cool guy for providing this well-done little helper to the larger community.  Thanks!

  • [Software Architecture] Must read article: "The Secret Source of Google's Power"

    Count me late to the party for not seeing "The Secret Source of Google's Power" before.  There are enough interesting facts, ideas, and speculations in the article and its follow-up comments to keep you preoccupied for quite some time.  This was a fascinating read, and I think we need to consider its potential along several dimensions.  Certainly, as a software architect, I would feel like a kid in a candy store at Google!  Lucky guys!


  • [Java] Five-star Instructional Graphic Award: "Java Technology Concept Map"

    I admit to being intrigued by the Java Technology Concept Map not only for its Java-oriented content but for its very innovative graphical presentation using Concept Map techniques.  Concept Mapping derives from Mind Mapping, a technology I have personally used quite profitably for many years, not only for software design and development, but also for a lot of other things -- including general thinking.  The Java Technology Concept Map, with its interesting graphical design and dynamic Flash presentation (you can also download a PDF version) shows just how far we have come from those original simple mind maps we used to draw on whatever scratch paper was handy!   The designers of this instructional tool deserve an award, so CSharpener hereby bestows the "Five-star Instructional Graphic Award" upon them!

  • [.NET General] John Gough on "Vectors vs. Arrays"

    Below, I present a very interesting post on "Vectors vs. Arrays" in which John Gough describes some important details about what goes on under the covers of certain seemingly minor differences in array notation.  John's post is contained in the archives of the DOTNET-LANGUAGE-DEVS email list on DISCUSS.MICROSOFT.COM but is only available there to list members.  I think this post is of more general interest, so I am making it accessible here.

    A lengthier explanation is available in John's book, Compiling for the .NET Common Language Runtime.

    Among other things, John Gough is the creator of Gardens Point Component Pascal (gpcp), an Open Source compiler for an object-oriented dialect of Pascal that runs on the .NET Framework.

    NOTE: The following contains quoted material.

     Rod da Silva's Question:
    I was under the impression that there is a very real difference between
    the CLR type int[][] and int[,]. However, I am finding out that they
    both appear to be nothing more than instances of the System.Array class.
    That is, they both exhibit pass-by-reference semantics in that I can
    pass either to a method and modify one of its elements, and the change
    will persist when I return from the method. I was expecting int[][] to
    have pass-by-value semantics.

    Can someone please describe the difference between int[][] and int[,]?
    Also is there any way to make int[][] have pass-by-value (i.e.;
    valuetype) semantics?
    John Gough's Answer:

    good question.

    You are correct, both int[][] and int[,] are reference types.
    I spend some time in my "Compiling for the .NET Common
    Language Runtime" (Prentice Hall 2002) explaining what a
    compiler has to do to get value semantics for its target
    language.

    The difference between the two types can be understood as
    follows. One dimensional arrays of any type are a primitive
    for the CLR. Thus int[] is a <<reference>> to an array of
    int. The type int[][] is a reference to an array of
    references to int. It is thus a "ragged array", and if you
    want it to be normal two-D array then in CIL the initializer
    must explicitly create each component int[] array to be the
    same length. Of course in some languages the compiler may
    hide this away from the programmer. Note that it follows
    that creating an array, say int[8][8], will require a total
    of nine(!) objects to be allocated.

    The type int[,] is not a built-in type of the execution
    engine, although the JIT does need to know about it.
    Instead it is one of the possible forms of System.Array. In
    brief, the memory allocated for such an array will be in one
    glob, and requires just one object creation. The only
    downside is that you cannot access the elements of such an
    array using just the raw instruction set of the CLR. It is
    necessary to call functions of System.Array and hope that
    the JIT gets to be clever enough to inline the code.

    Finally, how to get value semantics. Reading my book may
    help you write a compiler to do the trick, but if you are
    stuck with a language that does not do it for you then you
    need to write a method for each type, such as

    int[][] CopyOf(int[][] x) {
    // allocate correctly sized collection of 1-D arrays
    // now copy the elements, then return
    }

    So that instead of saying
    SomeMethod(myArray);
    you go
    SomeMethod(CopyOf(myArray));

    Hope this helps.

    John Gough
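
    To make John's point concrete, here is a small C# sketch of my own (it is not from the list post or from the book) that allocates both kinds of array and fills in the CopyOf stub so that a callee only ever sees a deep copy of the jagged array:

    using System;

    class ArrayDemo
    {
        // A filled-in version of the CopyOf stub above: deep-copy a jagged
        // array so that a callee cannot modify the caller's elements.
        static int[][] CopyOf(int[][] x)
        {
            int[][] copy = new int[x.Length][];
            for (int i = 0; i < x.Length; i++)
            {
                copy[i] = new int[x[i].Length];
                Array.Copy(x[i], copy[i], x[i].Length);
            }
            return copy;
        }

        static void Zero(int[][] a) { a[0][0] = 0; }   // mutates its argument

        static void Main()
        {
            // int[,]: one rectangular System.Array object, one allocation.
            int[,] rectangular = new int[8, 8];

            // int[][]: an array of references plus eight row arrays, i.e.
            // the nine(!) allocations John mentions.
            int[][] jagged = new int[8][];
            for (int i = 0; i < 8; i++)
                jagged[i] = new int[8];

            jagged[0][0] = 42;
            Zero(CopyOf(jagged));                      // callee gets only a copy
            Console.WriteLine(jagged[0][0]);           // still 42
            Console.WriteLine(rectangular.Length);     // 64 elements in one glob
        }
    }

    Note that CopyOf only buys value-like behavior at the price of a full copy on every call; int[,] remains a single reference-type object either way.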

  • [General] Exploratory Data Analysis (EDA)

    I have long been fascinated by Exploratory Data Analysis (EDA), a very creative new statistical methodology that differs substantially from what most people know as statistics. 

    Most tools in the normal statistician's kit are intended to help analysts confirm the results of statistical experiments or to validate an hypothesis via statistical manipulation of pre-existing data.  We can classify these approaches as "confirmatory statistical analysis."  The "standard" confirmatory statistical techniques are only suitable if the problem under study meets  the very specific requirements and assumptions upon which parametric statistical theory is based.  Frequently, people -- including many professional statisticians who should know better -- blindly misuse the normal tools (e.g., mean and standard deviation) on data sets that do not come close to meeting the required conditions (such as having a normal distribution, etc.).  Only rarely can standard parametric statistical methods be used effectively to perform initial explorations on unknown batches of numbers.

    John W. Tukey, in his great classic text, Exploratory Data Analysis, gave us some cool tools for exploring data.  Sometimes, you end up with a bunch of data and have absolutely no idea what might be "in there."  Tukey's methods included some very interesting graphical techniques, such as "stem and leaf diagrams" and "box plots," that stand as excellent early examples of modern data visualization (a small sketch of the five-number summary behind a box plot appears at the end of this post).  I must hasten to add that many of the EDA techniques are not only effective but fun to do.  I strongly recommend EDA to absolutely anyone who must even occasionally attempt to find that elusive "something" in a batch of numbers.

    I consider it one of the canonical examples of the unfairness of the universe that Tukey's text appears to be out of print and is now somewhat difficult to find.  You can easily locate any number of derivative works but, IMNSHO, the true classics in any field should *never* be allowed to go out of print -- and Tukey's "orange book" certainly qualifies as one of those.  Find it in some library somewhere, take a look at it, and I think you will agree.  Even the format and layout of this book is creative, special, and clear.  But the techniques themselves are things of beauty, developed by that extremely rare type of statistician, one who actually tried to do real things with real numbers.

    John W. Tukey died on July 26, 2000.  He certainly deserves to be ranked as one of the most influential statisticians of the late 20th century.  Oh, and by the way, you might be interested to know that it was John W. Tukey who gave us the first published use of the term "software," in 1958.

    The immediate motive for this post is that I just discovered two nice introductory sites about EDA that I had not previously seen: Exploratory Data Analysis and Data Visualization, by the unusual Dr. Chong Ho (Alex) Yu, and the Exploratory Data Analysis section of the free online Engineering Statistics Handbook, provided by the Information Technology Laboratory (ITL) of NIST.  These resources provide excellent introductions and give the beginner a great starting point.

    Enjoy!
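
    As a taste of how simple this machinery can be, here is a small C# sketch of my own (not from Tukey's book) that computes the five-number summary behind a box plot: minimum, lower hinge, median, upper hinge, and maximum.  It uses a simple split-at-the-median rule for the hinges, a quartile-style approximation rather than Tukey's exact hinge depths.

    using System;

    // Sketch of the five-number summary that underlies a box plot.
    class FiveNumberSummary
    {
        // Median of the sorted slice data[lo..hi] (inclusive).
        static double Median(double[] data, int lo, int hi)
        {
            int n = hi - lo + 1;
            int mid = lo + n / 2;
            return (n % 2 == 1) ? data[mid] : (data[mid - 1] + data[mid]) / 2.0;
        }

        static void Main()
        {
            // A made-up batch of numbers to explore.
            double[] batch = { 12, 7, 3, 41, 15, 9, 22, 5, 18, 30, 11 };
            Array.Sort(batch);

            int n = batch.Length;
            int half = n / 2;   // lower half excludes the median when n is odd
            Console.WriteLine("min={0}  lower hinge={1}  median={2}  upper hinge={3}  max={4}",
                batch[0],
                Median(batch, 0, half - 1),
                Median(batch, 0, n - 1),
                Median(batch, n - half, n - 1),
                batch[n - 1]);
        }
    }

    Run against the batch above, it prints min=3, lower hinge=7, median=12, upper hinge=22, and max=41, which is already enough to sketch a box plot on the back of an envelope.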

  • [Security] Major Cryptographic Algorithms Broken by Quantum Bogodynamics

    It is definitely not April Fools' Day, but the article Crypto researchers abuzz over flaws will probably make you think it is.  As if all of the nasty viruses and worms and buffer overruns of late aren't enough, now MD4, MD5, HAVAL-128, RIPEMD, SHA-1, and other basic cryptographic algorithms currently in heavy production usage are under severe mathematical attack. 

    I think the only reasonable non-Occamian (Null-O) theory is that we must have recently experienced a serious rise in bogon flux density.  It's obvious (TM) that bogons and psytons have started poking their holes not only through electronic equipment but also even through basic theories and abstractions of all types.  Quantum bogodynamics has evolved into the abstract realm!  Start boning up on your quantum compudynamics or we are surely lost. Hmmmmmm?  Perhaps we're lost, anyway.

    "Caveat everybody!  She's gonna' blow!"