Lance's Whiteboard

Random scribbling about C#, Javascript, Web Development, Architecture, and anything else that pops into my mind.

My past year as an HTML5/Javascript developer… Part 1

I’ve been fairly mum on work for the past year, but I figured I should do a brain-dump of some of the things I have learned and experienced as an HTML5/Javascript developer.

Before we begin…

(feel free to skip this section if you don’t care about my disclaimer or full disclosure)

I feel like I should start by explaining a bit of my perspective.  I’m not a traditional front-end web developer.  I come from a long history of server-side rendered web development primarily using Microsoft technologies such as ASP classic, ASP.NET, et al.

Having said that, I have spent over 15 years jockeying around HTML, images, CSS, and yes, lots of Javascript.  Back in the days when IE 4/5 won the browser wars, I was fairly successful building Javascript frameworks to power enterprise apps and extranet sites of varying sorts.  Not that Javascript/HTML development was great back then (it was quite bad, actually), but everything got much worse once Microsoft lost their way and the browser wars erupted anew.  During those bleak years I dabbled here and there with client-side dev, but overall I was jaded and treated Javascript as nothing more than glue and spackling to smooth the cracks in my server-side rendering.  Frankly, before this past year I hadn't even really thought of Javascript as a first-class language.

So, now having built a fairly complex real-time stock trading app targeting only modern browsers using the latest HTML5 & Node.js features (and no Microsoft technology in sight), I want to break it down a bit and think about what I have seen & done.

State of HTML5

It amazes me every day how far browsers and client-side webdev have come since the late 90’s, when I was more involved with startups.  The problem is that with new features and capabilities come new compatibility problems.

The good:

  • Enhanced Selectors
  • session/localStorage
  • Canvas
  • SVG
  • structural elements
  • CSS transformations
  • History navigation
  • WebSockets
  • WebWorkers
  • video element (*codec support notwithstanding)
  • audio element (*codec support notwithstanding)
  • Html Editing
  • MathML (*a cautionary ‘good’ at this point)
  • WebFonts

The bad:

  • new form fields and validation
  • File API
  • Drag and Drop (*bad mainly due to its mobile challenges)
  • Full Screen (*mainly IE’s fault, but tablets also fail)
  • Server-sent events
  • Security sandboxing sub-apps
  • WebRTC
  • WebCam (*very experimental in most browsers)

Sure, there are a handful of really great HTML5 technologies that are functionally equivalent across Mozilla, WebKit, and IE10, but the devil is in the details.  If you are building fairly simplistic or single-purpose apps, you are fine, but as soon as you start trying to build something ‘interesting’ for a large audience, you will quickly find that important performance issues still arise.  Add the shift to mobile and the plethora of new platforms such as televisions and gaming devices, and the list of ‘good’ starts to get much shorter and more muddled.

The holy grail of build-once-use-everywhere is still elusive, with the need to adjust form factors for mobile devices and pare down payloads to accommodate meager bandwidth and processors.  Sure, the very latest tablets and phones are now quad-core or better with even more RAM, but their performance is still network- and IO-bound for the most part.  This, plus the varying CPU performance, leads to major issues with race conditions and other fun device-profile issues.
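
In practice, this means you end up feature-detecting everything at startup instead of trusting user-agent strings.  Here is a simplified sketch of the kind of capability checks we run; plain Javascript with no library assumed, and the fallback branch is purely illustrative:

    // Detect features, never browser versions.
    var support = {
        localStorage: (function () {
            try {
                // Some browsers throw if storage is disabled or in private mode.
                return 'localStorage' in window && window.localStorage !== null;
            } catch (e) {
                return false;
            }
        })(),
        webSockets: 'WebSocket' in window,
        webWorkers: 'Worker' in window,
        canvas: (function () {
            var el = document.createElement('canvas');
            return !!(el.getContext && el.getContext('2d'));
        })(),
        history: !!(window.history && window.history.pushState)
    };

    if (!support.webSockets) {
        // e.g. fall back to long-polling for the real-time data stream
    }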

The pitfalls of GDD

Over the years, I have sampled approaches to software development including RAD, XP, Waterfall, Agile, Scrum, SOA, and TDD, and have recently started looking more seriously at the BDD/DDD(D) camps.   However, throughout my forays into this potpourri of acronyms and metaphors for programming, I continue to find myself falling back on the crutch of GDD, the least Agile and productive approach of all.

Yes, I’m referring to none other than the ubiquitous Google-Driven Development (GDD). 

It’s like when I first realized how dependent upon Intellisense I had become, except that GDD is far worse: it is simultaneously more subtle, insidious, and disruptive.    At least Intellisense tries (yet arguably fails) to help you get things done faster, but GDD, despite its popularity and benign appearance, is truly the greatest time-sucking vortex in the universe.

GDD dulls the developer’s mind and lulls us into complacency about solving problems ourselves, since by simply Googling you can let others provide the answer for you.  GDD is like dark matter obscuring developers from grokking quality software engineering.  It is the elusive Higgs boson of development, driving us toward anti-productivity and mediocrity.  It is the reason why aspiring developers’ growth often stalls mid-career and becomes stunted, eventually causing them to revert to (MS-style) demoware-quality development rather than maturing into true software engineers, craftsmen, and thought-leaders of the industry.  GDD is more subtle than the common cold and a greater pandemic than H1N1, and it must be eradicated. [1]

Inevitably, if we continue to abuse GDD, we may one day face a future similar to that depicted in the fictional (yet highly plausible) movie Idiocracy, where our development communities are filled with below-average developers and hacks, ruled by a few barely-average people and their well-SEO’d code repositories.

Diagnosis

I implore you to perform this self-diagnostic test today to see if you too have acquired the GDD addiction:

Ban yourself from Google (and/or Bing) for one day.  If you find that you struggle to produce code without searching for 3rd-party libraries and open source, notice a sense of anxiety at being unable to find blog code samples, feel concerned that you cannot validate your ideas against forum posts and sample apps, or cannot make coding progress without seeking out online APIs and reference sheets, then you too may suffer from Google-Driven Development. [2]

Remedy

GDD is difficult to completely eradicate from our lives; however, here is a proven 7-step approach that helps reduce its harmful effects:

  1. Blank Browser - Change your browser start-page to about:blank (or equivalent) rather than a search page.
  2. Cleansing Period - Perform a 1 week cleansing period of total search abstinence.
  3. Moderation - Afterward, slowly reintroduce Google and other search tools with extreme moderation.
  4. Reward Abstinence - Reward yourself each time you successfully complete a task without search that you normally would have searched for. (Note: don't use GDD as a reward for GDD abstinence.)
  5. Cheat Hour - Schedule one timed GDD Cheat Hour each week where you allow yourself to indulge in unadulterated hard-core GDD.  (Note: make sure the 1 hour isn't exceeded)
  6. GDD Diary - Keep a log of how much time you use search tools for development. (Tools like RescueTime.com may help)
  7. GDD Monitoring - After 21 days of intense anti-GDD focus & moderation, open up your calendar and schedule 1-2 days of GDD abstinence each month to measure your progress.   If abstinence from GDD still causes excessive anxiety, repeat these 7 steps.

Note: beware that it’s common to see GDD sufferers seek out alternatives, or shift their habits toward Twitter, StackOverflow, and other social networking sites.  These are just variants of GDD, each with its own set of problems, and thus should be avoided. [3]

Good luck!

 

[1] Okay, I’m not really saying never reuse code, but hey, try writing some stuff yourself first; at the very least you will better understand the problem rather than taking other developers’ word on how to solve it.

[2] There is obviously some truth in this post, but I really hope you don’t think I’m totally serious and off my rocker with all this stuff.  It’s just a fun way of pointing out something we all know already: we tend to go down rabbit-holes when we Google anything, and waste more time googling and digging than it might have taken to write the thing ourselves in the first place.

[3] This is also a belated partial rebuttal of Phil Haack’s 2007 post “Increase Productivity With Search Driven Development”, in which he argues the value of search for code/solution discovery.  I won't even mention the risks to intellectual property and job security if you are a commercial product developer…

T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 project.

I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template which was originally built for Visual Studio 2008.  

The Problem

Error Compiling transformation: Metadata file 'dotless.Core' could not be found

In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly.

In the case of the T4CSS Template, this was a showstopper for making it work in Visual Studio 2010.

On a side note:

The T4CSS template is a sweet little wrapper to allow you to use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool.    If you haven't tried DotLessCSS yet, go check it out now! 

In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS, which enables rapid changes and a lot of developer flexibility as you evolve your CSS and UI.

Back to our regularly scheduled program…

Anyhow, this post isn't about DotLessCss; it's about T4 templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010.

In VS2010, there were quite a few changes to the T4 Template Engine; most were excellent changes, but this one bit me with T4CSS:

“Project assemblies are no longer used to resolve template assembly directives.”

In VS2008, if you wanted to reference a custom assembly in your T4 template (.tt file), you would simply right-click on your project, choose Add Reference, and select that assembly.  Afterward, you could use the following syntax in your T4 template to tell it to look at the local references:

<#@ assembly name="dotless.Core.dll" #>

This told the engine to look in the “usual place” for the assembly, which is your project references.

However, this is exactly what they changed in VS2010.  They now basically sandbox the T4 Engine to keep your T4 assemblies separate from your project assemblies.  This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project.

Who broke the build?  Oh, Microsoft did!

In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010; it's a breaking change.  So, how do we make this work in VS 2010?

Luckily, Microsoft now offers several options for referencing assemblies from T4 Templates:

  1. GAC your assemblies and use Namespace Reference or Fully Qualified Type Name
  2. Use a hard-coded Fully Qualified UNC path
  3. Copy assembly to Visual Studio "Public Assemblies Folder" and use Namespace Reference or Fully Qualified Type Name. 
  4. Use or Define a Windows Environment Variable to build a Fully Qualified UNC path.
  5. Use a Visual Studio Macro to build a Fully Qualified UNC path.

Options #1 and #2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, you will have to adopt one of those approaches.

Yakkety Yak, use the GAC!

Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain.  But if you go that route, then after you GAC it, all you need is a simple type name or namespace reference such as:

<#@ assembly name="dotless.Core" #>

Hard coding ain't that hard!

The hard-coded paths of Option #2 are pretty impractical in most situations, since each developer would have to use the same local project folder paths, or modify this setting for their local machine as well as for production deployment.  However, if you want to go that route, simply use the following assembly directive style:

<#@ assembly name="C:\Code\Lib\dotless.Core.dll" #>

Let's go Public!

Option #3, the Visual Studio Public Assemblies Folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio.  Think of it like a VS-only GAC.  This is likely the best place for something like dotLessCSS and is my preferred solution.  However, you will need either an installer or a pre-build action to copy the assembly to the right folder location.   Normally this is located at:

C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies

Once you have copied your assembly there, you use the type name or namespace syntax again:

<#@ assembly name="dotless.Core" #>

Save the Environment!

Option #4, using a Windows environment variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo code, frameworks, and products where you don't have control over the local system.  The syntax for including an environment variable in your assembly directive looks like the following, just as you would expect:

<#@ assembly name="%mypath%\dotless.Core.dll" #>

Here, “mypath” is a Windows environment variable you set up that points to some fully qualified UNC path on your system.  In the right situation this can be a great solution, such as when you use an MSI installer for deployment, or when you have a pre-existing environment variable you can re-use.

OMG Macros!

Finally, Option #5 is a very nice option if you want to keep your T4 template’s assembly reference local and relative to the project or solution without muddying up your dev environment or GAC with extra deployments.  An example looks like this:

<#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>

In this example, I’m using the “SolutionDir” VS macro so I can reference an assembly in a “/lib” folder at the root of the solution.   This is just one of the many macros you can use.  If you are familiar with creating Pre/Post-build Event scripts, you can use its dialog to look at all of the different VS macros available.

This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build.   However, it's still not compatible with Visual Studio 2008, so if you have a T4 template you want to use with both, you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually.

I’m not sure if T4 Templates support any form of compiler switches like “#if (VS2010)”  statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008.
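
For completeness, here is roughly what a full .tt file using Option #5 looks like end-to-end.  This is a bare-bones sketch: the body just emits a CSS comment instead of actually calling into dotless, and the lib folder path is an assumption about your solution layout.

    <#@ template language="C#" #>
    <#@ output extension=".css" #>
    <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>
    <#@ import namespace="dotless.Core" #>
    /* generated on <#= System.DateTime.Now #> */
    <#
        // A real template would call into the referenced assembly here;
        // this stub only proves that the assembly directive resolves.
    #>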

Conclusion

As you can see, we went from 3 options with Visual Studio 2008, to 5 options (plus one problem) with Visual Studio 2010.  As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our new found power.

Hopefully this all made sense and was helpful to you.  If nothing else, I’ll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010. 

Happy T4 templating, and “May the fourth be with you!”

Minimum & Maximum Dates in code

When updating SQL columns that need a minimum or maximum date, consider using the defaults from the System.Data.SqlTypes namespace:

    DateTime minDate = SqlDateTime.MinValue.Value;

    // and

    DateTime maxDate = SqlDateTime.MaxValue.Value;

This can be a lot safer than putting hard-coded "magic date" constants in your code.
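
Building on that, here is a small helper you might write to clamp incoming dates into SQL Server's valid datetime range before persisting.  This is my own sketch, not part of the framework:

    using System;
    using System.Data.SqlTypes;

    public static class SqlDateHelper
    {
        // SQL Server's datetime type only accepts Jan 1, 1753 through
        // Dec 31, 9999; out-of-range values throw at insert time.
        public static DateTime ClampToSqlRange(DateTime value)
        {
            if (value < SqlDateTime.MinValue.Value)
                return SqlDateTime.MinValue.Value;
            if (value > SqlDateTime.MaxValue.Value)
                return SqlDateTime.MaxValue.Value;
            return value;
        }
    }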

What we don't know "will" hurt us...

I like this article by Nathan Henkel; it's essentially about assessing the risk and scope of projects, and it strikes me as a simple truth about the uncertainties you encounter in every project:

Information about any project can be divided into four categories:

1. Things we know (and know we know)
2. Things we know we don't know
3. Things we think we know, but don't (i.e. things we're wrong about)
4. Things we don't know we don't know

Obviously, if you were to try to actually figure out where everything falls, you would put everything into 1 or 2. Everything that should be in 3, you would put in 1 (you're not going to have known mistakes in your information), and everything that should be in 4 would simply be missing.


However, without dealing with specific items, I do think that it's possible to guess at how much "stuff" goes in each category. You can take into account your history ("I tend to often be mistaken about X"), or a general feeling of ignorance ("I've never used framework Y before") to guess how much goes in each category.

http://simplyagile.blogspot.com/2007/10/classifying-information-or-what-we-know.html

Sometimes, I think we get so wrapped up with what we “know” about a project that we fail to quantify what we don’t know, or the degree of certainty to which we actually know what we think we know.  As with solving any problem, the first step is to find a way to quantify and measure uncertainty and risk in order to minimize it. 

If you track this measurement over time, it should also help your estimation and planning on future projects.

Good stuff!

Argotic Syndication Framework 2008 released

I got an email yesterday announcing that a major update to the Argotic Syndication Framework has been released.   I have used older versions of this framework several times for projects that needed basic RSS & Atom parsing/generation, so I'm looking forward to digging in to the new release.

If you are not familiar with it, here is a quick blurb:

The Argotic Syndication Framework is a Microsoft .NET class library framework that enables developers to easily consume and/or generate syndicated content from within their own applications. The framework makes the reading and writing syndicated content in common formats such as RSS, Atom, OPML, APML, BlogML, and RSD very easy while still remaining extensible enough to support common/custom extensions to the syndication publishing formats. The framework includes out-of-the-box implementations of 19 of the most commonly used syndication extensions, network clients for sending and receiving peer-to-peer notification protocol messages; as well as HTTP handlers and controls that provide rich syndication functionality to ASP.NET developers.

To learn more about the capabilities of this powerful and extensible .NET web content syndication framework and download the latest release, visit the project web site at http://www.codeplex.com/argotic.

Also, here are some of the new features in this release:

a) Targeting of both the .NET 2.0 and .NET 3.5 platforms

b) Implementation of the APML 0.6 specification

c) Implementation of the BlogML 2.0 specification

d) Native support of the Microsoft FeedSync 1.0 syndication extension

e) Simplified programming API and better online/offline examples
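
Speaking of the API, consuming a feed in the versions I used looked roughly like this.  I'm writing this from memory, so treat the exact type and method names as approximate and verify them against the new release's documentation; the feed URL is just a placeholder:

    using System;
    using Argotic.Syndication;

    class FeedDemo
    {
        static void Main()
        {
            // Rough sketch from memory of the older Argotic API.
            RssFeed feed = RssFeed.Create(new Uri("http://example.com/feed.rss"));
            foreach (RssItem item in feed.Channel.Items)
            {
                Console.WriteLine(item.Title);
            }
        }
    }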

Brian has done an amazing job on this project from the start.  I had intended (and still hope) to jump in and contribute some of my own work, so it's great to see how far it has evolved from its first releases.

If you work with RSS, Atom, or any other syndication format/protocol, you should definitely take a look at this framework for your next project.

I love ClearContext!!

After several months of using the free version of the ClearContext addon for Microsoft Outlook, I just can't imagine what I would do without it.  It has reduced my email time, kept me more organized, and uncluttered my Inbox better & faster than any ad-hoc system I have devised in the past.

As a developer, I hate it when I have to "code in Outlook".  If it were up to me, I would ban all email during a project and deal with all communication via instant messaging, Scrum meetings, and whiteboards, but the truth is that email is a necessary evil, especially as a Tech Lead who needs to interface with the Project Manager, Customer, and IT personnel.

Enter ClearContext Information Management System...

First, I set it up to flag emails from my bosses in red so I don't miss them.  Plus, for good measure, I have an Outlook rule that sets a FollowUp flag to make sure I don't overlook them.  Also, ClearContext automagically ranks emails based upon my prior history with the sender, so I know what to do when I get some nice blue- and green-colored mail too.

If I receive an email relating to my current project, I simply hit ALT-P to pop up the CC dialog and flag it with the topic "projects/MyProject", then either leave it in the inbox for further review or hit ALT-M to file the message for future reference.    Likewise, if I receive some corporate or administrative email, I assign its topic appropriately and file the message to send it to its respective holding area.

Assigning a Topic (ALT-P) automatically creates subfolders within my Inbox (e.g. inbox/projects/MyProject) matching the topic name (note the trick of adding a "/" to the topic name to create a nested subfolder at the same time).  Filing a message (ALT-M) moves it to the subfolder identified by the topic name.  This is great because the messages are no longer visible in the Inbox listing, but are still within the Inbox via the subfolder.

At that point, my AutoArchive settings will take care of moving it off on a monthly basis in case I need it later.

At some point, I want to look at the full product, which has features for deferring emails, converting them to tasks & appointments, assigning them to other people, etc.   See their site for more on these areas.

If these features are nearly as useful as the ones I use now, then I could *gasp* become even more productive!  woot!

Manual CRUD operations with the Telerik RadGrid control

I have been working on a project lately that was already using the Telerik ASP.NET RadControls suite.  One of the new features was a fully editable web grid, so I chose the existing ajax-enabled RadGrid control to speed my development, mostly due to time constraints: the project required a grid with inline editing, full CRUD operations, plus custom column templates, all with heavy Ajax support to avoid postbacks and excessive page size.

I soon discovered that the Telerik controls are a nice tool for simple uses, where you can rely on ASP.NET DataSource controls and automatic databinding, but not so much if you need to get "fancy" with your implementation.  In my case I needed to do two things that fall outside the sweet spot where these controls excel.

First, I'm using an early 2.0 version of NetTiers for the DAL (with a Service Layer implementation) with custom mods to the entities as the datasource; and second, I'm doing some aggregate custom ItemTemplates that require custom data-binding.

This led to extreme complexity in the implementation because, A) this version of NetTiers had problems properly generating CRUD operations for its EntityDataSource controls (NetTiers entities mapped onto a custom ObjectDataSource-style control), which prevented me from using the declarative model, and B) the RadGrid control simply sucks if you cannot use automatic databinding and require custom databinding logic.

It would be great if I could upgrade NetTiers and/or the Telerik RadControls to the latest versions, but that wasn't possible in this situation, nor is it likely it would have solved my problems.

Anyhow, all this discussion is basically just to share with you this one link to a user-contributed example I found incredibly useful after 3 days of searching their forums, demos, and 3rd-party blogs.   This example shows how to manually implement Insert/Update/Delete functionality within the RadGrid control by handling the OnNeedDataSource, OnItemCommand, OnInsertCommand, OnUpdateCommand, and OnDeleteCommand events:

http://www.telerik.com/community/code-library/subm...

The reason this link is important is that the Telerik website, with all of its dozens of examples, consistently shows very basic scenarios, even in samples labeled "advanced".  Also, not all of the API features are documented fully or well enough to help you figure this out on your own.
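
For reference, the general shape of the manual approach looks something like the sketch below.  This is my own abbreviated reconstruction, not the linked example verbatim: ProductService is a hypothetical stand-in for the NetTiers service layer, it assumes DataKeyNames="ProductId" is set on the MasterTableView, and the exact Telerik namespaces vary by suite version.

    // In the page code-behind (System.Collections and the Telerik grid
    // namespace imported). Insert/Delete follow the same shape as Update.
    protected void RadGrid1_NeedDataSource(object sender, GridNeedDataSourceEventArgs e)
    {
        // Always rebind from the source of truth; no declarative DataSourceID.
        RadGrid1.DataSource = ProductService.GetAll();
    }

    protected void RadGrid1_UpdateCommand(object source, GridCommandEventArgs e)
    {
        GridEditableItem editedItem = (GridEditableItem)e.Item;
        int productId = (int)editedItem.OwnerTableView
            .DataKeyValues[editedItem.ItemIndex]["ProductId"];

        Hashtable newValues = new Hashtable();
        editedItem.ExtractValues(newValues); // copies edit-form values by column

        ProductService.Update(productId, newValues); // hypothetical service call
        RadGrid1.Rebind();
    }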

Hopefully this simple link (which should be promoted to Telerik's demos/samples page) will help someone else as much as it did me.

VPC 2007 Dual Monitor support

I have been trying to find a way to run Virtual PC 2007 with multiple monitors.  Natively, VPC 2007 doesn't support more than one monitor; however, you can "trick" it by using various techniques that expand the desktop area into a larger virtual desktop.

I first tried an awesome tool that can extend your screen across separate PCs (think "push" remote desktop), but the new multi-monitor compatibility feature of VPC 2007 (which inexplicably does not add multi-monitor support) made this difficult, since it ensures that your desktop recaptures your mouse when you move it outside of the VPC window, thus preventing the extended screen from being accessible.

So, instead I tried the Remote Desktop approach mentioned in Steven Harman's blog post.  

Here is a quick rundown on how it works:

Connect two monitors to your PC (more than two typically won't work with this approach).   Make sure to extend your desktop onto the 2nd screen via Display Properties -> Settings.  Then launch Remote Desktop (mstsc.exe) with the "/span" flag:

mstsc /span

Then just use Remote Desktop as usual by specifying your VPC's computer name in the connection dialog.

When I first tried this, it still didn't work exactly right.  It kept giving me annoying scrollbars instead of going full screen, so I added an extra flag to force it into fullscreen:

mstsc /span /f

Also, since I didn't want VPC to have the extra overhead of maintaining two sessions (the console and my new RDP session), I threw in one more flag to make it simply take over the initial console window:

mstsc /span /f /console

NOTE: The /span flag is only present on the very latest version of Remote Desktop Connection.  Therefore, you must either be running Vista on both PC's, or install the update specified here:

http://support.microsoft.com/kb/925876

There are limitations on how your monitors must be configured in order for this flag to work.

Also, keep in mind that this technique only enlarges your desktop area enough to span both monitors; it DOES NOT behave exactly like the native dual-monitor support you may be accustomed to.  For example, when you maximize a window, it maximizes across BOTH monitors instead of within the confines of a single monitor.   For now, I'm dealing with that by avoiding maximizing and just manually resizing windows to fit one screen.

 

Advanced Users:

One way to avoid having to arrange windows each time is to use a cryptic yet incredible tool called Hawkeye ShellInit. ShellInit is a small application that helps you manipulate your desktop & application windows via script.   Here is a small script that will move Visual Studio over to the right-hand screen (assuming 1280x1024 resolution) and enlarge it to the correct size:

Position Window, *Microsoft Visual Studio, wndclass_desked_gsk, 1280, 0, 1288, 1002

If you decide to use this tool, make sure and read the readme.txt file for some good sample scripts and ideas.

Reporting Services administration changes in Katmai (v.Next)

There is some information out about changes being considered to how you will administer Sql Server Reporting Services in the next version, codenamed Katmai.

Right now, administering the Report Models exposed to Report Builder requires you to launch the Sql Server Management Studio tool, while other features require you to launch the Report Manager website.   Also, there are some features that you rarely use, yet are exposed from the Report Manager portal, such as Job Management and system-wide Role & Security configuration.

It appears that the end result of the proposed tool changes will be to correct these inconsistencies by consolidating server and system-wide configuration and administration tasks into Sql Server Management Studio, and moving some of the more user-facing admin features to the Report Manager.

Not a bad idea overall; now I just hope they fix support for FormsAuth throughout the entire solution (ReportBuilder, nudge nudge).
