Steve Wellens

Programming in the .Net environment


Secret Covert Hush-Hush White Space in the DOM…EXPOSED!

Lurking in the shadows of the Stack Overflow website, selling aerosol cans of Bug-No-More to the rubes, I came across a question I thought I could answer. The OP (original poster) was using JavaScript to select and modify HTML elements without success.

The HTML was simple:

<ul>
    <li><a href="#" id="prev">Prev</a></li>
    <li><a href="#" id="middle">Middle</a></li>
    <li><a href="#" id="next">Next</a></li>
</ul>

First he created a simple function to make sure the basics were working (a very good practice). It colored the middle li element and it worked fine:

function middle()
{
    document.getElementById("middle").parentNode.style.backgroundColor = "yellow";
}
  • get the middle element using its id (an a tag)
  • move up one element via the parentNode (to the li)
  • color the li

 

 Next, he created a function to color the siblings of the middle element.

  • get the middle element using its id (an a tag)
  • move up one element via the parentNode (to the li)
  • get the previous/next sibling
  • color the li 
function prevNextSibling()
{
    document.getElementById("middle").parentNode.previousSibling.style.backgroundColor = "pink";
 
    document.getElementById("middle").parentNode.nextSibling.style.backgroundColor = "cyan";
}

It didn't work.

When I debugged the code, I saw that nextSibling was returning an object that did not have a style property. What the heck was up with that?

In Firefox the debugging window looked like this:

In Chrome the debugging window looked like this:

 

In Internet Explorer the debugging window looked like this:

 

Hey, what the heck are those "Text = Empty Text Nodes" in the Internet Explorer window? What they are, my curious friend, is the cause of the problem: nextSibling and previousSibling are returning Empty Text Nodes, which do not have a style property.

To be perfectly clear:

  • ALL three browsers have Empty Text Nodes.
  • ONLY Internet Explorer displays them in its debugger.

Internet Explorer is the best browser in this situation.

So what are these Empty Text Nodes?

http://www.w3.org/DOM/faq.html#emptytext

 And what is the fix?

If the siblings you want are elements, as they are in the original problem, use:

previousElementSibling

nextElementSibling

Otherwise, move two siblings to skip over the Empty Text Node sibling (note: this assumes exactly one text node sits between the elements):

previousSibling.previousSibling

nextSibling.nextSibling
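If you need to support browsers that predate the Element-sibling properties, you can also skip the text nodes by hand. Here is a minimal sketch; the mock objects below are mine, standing in for real DOM nodes, so the nodeType logic is runnable anywhere:

```javascript
// Element nodes have nodeType 1; the whitespace between <li> tags
// shows up as text nodes with nodeType 3.
function nextElementSibling(node) {
    var sib = node.nextSibling;
    while (sib && sib.nodeType !== 1) {
        sib = sib.nextSibling; // skip text/comment nodes
    }
    return sib;
}

// Mock of the <ul> from the post: li, whitespace text node, li
var li2 = { nodeType: 1, id: "middle-li", nextSibling: null };
var ws  = { nodeType: 3, nextSibling: li2 };
var li1 = { nodeType: 1, id: "prev-li", nextSibling: ws };

console.log(nextElementSibling(li1).id); // → "middle-li"
```

The same loop, run in reverse over previousSibling, handles the previous-element case.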

 Either way, it works:

 

Oooh, pretty.

So, in addition to learning about white space nodes in the DOM, we learned that having multiple browsers installed can be a good thing.

I hope someone finds this useful.

Steve Wellens

Code Builder for jQuery AJAX (Calling Web Services)

Getting stuck in the cycle of code-build-test, code-build-test… can drain you physically, mentally, emotionally and spiritually. In the back of your mind you know the clock is ticking and you are not making headway. Of course the boss keeps sticking his sweaty head into your tiny cube and asking, "Any progress?" What a jerk. The intern who is supposed to be helping you keeps asking stupid questions he could find on Google: the lazy meathead. The department secretary just told you they lost your time sheet so you'll have to fill out another one: Too bad you didn't make a copy. Your waist is getting thicker and your hair is getting thinner.

Well I can't help you with any of that.

Except, maybe for the first problem; if you are using jQuery's AJAX function, perhaps I can offer relief from the dreaded code-build-test cycle.

To help figure out the myriad of options when using the jQuery AJAX function to access Web Services, I created a web page with the most useful (as far as I know) options in a table with check boxes, values and descriptions. It builds the jQuery AJAX code automatically and you can execute it directly from the web page. We can break the cycle of code-build-test.

Here is a sample of the generated code:

Here is part of the Settings table:

Here are three divs I connected to three events to display the results:

So, as you enable options and change their values, the code is updated. You click the Execute button and wait for the server to respond.

I managed to get the code (JavaScript, CSS and HTML) into one file. Only the jQuery code is external. So if you are developing calls into a local project, you could download (steal/borrow) the page and put it into your own directory for testing.

Notes:

The CSS3 property resize:both did not work correctly in all the browsers I tested (OK, it was IE) so I used the jQuery UI function resizable.

I thought about putting up an hourglass cursor while waiting for the calls to finish or time out. But the 'A' in AJAX stands for Asynchronous: the web page is not blocked waiting for the server, so an hourglass cursor would be inappropriate.

I put Save and Restore buttons in so if you get something that works, you can save it. Then feel free to experiment knowing you can always go back to a known good set of options. Typically most people will use this to find the minimum set of options. To remember the settings, I use localStorage, an HTML 5 feature.
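The save/restore mechanics are only a few lines. Here is a sketch of the idea (the function and key names are made up for illustration, not taken from the page); it falls back to an in-memory store so it can run outside a browser:

```javascript
// Use localStorage when available (the browser case); otherwise fall
// back to a plain object so the sketch runs anywhere.
var store = (typeof localStorage !== "undefined") ? localStorage : (function () {
    var mem = {};
    return {
        setItem: function (k, v) { mem[k] = String(v); },
        getItem: function (k) { return (k in mem) ? mem[k] : null; }
    };
})();

function saveSettings(settings) {
    // localStorage only stores strings, so serialize to JSON
    store.setItem("ajaxBuilderSettings", JSON.stringify(settings));
}

function restoreSettings() {
    var json = store.getItem("ajaxBuilderSettings");
    return json ? JSON.parse(json) : null;
}

saveSettings({ type: "POST", timeout: 5000 });
console.log(restoreSettings().timeout); // → 5000
```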

Here are a few free public web services I tested the page with:

http://query.yahooapis.com/v1/public/yql?q=select%20item%20from%20weather.forecast%20where%20location%3D%2248907%22&format=json

 http://api.geonames.org/postalCodeLookupJSON?postalcode=55427&country=US&username=demo

I linked the jQuery libraries locally. In theory, CDNs are a great idea. In reality, they can paralyze sites. The next time you see a web page that is stuck loading, look at the status bar. Chances are it's loading from an external site.

I'm not an expert at jQuery AJAX by any means. If you see something I missed, or something useful that could be added to the page, please contact me.

And, before I forget, a link to the web page: jQuery AJAX Code Builder

This was a fun project. I hope someone finds it useful.

Steve Wellens

Calling WCF Services with jQuery…Stripped Down

Years ago, when I created my first ASMX Web Service and called it from an application, it was easy and it was fun. I was able to call functions on a server, from a client, with little effort. What a great and powerful tool!

Then WCF came out and was deemed the 'best practice' and the 'preferred' method. Developers were encouraged to use it. Sadly, it was a nightmare of complexity. Rube Goldberg would have been proud.

I recently began investigating calling WCF Services with jQuery. There aren't many articles or working samples that are simple enough to be a good starting point for development or proof of concept. I gathered what I learned and decided to cut out as much superfluous detail from the paradigm as possible: I tried to make it palatable.

This is the smallest working example of jQuery calling a WCF Service anywhere (as far as I know).

Caveats:

WCF services are fragile. Use a good source control system or some substitute. If you make a change and everything is still working, check it in and label it. That way, when you break your code (and you will) and can't figure out what happened, you can revert to a known good version.

Normally the cache is your friend. When developing, the cache is your enemy. When your code changes don't seem to be having any effect, clear the cache and/or restart the web server. Sometimes I put message boxes or log strings in the code to see if it is actually being executed.

We Don't Need No Stinking Interface:

I've got nothing against interfaces; I even blogged about them here. But I'm going to guess that less than one percent of web services require the use of an interface—maybe less than one in a thousand—maybe less than that.

The default template for WCF services uses an interface for a contract—an inexplicably stupid decision.

Here is how I removed the unneeded complexity.

In the interface file, the generated class looks something like this:

[ServiceContract]
public interface IService1
{
    [OperationContract]
    string GetData(int value);
}
 
  • I moved the contract attributes into the implementation class.
  • I added a WebInvoke attribute to the functions.
  • I removed the interface inheritance from the service implementation class.
  • I deleted (yes deleted!) the interface file.
  • In the config file, I changed the contract to be the implementation class, NOT the interface class.

Then I deleted all but one function in the implementation class…to get it really lean and mean.

Here is the final WCF Service CS code:

using System;
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.ServiceModel.Web;
 
[ServiceContract]
public class MyService
{
    [OperationContract]
    [WebInvoke(Method = "POST",
               BodyStyle = WebMessageBodyStyle.Wrapped,
               ResponseFormat = WebMessageFormat.Json)]
    public string MyFunction(string Count)
    {
        return "The Count you passed in is: " + Count;
    }
}

Here is the HTML/JavaScript/jQuery code file.

<!DOCTYPE html>
<html>
<head>
    <title>Call WCF</title>
    <script src="Scripts/jquery.js" type="text/javascript"></script>
    <script type="text/javascript">
 
        var counter = 0;
        function CallMyService()
        {
            counter++;
 
            $.ajax({
                type: "POST",              
                url: "MyService.svc/MyFunction",
                data: '{"Count": "' + counter + '"}', 
                contentType: "application/json", // content type sent to server
                success: ServiceSucceeded,
                error: ServiceFailed
            });
        }
 
        // ---- WCF Service call backs -------------------
 
        function ServiceFailed(result)
        {
            Log('Service call failed: ' + result.status + '  ' + result.statusText);
        }
 
        function ServiceSucceeded(result)
        {
            // with BodyStyle = WebMessageBodyStyle.Wrapped, the return
            // value arrives wrapped in a property named <FunctionName>Result
            var resultObject = result.MyFunctionResult;
            Log("Success: " + resultObject);
        }
 
        // ---- Log ----------------------------------------
        // utility function to output messages
 
        function Log(msg)
        {
            $("#logdiv").append(msg + "<br />");
        }
    </script>
 
</head>
<body>
    <input id="Button1" type="button" value="Execute" onclick="CallMyService();" />
 
    <div id="logdiv"></div>  <!--For messages-->
</body>
</html>
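A small aside on the data line in the page above: building the JSON payload by string concatenation works for simple values, but JSON.stringify (built into every modern browser) does the quoting and escaping for you. A quick sketch:

```javascript
var counter = 7;

// Hand-building JSON invites quoting bugs once values contain
// quotes or backslashes; JSON.stringify handles escaping for you.
var byHand = '{"Count": "' + counter + '"}';
var safe = JSON.stringify({ Count: String(counter) });

console.log(safe); // → {"Count":"7"}
```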

 

Note I am using a simple HTML page to call the service. It's my way of showing, "There's nothing up my sleeve." You can download the full project here. I recompiled it to .NET 4.5.

  • There is no project or solution file, open it as a Web Site.
  • Run/Browse the CallWCF.html file.
  • Click the Execute button.

You should see this:

I hope someone finds this useful.

Steve Wellens

Posted: Feb 03 2013, 10:50 PM by SGWellens | with 4 comment(s)
CSS3 box-shadow and Visual Studio Features

A few years ago, while creating an ASP.NET web site, I decided to add a gradient border to "sex up" the look of the site. Using a sophisticated image editing program, I created a small gradient image. I made sure the ending color of the gradient matched the color of the body in the target page. Here it is:

 

I added CSS to display it like this:

background-image: url(BlendLeft.ico);
background-repeat: repeat-y;

 The CSS replicated the image up and down the left side of the browser window:

 

Pretty cool, eh? I decided it would be even cooler to have the gradient on all four sides of the window. My goal was to make it look like a soft picture frame.

I spent hours and hours trying to create and use the eight images required.

  1. Top Left Corner
  2. Top
  3. Top Right Corner
  4. Right
  5. Bottom Right Corner
  6. Bottom
  7. Bottom Left Corner
  8. Left

I planned and created, sliced and diced, snipped and stitched, hundreds of different gradient images trying to keep the pixel dimensions and color transitions straight. I created several versions of web pages that displayed the images. I tried doing it programmatically where you could enter the start and end colors and the width of the border — the program generated and saved the eight pre-named images. However, I had no control over the gradient algorithms. I never could get smooth transitions between the images. I started to investigate writing my own gradient algorithms but ran out of time. Alas, I failed like a kite in a hurricane, like a fat kid in gym class, like a poodle in a shark tank.

Then CSS3 came out.

Two of the interesting new features in CSS3 are text-shadow and box-shadow. Shadows are traditionally used to give a three-dimensional effect to visual elements. While studying and playing with these new features, I was surprised to find the shadows could be placed anywhere and the box-shadow property had an inset keyword….hmmm.

I used this on an html body element:

    box-shadow: 0 0 50px green inset;

…and got this:

 

Yippee ki-yay!!! This is EXACTLY what I spent hours trying to do…in one line of code.

Here is the full CSS for the body tag:

    body
    {
        width: 100%;
        height: 100%;
        padding: 0;
        margin: 0;
        position: absolute;
        box-shadow: 0 0 50px green inset;
    }
 

My holy grail has been found.

By the way, you can apply the box-shadow to other elements...like this div:

 

While playing and experimenting with the new CSS3 features, I came across a few helpful Visual Studio features you may not know about.

CSS Color Picker

In the CSS editor, when a color is expected, you are given a list of predefined colors to pick from:

 

The result looks like this:

    color: red; 

That's nice, I like seeing the word red instead of #ff0000.

If you type '#' to enter a color, you get the following pop-up window:

 

That's nice, you can click a color and it will supply the hexadecimal value:

    color:#f00; 

That's nice too. If you click the double down arrow on the right side of the pop-up and you get this:

 

You can select any color and it will insert the hex values.

And, you can use the eye dropper to select any color from any window showing on your PC.

And, try moving the Opacity slider. Then you get values that include the alpha component like this:

    color:rgba(255, 216, 0, 0.74); 

It's useful AND fun.

Multiple Browser Launch

First, if your project requirements include targeting multiple browsers, my sincere condolences. It is tedious and painful work.

But you do get some help from Visual Studio. In the following pull-down select "Browse With…"

 

Unsurprisingly, you get the "Browse With" window:

 

As you can see, you can select multiple browsers. Hold down the Control Key and click the browsers you want.

Then click the 'Set as Default' button.

You can hit Cancel to exit the window, the settings are remembered.

 

Now the selection says "Multiple Browsers" and when you start the web site, all the browsers you selected will be launched. It really speeds things up when you are in tweak-check-tweak-check-tweak-check mode. I wish there was a setting to automatically save all changes before launching the browsers.

I hope someone finds this useful.

Steve Wellens

Posted: Jan 16 2013, 03:30 PM by SGWellens | with no comments
Google and Bing Map APIs Compared

At one of the local golf courses I frequent, there is an open grass field next to the course. It is about eight acres in size and mowed regularly. It is permissible to hit golf balls there—you bring and shag your own balls. My golf colleagues and I spend hours there practicing, chatting and in general just wasting time.

One of the guys brings Ginger, the amazing, incredible, wonder dog.

Ginger is a Hungarian Vizsla (or Hungarian Pointer). She chases squirrels, begs for snacks and supervises us closely to make sure we don't misbehave.

Anyway, I decided to make a dedicated web page to measure distances on the field in yards using online mapping services. I started with Google maps and then did the same application with Bing maps. It is a good way to become familiar with the APIs.

Here are images of the final two maps:

Google:

 Bing:

 

To start with online mapping services, you need to visit the respective websites and get a developer key.

I pared the code down to the minimum to make it easier to compare the APIs. Google maps required this CSS (or it wouldn't work):

<style type="text/css">
    html 
    {
        height: 100%;
    }
 
    body 
    {
        height: 100%;
        margin: 0;
        padding: 0;
    }
</style>

Here is how the map scripts are included. Google requires the developer Key when loading the JavaScript, Bing requires it when the map object is created:

Google:

<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=XXXXXXX&libraries=geometry&sensor=false" > </script>

Bing:

<script  type="text/javascript" src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"> </script>

Note: I use jQuery to manipulate the DOM elements which may be overkill, but I may add more stuff to this application and I didn't want to have to add it later. Plus, I really like jQuery.

Here is how the maps are created:

Common Code (the same for both Google and Bing Maps):

    <script type="text/javascript">
        var gTheMap;
        var gMarker1;
        var gMarker2;
 
        $(document).ready(DocLoaded);
 
        function DocLoaded()
        {
            // golf course coordinates
            var StartLat = 44.924254;
            var StartLng = -93.366859;
 
            // what element to display the map in
            var mapdiv = $("#map_div")[0];
 

Google:

        // where on earth the map should display
        var StartPoint = new google.maps.LatLng(StartLat, StartLng);
 
        // create the map
        gTheMap = new google.maps.Map(mapdiv, 
            {
                center: StartPoint,
                zoom: 18,
                mapTypeId: google.maps.MapTypeId.SATELLITE
            });
 
        // place two markers
        marker1 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng + .0001));
        marker2 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng - .0001));
 
        DragEnd(null);
    }

Bing:

        // where on earth the map should display
        var StartPoint = new  Microsoft.Maps.Location(StartLat, StartLng);
 
        // create the map
        gTheMap = new Microsoft.Maps.Map(mapdiv,
            {
                credentials: 'XXXXXXXXXXXXXXXXXXX',
                center: StartPoint,
                zoom: 18,
                mapTypeId: Microsoft.Maps.MapTypeId.aerial
            });
 
        // place two markers
        marker1 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng + .0001));
        marker2 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng - .0001));
 
        DragEnd(null);
    }

Note: In the Bing documentation, mapTypeId: was missing from the list of options even though the sample code included it.

Note: When creating the Bing map, use the developer Key for the credentials property.

I immediately place two markers/pins on the map, which is simpler than creating them on the fly with mouse clicks (as I first tried). The markers/pins are draggable, and I capture the DragEnd event to calculate and display the distance in yards and draw a line when the user finishes dragging.

Here is the code to place a marker:

Google:

// ---- PlaceMarker ------------------------------------
 
function PlaceMarker(location)
{
    var marker = new google.maps.Marker(
        {
            position: location,
            map: gTheMap,
            draggable: true
        });
    marker.addListener('dragend', DragEnd);
    return marker;
}

Bing:

// ---- PlaceMarker ------------------------------------
 
function PlaceMarker(location)
{
    var marker = new Microsoft.Maps.Pushpin(location,
    {
        draggable : true
    });
    Microsoft.Maps.Events.addHandler(marker, 'dragend', DragEnd);
    gTheMap.entities.push(marker);
    return marker;
}

Here is the code that runs when the user stops dragging a marker:

Google:

// ---- DragEnd -------------------------------------------
 
var gLine = null;
 
function DragEnd(Event)
{
    var meters = google.maps.geometry.spherical.computeDistanceBetween(marker1.position, marker2.position);
    var yards = meters * 1.0936133;
    $("#message").text(yards.toFixed(1) + ' yards');
 
    // draw a line connecting the points
    var Endpoints = [marker1.position, marker2.position];
 
    if (gLine == null)
    {
        gLine = new google.maps.Polyline({
            path: Endpoints,
            strokeColor: "#FFFF00",
            strokeOpacity: 1.0,
            strokeWeight: 2,
            map: gTheMap
        });
    }
    else
       gLine.setPath(Endpoints);
}

Bing:

// ---- DragEnd -------------------------------------------
 
var gLine = null;
 
function DragEnd(Args)
{
   var Distance =  CalculateDistance(marker1._location, marker2._location);
 
   $("#message").text(Distance.toFixed(1) + ' yards');
 
    // draw a line connecting the points
   var Endpoints = [marker1._location, marker2._location];      
 
   if (gLine == null)
   {
       gLine = new Microsoft.Maps.Polyline(Endpoints, 
           {
               strokeColor: new Microsoft.Maps.Color(0xFF, 0xFF, 0xFF, 0),  // aRGB
               strokeThickness : 2
           });
 
       gTheMap.entities.push(gLine);
   }
   else
       gLine.setLocations(Endpoints);
 }
 

Note: I couldn't find a function to calculate the distance between points in the Bing API, so I wrote my own (CalculateDistance). If you want to see the source for it, you can pick it off the web page.
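For reference, a haversine-style implementation is the usual approach for this kind of point-to-point distance. The sketch below is an illustration of that technique, not necessarily the exact code on the page:

```javascript
// Great-circle (haversine) distance between two points given as
// { latitude, longitude } in degrees; returns yards.
function calculateDistanceYards(loc1, loc2) {
    var R = 6371000; // mean earth radius in meters
    var toRad = function (deg) { return deg * Math.PI / 180; };

    var dLat = toRad(loc2.latitude - loc1.latitude);
    var dLng = toRad(loc2.longitude - loc1.longitude);

    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(toRad(loc1.latitude)) * Math.cos(toRad(loc2.latitude)) *
            Math.sin(dLng / 2) * Math.sin(dLng / 2);

    var meters = 2 * R * Math.asin(Math.sqrt(a));
    return meters * 1.0936133; // meters to yards
}

// Roughly 0.0002 degrees of longitude at the golf course's latitude
console.log(calculateDistanceYards(
    { latitude: 44.924254, longitude: -93.366859 },
    { latitude: 44.924254, longitude: -93.366659 }).toFixed(1) + " yards");
```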

Note: I was able to verify the accuracy of the measurements by using the golf hole next to the field. I put a pin/marker on the center of the green, and then by zooming in, I was able to see the 150-yard markers on the fairway and put the other pin/marker on one of them.

Final Notes:

All in all, the APIs are very similar. Both made it easy to accomplish a lot with a minimum amount of code.

In one aerial view, there are leaves on the tree, in the other, the trees are bare. I don't know which service has the newer data.

Here are links to working pages:

Bing Map Demo

Google Map Demo

I hope someone finds this useful.

Steve Wellens

 

Favorite Programmer Quotes…

 

"A computer once beat me at chess, but it was no match for me at kick boxing." — Emo Philips

 

"There are only 10 types of people in the world: those who understand binary and those who don't." — Unknown

 

"Premature optimization is the root of all evil." — Donald Knuth

 

"I should have become a doctor; then I could bury my mistakes." — Unknown

 

"Code softly and carry a large backup thumb drive." — Me

 

"Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live." — Martin Golding

 

"DDE…the protocol from hell"— Charles Petzold

 

"Just because a thing is new don't mean that it's better" — Will Rogers

 

"The mark of a mature programmer is willingness to throw out code you spent time on when you realize it's pointless." — Bram Cohen

 

"A good programmer is someone who looks both ways before crossing a one-way street." — Doug Linder

 

"The early bird may get the worm but it's the second mouse that gets the cheese." — Unknown

 

I hope someone finds this amusing.

Steve Wellens

The Low Down Dirty Azure Blues

Remember the SETI screen savers that used to be on everyone's computer? As far as I know, it was the first bona-fide use of "Cloud" computing…albeit an ad hoc cloud. I still think it was a brilliant leveraging of computing power.

My interest in clouds was re-piqued when I went to a technical seminar at the local .Net User Group. The speaker was Mike Benkovitch and he expounded magnificently on the virtues of the Azure platform. Mike always does a good job. One killer reason he gave for cloud computing is instant scalability. Not applicable for most applications, but it is there if needed.

I have a bunch of files stored on Microsoft's SkyDrive platform, which is cloud storage. It is painfully slow. Accessing a file means going through layers and layers of software, redirections and security. Am I complaining? Hell no! It's free!

So my opinions of Cloud Computing are both skeptical and appreciative.

What intrigued me at the seminar, in addition to its other features, is that Azure can serve as a web hosting platform. I have a client with an Asp.Net web site I developed who is not happy with the performance of their current hosting service. I checked the cost of Azure and since the site has low bandwidth/space requirements the cost would be competitive with the existing host provider: Azure Pricing Calculator.

And, Azure has a three month free trial. Perfect! I could try moving the website and see how it works for free.

I went through the signup process. Everything was proceeding fine until I went to the MS SQL database management screen. A popup window informed me that I needed to install Silverlight on my machine.

Silverlight? No thanks. Buh-Bye.

I half-heartedly found the Azure support button and logged a ticket telling them I didn't want Silverlight on my machine. Within 4 to 6 hours (and a myriad (5) of automated support emails) they sent me a link to a database management page that did not require Silverlight.

Thanks!

I was able to create a database immediately. One really nice feature was that after creating the database, I was given a list of connection strings. I went to the current host provider, made a backup of the database and saved it to my machine. I attached to the remote database using SQL Server Management Studio 2012 and looked for the Restore menu item. It was missing. So I tried using the SQL command:

RESTORE DATABASE MyDatabase

FROM DISK ='C:\temp\MyBackup.bak'

Msg 40510, Level 16, State 1, Line 1

Statement 'RESTORE DATABASE' is not supported in this version of SQL Server.

Are you kidding me? Why on earth…? This can't be happening!

I opened both the source database and destination database in SQL Management Studio. I right clicked the source database, selected "Tasks" and noticed a menu selection called "Deploy Database to SQL Azure"

Are you kidding me? Could it be? Oh yes, it be!

There was a small problem because the database already existed on the Azure machine, I deployed to a new name, deleted the existing database and renamed the deployed database to what I needed. It was ridiculously easy.

Being able to attach SQL Management Studio to remote databases is an awesome but scary feature. You can limit the IP addresses that can access the database which enhances security but when you give people, any people, me included, that much power, one errant mouse click could bring a live system down. My Advice: Code softly and carry a large backup thumb-drive.

Then I created a web site; the URL it returned looked something like this: http://MyWebSite.azurewebsites.net/

Azure supports FTP, but I couldn't figure out the settings until I downloaded the publishing profile. It was an XML file that contained the needed information.

I still couldn't connect with my FTP client (FileZilla). After about an hour of messing around, I deleted the port number from the FileZilla setup page….and voila, I was in like Flynn.

 

There are other options of deploying directly from Visual Studio, TFS, etc. but I do not like integrated tools that do things without my asking: It's usually hard to figure out what they did and how to undo it.

I uploaded the aspx, cs, web.config, etc. files.

But it didn't run. The site I ported was in .NET 3.5. The Azure website configuration page gave me a choice between .NET 2.0 and 4.0. So, I switched to Visual Studio 2010, chose .NET 4.0 and upgraded the site. Of course I have the original version completely backed up and stored in a granite cave beneath the Nevada desert. And I have a backup CD under my pillow.

The site uses ReportViewer to generate PDF documents. Of course it was the wrong version. I removed the old references to version 9 and added new references to version 10 (*see note below). Since the DLLs were not on the Azure Server, I uploaded them to the bin directory, crossed my fingers, burned some incense and gave it a try. After some fiddling around it ran. I don't know if I did anything particular to make it work or it just needed time to sort things out.

However, one critical feature didn't work: ReportViewer could not programmatically generate PDF documents. I was getting this exception: "An error occurred during local report processing. Parameter is not valid."

Rats.

I did some searching and found other people were having the same problem, so I added a post saying I was having the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/b4a6eb43-0013-435f-9d11-00ee26a8d017

Currently they are looking into this problem and I am waiting for the results. Hence I had the time to write this BLOG entry. How lucky you are.

This was the last message I got from the Microsoft person:

Hi Steve,

Windows Azure Web Sites is a multi-tenant environment. For security issue, we limited some API calls. Unfortunately, some GDI APIS required by the PDF converting function are in this list.

We have noticed this issue, and still investigation the best way to go. At this moment, there is no news to share. Sorry about this.

Will keep you posted.

If I had to guess, I would say they are concerned with people uploading images and doing intensive graphics programming which would hog CPU time.  But that is just a guess.

Another problem.

While trying to resolve the ReportViewer problem, I tried to write a file to the PDF directory to see if there was a permissions problem with some test code:

String MyPath = MapPath(@"~\PDFs\Test.txt");

File.WriteAllText(MyPath, "Hello Azure");    

I got this message: Access to the path <my path> is denied.

After some research, I understood that since Azure is a cloud based platform, it can't allow web applications to save files to local directories. The application could be moved or replicated as scaling occurs and trying to manage local files would be problematic to say the least.

There are other options:

  1. Use the Azure APIs to get a path. That way the location of the storage is separated from the application. However, the web site is then tied to Azure and can't be moved to another hosting platform.
  2. Use the ApplicationData folder (not recommended).
  3. Write to BLOB storage.
  4. Or, I could try and stream the PDF output directly to the email and not save a file.

I'm not going to work on a final solution until the ReportViewer is fixed. I am just sharing some of the things you need to be aware of if you decide to use Azure. I got this information from here. (Note the author of the BLOG added a comment saying he has updated his entry).

Is my memory faulty?

While getting this BLOG ready, I tried to write the test file again. And it worked. My memory is incorrect, or much more likely, something changed on the server…perhaps while they are trying to get ReportViewer to work. (Anyway, that's my story and I'm sticking to it).

*Note: Since Visual Studio 2010 Express doesn't include a Report Editor, I downloaded and installed SQL Server Report Builder 2.0. It is a standalone Report Editor to replace the one not in Visual Studio 2010 Express.

I hope someone finds this useful.

Steve Wellens


Elegance, thy Name is jQuery

So, I'm browsing though some questions over on the Stack Overflow website and I found a good jQuery question just a few minutes old. Here is a link to it. It was a tough question; I knew that by answering it, I could learn new stuff and reinforce what I already knew: Reading is good, doing is better. Maybe I could help someone in the process too.

I cut and pasted the HTML from the question into my Visual Studio IDE and went back to Stack Overflow to reread the question. Dang, someone had already answered it! And it was a great answer. I never even had a chance to start analyzing the issue.

Now I know what a one-legged man feels like in an ass-kicking contest.

Nevertheless, since the question and answer were so interesting, I decided to dissect them and learn as much as possible.

The HTML consisted of some divs separated by h3 headings.  Note the elements are laid out sequentially with no programmatic grouping:

<h3 class="heading">Heading 1</h3>
<div>Content</div>
<div>More content</div>
<div>Even more content</div>
<h3 class="heading">Heading 2</h3>
<div>some content</div>
<div>some more content</div>
<h3 class="heading">Heading 3</h3>
<div>other content</div>
</form>
</body>
 
The requirement was to wrap a div around each h3 heading and the subsequent divs, grouping them into sections. Why? I don't know, I suppose if you screen-scraped some HTML from another site, you might want to reformat it before displaying it on your own. Anyways…

Here is the marvelously succinct posted answer:

$('.heading').each(function(){
$(this).nextUntil('.heading').andSelf().wrapAll('<div class="section">');
});

I was familiar with all the parts except for nextUntil and andSelf. But, I'll analyze the whole answer for completeness. I'll do this by rewriting the posted answer in a different style and adding a boat-load of comments:

function Test()
{
    // $Sections is a jQuery object and it will contain three elements
    var $Sections = $('.heading');

    // use each to iterate over each of the three elements
    $Sections.each(function ()
    {
        // $this is a jQuery object containing the current element
        // being iterated
        var $this = $(this);

        // nextUntil gets the following sibling elements until it reaches
        // an element with the CSS class 'heading'
        // andSelf adds in the source element (this) to the collection
        $this = $this.nextUntil('.heading').andSelf();

        // wrap the elements with a div
        $this.wrapAll('<div class="section">');
    });
}
 
The code here doesn't look nearly as concise and elegant as the original answer. However, unless you and your staff are jQuery masters, during development it really helps to work through algorithms step by step. You can step through this code in the debugger and examine the jQuery objects to make sure one step is working before proceeding on to the next. It's much easier to debug and troubleshoot when each logical coding step is a separate line of code.

Note: You may think the original code runs much faster than this version. However, the time difference is trivial: Not enough to worry about: Less than 1 millisecond (tested in IE and FF).

Note: You may want to jam everything into one line because it results in less traffic being sent to the client. That is true. However, most web servers now compress the HTML and JavaScript they send, and build tools can minify scripts by stripping out comments and white space (go to Bing or Google and view the source). This feature should be enabled on your server: Let the server compress your code, you don't need to do it by hand.

Free Career Advice: Creating maintainable code is Job One—Maximum Priority—The Prime Directive. If you find yourself suddenly transferred to customer support, it may be that the code you are writing is not as readable as it could be and not as readable as it should be. Moving on…

I created a CSS class to enhance the results:

.section
{
    background-color: yellow;
    border: 2px solid black;
    margin: 5px;
}
 
Here is the rendered output before:

 

…and after the jQuery code runs.

 

Pretty Cool! But, while playing with this code, the logic of nextUntil began to bother me: What happens in the last section? What stops elements from being collected since there are no more elements with the .heading class? The answer is nothing. In this case it stopped collecting only because it ran out of sibling elements at the end of the page. But what if there were additional HTML elements?

I added an anchor tag and another div to the HTML:

<h3 class="heading">Heading 1</h3>
<div>Content</div>
<div>More content</div>
<div>Even more content</div>
<h3 class="heading">Heading 2</h3>
<div>some content</div>
<div>some more content</div>
<h3 class="heading">Heading 3</h3>
<div>other content</div>
<a>this is a link</a>
<div>unrelated div</div>
</form>
</body>

The code as-is will include both the anchor and the unrelated div. This isn't what we want.

 

My first attempt to correct this used the filter parameter of the nextUntil function:

nextUntil('.heading', 'div') 
 
This will only collect div elements. But it merely skipped the anchor tag and it still collected the unrelated div:

 

The problem is we need a way to tell the nextUntil function when to stop. CSS selectors to the rescue!

nextUntil('.heading, a') 
 
This tells nextUntil to stop collecting elements when it gets to an element with the .heading class OR when it gets to an anchor tag. In this case it solved the problem. FYI: A comma in a CSS selector combines multiple selectors, so an element matching any one of them will match.
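To convince myself the stop condition works, the core of nextUntil can be sketched in plain JavaScript: walk forward through a flat list of siblings and collect items until one matches a stop test. The element objects and the collectUntil function here are hypothetical stand-ins, not jQuery internals:

```javascript
// Walk the siblings after startIndex, collecting until stopTest matches.
function collectUntil(siblings, startIndex, stopTest)
{
    var collected = [];
    for (var i = startIndex + 1; i < siblings.length; i++)
    {
        if (stopTest(siblings[i]))
            break;  // an element matched the stop selector
        collected.push(siblings[i]);
    }
    return collected;
}

// The siblings after the last heading in the modified HTML:
var siblings = [
    { tag: 'h3', cls: 'heading' },
    { tag: 'div' },   // other content
    { tag: 'a' },     // the added anchor
    { tag: 'div' }    // the unrelated div
];

// Stop on another .heading OR an anchor, like the selector '.heading, a'
var stop = function (el) { return el.cls === 'heading' || el.tag === 'a'; };

var result = collectUntil(siblings, 0, stop);
console.log(result.length);  // 1 -- only the content div is collected
```

Without the anchor in the stop test, the loop would run to the end of the array and sweep up the unrelated div, which is exactly what happened on the page.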

 

Bingo!

One final note: we could have broken the code down even more.

We could have replaced the andSelf function here:

$this = $this.nextUntil('.heading, a').andSelf();

With this:

// get all the following siblings, then add the current item
// (note: add returns a new jQuery object, so the result must be assigned)
$this = $this.nextUntil('.heading, a');
$this = $this.add(this);
 
But in this case, the andSelf function reads real nice. In my opinion.
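One trap with the add approach: like most jQuery traversal methods, add returns a new jQuery object rather than modifying the one it was called on, so the result must be assigned back. A hypothetical minimal collection (not jQuery itself) shows the pattern:

```javascript
// A tiny chainable collection that, like jQuery, returns a NEW object
// from add instead of mutating the original.
function Collection(items)
{
    this.items = items;
}

Collection.prototype.add = function (item)
{
    // build and return a new Collection; the original is untouched
    return new Collection(this.items.concat([item]));
};

var c1 = new Collection(['div1', 'div2']);
c1.add('h3');            // result discarded -- c1 is unchanged
var c2 = c1.add('h3');   // result kept

console.log(c1.items.length);  // 2
console.log(c2.items.length);  // 3
```

This is why `$this.add(this);` on its own line would silently do nothing.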

Here's a link to a jsFiddle if you want to play with it.

I hope someone finds this useful.

Steve Wellens

Cool CSS 4 Feature: pointer-events

CSS 4? Really? CSS 3 isn't fully released yet! What on earth is going on here?

It all started when I was fooling around with GIMP, the extremely powerful free graphics editor. I took a public domain image, re-sized it, gave it a transparent background and then added a perspective shadow. 

It is beautiful.

I wanted to see the image in "action" so I put it on a web page. But because of the way HTML elements are rendered, there was nothing behind my gorgeous image to demonstrate the transparency. I could have used a background color but instead I gave the image an absolute position and positioned it over a button:

                   

It looks 3-dimensional but you can't click on the button with the mouse. The transparent part of the image is like having a sheet of glass over the button. You can tab to it and use the space bar but it's really not usable as it is.

I recalled reading about the CSS pointer-events property. I assigned it to the image and voila! It worked…at least on FireFox. IE hasn't implemented this feature yet.
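The fix boils down to a single declaration. A sketch of the CSS (the class name here is hypothetical, pointer-events is the real property):

```css
/* Let mouse events pass through the image to the button underneath */
.overlay-image
{
    position: absolute;
    pointer-events: none;  /* clicks fall through to whatever is below */
}
```

With pointer-events set to none, the element is skipped during hit-testing, so the button under the transparent image receives the click.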

Here is a jsFiddle that shows it in action.  If your browser supports pointer-events, you will be able to click the button:

Here is a direct link to the jsFiddle (just in case):  http://jsfiddle.net/Steve_Wellens/6GbwK/

When I did a bit of research on the pointer-events property, I found it had been pushed into CSS4 because it had "issues". Here is a link that documents re-assigning this feature from CSS3 to CSS4: http://wiki.csswg.org/spec/css4-ui#pointer-events

Here's a table showing what browsers/versions currently support it: http://caniuse.com/pointer-events.

The big question is how do you rationalize this to your boss? Here is a list of business justifications:




I hope someone finds this useful.

Steve Wellens

Log4net: Log to a JavaScript Console

While lurking and skulking in the shadows of various technical .Net sites, I've noticed many developers discussing log4net in their blogs and posts; log4net is an extremely popular tool for logging .Net Applications. So, I decided to try it out. After the initial complexities of getting it up and running, I was suitably impressed. Could it be…logging is fun? Well, I don't know if I'd go that far…at least I'd never admit it.

One of the great features of log4net is how easy it is to route the logs to multiple outputs. Here is an incomplete list of outputs…or 'Appenders' in the log4net vernacular:

  • AdoNetAppender: logs to databases
  • ConsoleAppender: logs to a console window
  • EventLogAppender: logs to the Windows event log
  • FileAppender: logs to a file
  • RollingFileAppender: logs to multiple rolling files
  • SmtpAppender: logs to an email address
  • TraceAppender: logs to the .Net trace system
  • AspNetTraceAppender: logs to the Asp.Net page trace
  • MemoryAppender: logs to a memory buffer
  • Etc., etc., etc.

Wow.

I've been doing a lot of work with jQuery/JavaScript and it dawned on me that seeing server side logging strings in a JavaScript Console could be useful.

So I wrote a log4net JavaScript Console Appender. Strings logged at the server will show up in the browser's console window. Note: For IE, you need to have the "Developer Tools" window active.

I'm not going to describe how to setup log4net in an Asp.Net web site; there are many step-by-step tutorials around. But I'll give you some hints:

  • Each step must be followed or it will not work (duh).
  • Putting the log4net settings in a separate configuration file is more complicated than having the settings in web.config: Start with the settings in web.config.
  • The name in this line of code: LogManager.GetLogger("MyLogger");
    ...refers to this section in the configuration file: <logger name="MyLogger">

I built the Appender and a test Asp.Net site in .Net Framework 4.0.  Here's the jsConsoleAppender.cs file:

using System;
using System.Collections.Generic;
using System.Text;
using log4net;
using log4net.Core;
using log4net.Appender;
using log4net.Layout;
using System.Web;
using System.Web.UI;
 
namespace log4net.Appender
{
    // log4net JSConsoleAppender
    // Writes log strings to client's javascript console if available
 
    public class JSConsoleAppender : AppenderSkeleton
    {
        // each JavaScript emitted requires a unique id, this counter provides it
        private int m_IDCounter = 0;
 
        // what to do if no HttpContext is found
        private bool m_ExceptionOnNoHttpContext = true;
        public bool ExceptionOnNoHttpContext
        {
            get { return m_ExceptionOnNoHttpContext; }
            set { m_ExceptionOnNoHttpContext = value; }
        }
 
        // The meat of the Appender
        override protected void Append(LoggingEvent loggingEvent)
        {
            // optional test for HttpContext, set in config file.
            // default is true
            if (ExceptionOnNoHttpContext == true)
            {
                if (HttpContext.Current == null)
                {
                    ErrorHandler.Error("JSConsoleAppender: No HttpContext to write javascript to.");
                    return;
                }
            }
 
            // newlines mess up JavaScript...check for them in the pattern
            // (the cast yields null if a non-pattern layout is configured)
            PatternLayout Layout = this.Layout as PatternLayout;
 
            if (Layout != null && Layout.ConversionPattern.Contains("%newline"))
            {
                ErrorHandler.Error("JSConsoleAppender: Pattern may not contain %newline.");
                return;
            }
 
            // format the Log string
            String LogStr = this.RenderLoggingEvent(loggingEvent);
 
            // single quotes in the log message will mess up our JavaScript
            LogStr = LogStr.Replace("'", "\\'");
 
            // Check if console exists before writing to it
            String OutputScript = String.Format("if (window.console) console.log('{0}');", LogStr);
 
            // This sends the script to the bottom of the page
            // (CurrentHandler may not be a Page, e.g. for a non-page handler)
            Page page = HttpContext.Current.CurrentHandler as Page;
            if (page == null)
            {
                ErrorHandler.Error("JSConsoleAppender: Current handler is not a Page.");
                return;
            }
            page.ClientScript.RegisterStartupScript(page.GetType(), m_IDCounter++.ToString(), OutputScript, true);
        }
 
        // There is no default layout
        override protected bool RequiresLayout
        {
            get { return true; }
        }
    }
}
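The quote-escaping step in Append is easy to overlook. The appender builds a line of JavaScript as a string, so a single quote inside the log message would terminate the emitted string literal early and break the script. A JavaScript sketch of the same formatting step (mirroring the C# above, function name hypothetical):

```javascript
// Mirror of the C# formatting in Append: escape single quotes, then wrap
// the message in a console.log call guarded by a window.console check.
function buildOutputScript(logStr)
{
    // an unescaped single quote would end the string literal early
    var escaped = logStr.replace(/'/g, "\\'");
    return "if (window.console) console.log('" + escaped + "');";
}

var script = buildOutputScript("It's alive");
console.log(script);
// if (window.console) console.log('It\'s alive');
```

The window.console guard matters too: in older IE, console only exists while the Developer Tools window is open, so an unguarded console.log throws.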

From the Asp.Net test application, here's the web.config file. In the pattern for the JSConsoleAppender, I added the word SERVER: to differentiate the lines from client logging. Note there are two other Appenders in the log…just for fun!

<?xml version="1.0"?>
<configuration>
 
  <!--BEGIN log4net configuration-->
  <configSections >
    <section name="log4net"
             type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
  </configSections>
  <log4net>
 
    <appender name="LogFileAppender"
              type="log4net.Appender.FileAppender">
      <param  name="File" value="C:\Log4Net.log"/>
      <layout type="log4net.Layout.PatternLayout">
        <param name="ConversionPattern"  value="%d %-5p %c %m%n"/>
      </layout>
    </appender>
 
    <appender name="TraceAppender"
              type="log4net.Appender.TraceAppender">
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date  %-5level [%property{NDC}] - %message%newline" />
      </layout>
    </appender>
 
    <appender name="JSConsoleAppender"
              type="log4net.Appender.JSConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <!--Note JSConsoleAppender cannot have %newline-->
        <conversionPattern value="SERVER: %date %-5level %logger:  %message  SRC: %location" />
      </layout>
    </appender>
 
    <logger name="MyLogger">
      <level value="ALL" />
      <appender-ref ref="LogFileAppender"  />
      <appender-ref ref="TraceAppender"  />
      <appender-ref ref="JSConsoleAppender"  />
    </logger>
  </log4net>
  <!--END log4net configuration-->
 
  <system.web>
    <compilation debug="true"
                 targetFramework="4.0"/>
  </system.web>
</configuration>

Here's the default.aspx file from the test program, I added a bit of JavaScript and jQuery:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
 
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
 
<html xmlns="http://www.w3.org/1999/xhtml">
<head id="Head1" runat="server">
    <title>Log4Net Test</title>
    <script src="Scripts/jquery-1.7.js" type="text/javascript"></script>
    <script type="text/javascript">
 
        $(document).ready(DocReady);
 
        function DocReady()
        {
            if (window.console) 
                console.log("CLIENT: Doc Ready!");
        }
    </script>
</head>
<body>
    <form id="form1" runat="server">
    <div>
    <h5>Log4Net Test</h5>
        <asp:Button ID="ButtonLog" runat="server" Text="Do Some Logging!" 
            onclick="ButtonLog_Click" />   
    </div>
    </form>
</body>
</html>

Here's the default.cs file from the test program with some logging strings:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using log4net;
 
public partial class _Default : System.Web.UI.Page
{
    private static readonly ILog log = LogManager.GetLogger("MyLogger");
 
    protected void Page_Load(object sender, EventArgs e)
    {
        log.Info("Page_Load!");
    }
 
    protected void ButtonLog_Click(object sender, EventArgs e)
    {
        log.Info("Soft kitty, warm kitty");
        log.Warn("Little ball of fur");
        log.Error("Happy kitty, sleepy kitty");
        log.Fatal("Purr, purr, purr.");
    }
}

And finally, here are the three output logs:

From the JSConsoleAppender (IE Developer Tools), it includes a client log call with the server log calls:

From the TraceAppender, the Visual Studio Output window:

 

From the LogFileAppender (using Notepad):

You can download the solution/projects/code here.

I hope someone finds this useful.

Steve Wellens
