Ryan Ternier

Killer ASP.NET ninja coding monkeys do exist!
jQuery image upload & refresh using an ASHX file, part 2.

A while ago I wrote a blog post about how to hook up a jQuery upload script to a page to asynchronously upload files to an ASHX handler on a server.

For this second part I’ll show you an easy snippet of code that can be used to upload and/or view the image. The example below deals with signatures, though it can be easily and quickly modified for any type of image.

I’m storing the uploaded image in a Session variable. I do this because a user might have uploaded the wrong image, or might not want to keep what they uploaded. As well, once an image is on the server, you are going to want to let the user see it. If I wrote it straight to the database, I’d be writing it and then reading it right back just to display it – and deleting it again if it was wrong. Not really efficient.

Remember to implement the IRequiresSessionState interface on your ASHX class so you have access to the Session.

Let’s look at some code. I have to determine what I’m doing – uploading an image, or viewing one.


HttpPostedFile file = context.Request.Files[0];
string fileName = System.IO.Path.GetFileName(file.FileName);
string extension = System.IO.Path.GetExtension(file.FileName).ToLower();

if (file.ContentLength == 0)
{
    // No file posted.
    rMessage = "There was no data found in the file. Please ensure the file being uploaded is a valid image file.";
}
else
{
    byte[] bImage = new byte[file.ContentLength];
    file.InputStream.Read(bImage, 0, file.ContentLength);
    context.Session["Signature"] = bImage;
    rMessage = "Signature upload successful.";
}

All uploaded files are located in context.Request.Files. Since I’m only uploading one image, I know it will be at index 0 of the collection.

Once I have my file, I want to create a byte array (byte[]) of that file, and store that in memory.

And we’re done.


// Pull the signature back out of Session, where the upload stored it.
byte[] signature = (byte[])context.Session["Signature"];
string contentType = Microsoft.Win32.Registry.GetValue("HKEY_CLASSES_ROOT\\.PNG", "Content Type", "application/octet-stream").ToString();
Utility utility = new Utility();
double scale = utility.GetScale(signature, 450, 50);
signature = utility.ScaleByPercent(signature, (float)scale);
context.Response.AddHeader("Content-Disposition", "attachment; filename=UserSignature.png");
context.Response.AddHeader("Content-Length", signature.Length.ToString());
context.Response.AddHeader("Cache-Control", "no-cache, must-revalidate");
context.Response.Expires = -1;
context.Response.ContentType = contentType;
context.Response.BufferOutput = false;
context.Response.OutputStream.Write(signature, 0, signature.Length);

After I upload an image I change the SRC of my <img /> element so it will download the new image. I do this by appending a randomized alpha-numeric string to the URL so the browser will not cache the image.
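The cache-buster can be as simple as appending a short random hex string to the image URL (a small sketch – the function name and parameter name `d` are just illustrative):

```javascript
// Build an image URL with a random hex suffix so the browser
// treats every request as a new resource and skips its cache.
function cacheBustedUrl(baseUrl) {
    // (1 + Math.random()) guarantees a leading "1", so after
    // substring(1) we always get a 4-character hex string.
    var suffix = (((1 + Math.random()) * 0x10000) | 0).toString(16).substring(1);
    return baseUrl + (baseUrl.indexOf("?") > -1 ? "&" : "?") + "d=" + suffix;
}
```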

In this example I’m telling the browser it’s getting a PNG image. I’m also scaling the image to fit within 450 by 50 (code below).

Once I have a scaled image, I send it back down to the client.

Scaling an Image

An image might be uploaded that’s 3000x4000 or some other ugly size. No one wants to be sent an image that size on the web just for previewing… no one… So I scale it down.

public double GetScale(byte[] image, double width, double height)
{
    try
    {
        using (MemoryStream ms = new MemoryStream(image))
        using (System.Drawing.Image img = System.Drawing.Image.FromStream(ms))
        {
            // Scale factors needed to fit the image inside the requested bounds.
            double sX = width / img.Width;
            double sY = height / img.Height;

            // Use the smaller factor so both dimensions fit.
            return Math.Min(sX, sY);
        }
    }
    catch (Exception ex)
    {
        throw new Exception("Error getting scale", ex);
    }
}

I’m passing in the byte[] and the width / height that I want. After I get the scale needed, I scale the image:

public Image ScaleByPercent(Image image, float percent)
{
    try
    {
        Bitmap result = null;

        if (image != null)
        {
            int destWidth = (int)(image.Width * percent);
            int destHeight = (int)(image.Height * percent);

            Rectangle srcRec = new Rectangle(0, 0, image.Width, image.Height);
            Rectangle destRec = new Rectangle(0, 0, destWidth, destHeight);

            result = new Bitmap(destWidth, destHeight);
            result.SetResolution(image.HorizontalResolution, image.VerticalResolution);

            using (Graphics g = Graphics.FromImage(result))
            {
                g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                g.DrawImage(image, destRec, srcRec, GraphicsUnit.Pixel);
            }
        }

        return result;
    }
    catch (Exception ex)
    {
        throw new Exception("Error scaling image", ex);
    }
}

public byte[] ScaleByPercent(byte[] image, float percent)
{
    try
    {
        using (MemoryStream ms = new MemoryStream(image))
        using (System.Drawing.Image img = System.Drawing.Image.FromStream(ms))
        using (Image scaled = ScaleByPercent(img, percent))
        using (MemoryStream m = new MemoryStream())
        {
            scaled.Save(m, System.Drawing.Imaging.ImageFormat.Png);
            return m.ToArray();
        }
    }
    catch (Exception ex)
    {
        throw new Exception("Error scaling image", ex);
    }
}


And that’s pretty much it.

jQuery image upload & refresh using an ASHX file.

I recently finished a solution that uploads an image to my server, and then displays the uploaded image to the user. This is all done asynchronously, and it’s within an UpdatePanel to boot.

There are a good handful of solutions for doing this. One of the most popular is SWFUpload, but I didn’t want any Flash in my solution – some customers won’t have it, or it’s blocked by their agencies… yes, in the real world large agencies and corporations still use IE6, so HTML5… I’ll see you in a few years… back on track.

For the image upload I used a simple jQuery script written by Andris Valums. Out of all the scripts I looked at, this was the most straightforward and easy to implement, with no inner tweaks that could cause issues in the future.

function initSignature(userID) {
    refreshSignature(userID);
    var button = $('#dUpload'), interval;
    new AjaxUpload("#dUpload", {
        action: "../../handlers/ProcessSignature.ashx",
        name: "Signature",
        data: { "UserID": userID, "Type": "Upload" },
        onSubmit: function (file, ext) {
            if (ext && /^(bmp|jpg|png|jpeg|gif)$/.test(ext)) {
                // Change button text when the user selects a file.
                button.text("Uploading");
                this.disable(); // Don't upload any more files.
                interval = window.setInterval(function () {
                    if (button.text().length < 13)
                        button.text(button.text() + ".");
                    else
                        button.text("Uploading");
                    $("#dMessage").text("Uploading " + file);
                }, 200);
            }
            else {
                // Extension is not allowed.
                $("#dMessage").text("Error: only images are allowed");
                return false;
            }
        },
        onComplete: function (file, response) {
            button.text("Upload");
            // Although the plugin emulates the hover effect automatically,
            // it doesn't work while the button is disabled.
            button.removeClass("hover");
            window.clearInterval(interval);
            $("#dMessage").text(response);
            if (response.indexOf("successful") > -1) {
                refreshSignature(userID);
            }
            // Re-enable the upload button.
            this.enable();
        }
    });
}

Since this is inside an UpdatePanel, I wrapped the initialization script in a function which I call from the ScriptManager when the UpdatePanel is updated.

Once the image is uploaded I refresh it by clearing out the containing div and recreating the image element. This is the tricky one: even though the image element was deleted and recreated, the SRC was the same, so browsers were caching the image. I decided to append an extra random value to the URL so the browser is forced to fetch the newest image.

function refreshSignature(userID) {
    $("#signature").hide();
    var sPath = "../../handlers/ProcessSignature.ashx?type=View&UserID=" + userID
        + "&d=" + (((1 + Math.random()) * 0x10000) | 0).toString(16).substring(1);
    $("#signature").attr({ src: sPath });
    setTimeout(function () { $("#signature").imageScale({ maxWidth: 450, maxHeight: 50 }); }, 1000);
    $("#signature").fadeOut(100).delay(2000).fadeIn(1000);
}

At the end I scale the image using a very simple jQuery image-scaling plug-in. Below are the JS script files I used.

Hope this helps anyone looking to do something similar.

Visio 2010 forward engineer add-in for office 2010

I have been scouring the internet for ages trying to see if there was a usable add-in for Visio 2010 that could export SQL scripts. Microsoft stopped including that functionality after Visio 2003 – which is a huge shame.

Today I found an open source project from Alberto Ferrari. It’s an add-in for Visio 2010 that allows you to generate SQL Scripts from your DB diagram. It’s still in beta, and the source is available.


Check it out here: http://sqlblog.com/blogs/alberto_ferrari/archive/2010/04/16/visio-forward-engineer-addin-for-office-2010.aspx

This saves me from having to do all my diagramming in SQL Server / VS 2010, and brings back much-needed functionality that had been lost.

JavaScript – Content delivery networks (CDNs) can bite you in the butt.

As much as I love the new CDNs that Google, Microsoft and a few others have publicly released, there are some strong gotchas that could come up and bite you in the ass if you’re not careful. But before we jump into that, a quick definition for those that are not 100% sure what a CDN is (besides Canadian).


Content Delivery Network.

A way of distributing your static content across various servers in different physical locations. Because this static content is stored on many servers around the world, whenever a user requests it, they are served from the closest server to their location.

Already you can probably see the immediate bonuses to a system like this:

  1. Lower bandwidth
    1. Even small script files downloaded thousands of times will start to take a noticeable bite out of your bandwidth meter.
  2. Fewer connections/hits to your web server, which gives better latency
  3. If you manage many servers, you don’t need to manually update each server with scripts.
  4. A user normally downloads a script once for each website they visit, so if they are redirected across many domains/sub-domains, they might download many copies of the same file. With a CDN the file has one URL everywhere, so the browser can serve repeat requests straight from its cache.


Those are just a handful of the many bonuses a CDN will give you, and for the average website a CDN is a great choice. Check out the following CDN links for their solutions:

The Gotcha

There is always a catch. Here are some issues I found with using CDNs that will hopefully help you make your decision.


SSL

If you are running a website behind SSL, make sure that when you reference your CDN data you use https:// vs. http://. If you forget this, users will get a very nice message telling them that their secure connection is trying to access unsecure data. For a developer this is fairly simple, but general users will get a bit anxious when seeing it.
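One simple guard (a sketch of mine, not from any CDN’s docs – the function and parameter names are illustrative) is to derive the CDN scheme from the page’s own protocol, so a secure page never pulls insecure content:

```javascript
// Pick the CDN scheme based on how the current page was served,
// so an https page never references http CDN content.
// In a browser, pageProtocol would come from window.location.protocol.
function cdnUrl(pageProtocol, host, path) {
    var scheme = (pageProtocol === "https:") ? "https:" : "http:";
    return scheme + "//" + host + path;
}
```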

Trusted Sites

Internet Explorer has this really nifty feature that allows users to specify which sites they trust, and in some locked-down configurations IE only allows trusted sites to be viewed. No problem – they set your website as trusted. But what about your CDN? If a user trusts your website but not the CDN, they will not download those static files. This has the potential to totally break your web site.

Pedantic Network Admins

This alone is sometimes the killer of projects. Always be careful when you are going to use a CDN for a professional project. If a network/security admin sees that you’re referencing an outside source, or that a call from a website might hit an outside domain… panties will be bunched, emails will be spewed out, and well, no one wants that.

We call them software factories

When I first started designing and architecting online applications, we decided to move custom settings from a database to an XML file to give our customers more versatile settings. With hundreds of agencies using our system, we needed the ability to easily and quickly change settings on a customer’s website without making core-code changes; XML allowed this. However, a year after we released, we found issues with this setup.

The setup itself was solid; however, we didn’t expect the needs of our customers to be so… unique.

Enter the 80/20 rule.

Where once a button click would, say, print a circle on the screen, we soon found that 20% of our customers wanted an oval – so we bit the bullet, made a few code changes to accept more settings, and voila: a customer can have either an oval or a circle.

Oh, if only it were that easy! Soon every customer wanted something different. A triangle, a square, a 3D sphere, a dishwasher, and some wanted a few ninjas! Come on! Ninjas are rare and priceless, we can’t just give thos… sorry, I digressed.

So what to do? Do we throw in 100 conditional statements? Do we build a system intelligent enough to be almost self-aware, knowing what our customers want? (Any programmer knows we need to know what our customers need/want/deserve before they actually know they need/want/deserve it.)

So where did we go? The solution was custom libraries for each customer, using reflection at run time to load those libraries up.

You see, the customer who wants a triangle doesn’t care about the sphere, the circle, or the ninja. They want a triangle, pointy edges and all. The same went for other customers, so the answer was clear.

With this implementation we rarely have to make core-code changes. Instead, we derive a new library from a base class, add the functionality needed for that customer, and release it. Plain and simple.

A sample of this will be uploaded shortly.
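The shape of the idea can be sketched in a few lines of JavaScript (all names here are illustrative; the real implementation below is C# and uses reflection to load per-customer assemblies):

```javascript
// A registry of per-customer "factories". Each entry overrides only
// what that customer cares about; everything else comes from the base.
var baseShape = { getShapeName: function () { return "circle"; } };

var factories = {
    "Triangle": { getShapeName: function () { return "triangle"; } },
    "Square":   { getShapeName: function () { return "square"; } }
};

// Load a customer's shape, falling back to the base behaviour.
function createShape(customer) {
    return factories[customer] || baseShape;
}
```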

The Model 



This project holds the 2 classes I’ll be working with. Each class (factory) has a static method called “Create” which creates the correct object based on the passed-in path.




There are two  classes in this project: Greeting and Shape.

These two classes will be the ones overridden by our “factories” to produce custom actions and events.


 BicNet.Projects.FactoryExample.Triangle / Square


This is derived from BicNet.Projects.FactoryExample.Base. Greeting and Shape both inherit from Base.Greeting and Base.Shape. This allows me to cast a BicNet.Projects.FactoryExample.Triangle.Shape object as a BicNet.Projects.FactoryExample.Base.Shape while retaining all the functionality of Triangle.Shape (Gotta love managed code).


I’m not going to show BicNet.Projects.FactoryExample.Square because it’s the same as Triangle.


So how does this all work? Let’s look at the FactoryExample.Factory project.

Some Code

namespace BicNet.Projects.FactoryExample.Factory
{
    public class FGreeting
    {
        public static BicNet.Projects.FactoryExample.Base.Greeting Create(string path)
        {
            string className = path + ".Greeting";
            // Must use System.Reflection.Assembly - "Assembly" alone will not
            // resolve here. [Assembly] can also be used instead.
            return (BicNet.Projects.FactoryExample.Base.Greeting)System.Reflection.Assembly.Load(path).CreateInstance(className);
        }
    }
}

What’s being passed into the Create(..) method would be, for example, “BicNet.Projects.FactoryExample.Square”. The method looks for that assembly, creates the “Greeting” object from that library, and returns it, giving me full control over that object.

protected void btnLoadFactory_Click(object sender, EventArgs e)
{
    BicNet.Projects.FactoryExample.Base.Shape shape = BicNet.Projects.FactoryExample.Factory.FShape.Create(ddlFactories.SelectedItem.Text);
    BicNet.Projects.FactoryExample.Base.Greeting greeting = BicNet.Projects.FactoryExample.Factory.FGreeting.Create(ddlFactories.SelectedItem.Text);

    lblGetShapeName.Text = shape.GetShapeName();
    lblGetShape.Text = shape.GetCustomText();
    lblGetGreeting.Text = greeting.GetGreeting();
}

What’s the Point?

Yes, this is very simple with shapes and such, however what about real world scenarios? One of the main questions I’ve been asked about with this model is:

If you just did market research and you knew what your customers wanted, you wouldn’t need to make a system like this. Why spend the extra time and effort to build a system that’s capable of doing this when you could just do what customers wanted in the first place?

Anyone in the software industry can tell you customers don’t always know what they want, and customers’ needs change every day / week / year (hour?). And for those that deal with $$, maintenance on any application is usually the big killer for time (effort) and money.

Another argument might be:

You have 500 customers working on an item-tracking system (tasks, to-dos, etc.).

  1. 300 of those customers want an email sent to themselves and to the people assigned to the task whenever they submit an item.
  2. 100 of those customers want 2 sub-tasks created for each task, with both sub-tasks automatically assigned to “Mary Smith” in the system.
  3. 50 of the remaining 100 customers want a PDF created as soon as the task is created, saved as a snapshot of the original task.
  4. 25 of them want any completed task to have a PDF snapshot taken and saved in a specific directory so their FTP program can pick it up.
  5. The final 25 customers don’t know what they want yet, but we’re safe: as soon as they come up with something, we can plug it in without changing base code – we just change their factory.

That’s it?

Well, pretty much. Yes, you could get the above done with XML – if you knew what you were facing before you started. However, that’s not the problem we face in our industry. We are always changing and always improving. Having the ability to quickly change something for a customer, or a set of customers, without interrupting the flow of everything else is what we want. Maintenance is always the big cost item with web applications because of the ongoing changes.

Microsoft Expressions – Web Super Preview

So here I am trying to figure out how to test my application in IE6, IE7, IE8 and the rest of the plethora of browsers. Aside from the fact that my customers still use IE6… I couldn’t find a computer / RDP / VM machine that had IE6 on it. Through my searching, I found this beauty:


This gives the ability to have multiple versions of IE looking at the same web page. Now, it’s not perfect, but it’s a start (it’s still in beta). You can see in the picture below that the textbox I’ve selected on the left side (IE8) selects the same textbox on the right side; however, the position is off.


My thoughts:

  • It’s good if you’re looking at a plain page, but there is no user interaction. You can’t navigate, you can’t type, you can’t do much except watch and look at positioning.
  • It does help with CSS issues… like putting style=”display:none;” on a textbox, which works in IE7+ but not in IE6.
  • The download for the install is 236 MB, yet the program you’re installing is only 36 MB after install… what else is in there?


So if you’re looking for a page-by-page comparison and don’t need to test AJAX or anything else, this works. Personally, I’m getting someone to build me a Win98 VM and a Windows XP VM so I can have those running to test my apps.

Hosts file auto updated to IP v6…

We had an issue today at work. A few developer machines couldn’t debug any ASP.NET 2.0 web application. After an hour of looking, we found out that some of our hosts files had the entry:

::0       localhost

instead of:

127.0.0.1       localhost

If you are getting issues trying to debug your ASP.NET website (like the following):



Check your hosts file to make sure it’s correct.

Application logging – yes it is important

I’ve spent most of my professional career working on enterprise applications (mostly web based). Sometimes you release your code and everything goes smoothly (this is where praise is warranted but rarely given). However, there are times when code has been in place for many days, weeks, or months without any issues and all of a sudden all hell breaks loose and it stops (this is where harsh comments are never wanted but always received).

So, what do we do when that happens? If we cannot see what’s going on, we’re pretty much hosed. Sure, we could have some knee-jerk reactions and make code changes to try trapping for situations, but come on… knee-jerk reactions are bad. Say it with me: “BAD”.

So this is where application logging comes in. Below are 2 lists. The first contains things you just should not do – though everyone has probably done them. We learn through mistakes (our own, others’, and school’s… yea), so the list below is mostly mistakes I’ve made in my career; hopefully some can learn from them.

What Not To do.

There are many ways to “log” what happens in an application. What I want to do here is list some of the ones I’ve encountered that really don’t give much info.

Rely on just Exception Logs

There is something to be said for logging exceptions; however, relying on just exception logs is a mistake. Exceptions are raised by code errors, system faults, or when code manually throws them. But there are cases where something is wrong with your system and no exception is thrown at all.

Write out every action to the Hard Disk

I love this one. I’ve seen a system log over 100 MB an HOUR to the hard disk based on actions happening on it. Sure, it could be useful… if you had a hard disk with a crap-load of room on it. But come on, you let it run for a month unchecked. 100 * 24… um * 30… dot the i… that’s around 72 GB of logging. Yes, some of it “might” be useful, but who would want the daunting task of looking through that mess when we could be doing real work?

Use a single log file

Whenever you log information, split it up into files – one file per day, or one per hour. If you have a massive log file (see above) and you’re reading it every time you log something, you’ll overload your system in no time on reads/writes. What I do is name files like:

  • BicNet.Enfora_LOG_19_FEB_2008_0800.log
  • BicNet.Enfora_LOG_19_FEB_2008_0900.log
  • BicNet.BlueTree_LOG_19_FEB_2008_0800.log
  • BicNet.AI.NinjaRevengePlan2000_LOG_59_SEPTEBURARY_2108_0800.log

If an error occurs at 8:30AM with some customers sending in GPS Data through an Enfora Modem, it’s pretty self-explanatory where I need to look.
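The naming scheme is just component name plus a timestamp truncated to the hour. Sketched in JavaScript for brevity (the real code is server-side C#; the function name is mine):

```javascript
// Build an hourly log file name like "BicNet.Enfora_LOG_19_FEB_2008_0800.log".
var MONTHS = ["JAN", "FEB", "MAR", "APR", "MAY", "JUN",
              "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"];

function logFileName(component, date) {
    var pad = function (n) { return (n < 10 ? "0" : "") + n; };
    return component + "_LOG_" + pad(date.getDate()) + "_"
        + MONTHS[date.getMonth()] + "_" + date.getFullYear()
        + "_" + pad(date.getHours()) + "00.log";
}
```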

Do nothing

YES! Awesome!… die… well actually this is better than the next one.

Log useless information

Have you ever seen a log file that looks like:

>Awesome function just called. Woo I rock. Go Canucks go!

>Another Awesome function was just called. It’s now 12:34PM. GO CANADA GO!


>… Stupid user did an error. Message is “Object reference not set to an instance of an object”. COME ON!

>ANOTHER ERROR!? Gah button1 was clicked. Check the code to see why I was written. DO IT! DO IT NOW!

If you looked through some of my college programs you might find some error logs like this.


What to do

Honestly, don’t do any of the above and you’re better off already. Below is an example of the logging system I have in place.

Don’t bloat your logs

When doing logs I rarely use XML. Wha!? NO XML!? HERESY! Yes, it formats nicely, and yes, it’s XML. However, there’s an overhead to using it, so only use it if you need to. If you can get by with writing logs in plain text, do it. XML can be useful when you have a complex object to write out, or a lot of structured data; for simple logging, use plain text.

Log raw information coming into the system

Whenever you have a system that needs its information logged, log the incoming data. If you don’t, it could be transformed or corrupted by the time you do log it. If you log it as it comes in, you can later run tests against that function with the exact data that caused an error. For web services, there are ways of tapping into the raw SOAP data (using ASP.NET) before it hits your web service, which allows you to log/trace that information to disk before your web service gets the data – very useful.


Example – UDP / TCP Listener

I have a service running on our servers that listens on TCP and UDP Ports. Whenever a device connects to our servers, my listener service picks it up and processes the information.  Every time I get a connection on our server, I log who connected, the message received, and the time (I don’t write anything out at this time). Every time I send something to a user, I log the IP, the message and the time. Every 10 minutes I’ll stream the output to one file, and the input to another file. If an exception is raised, I write out all the information, as well as the exception to an exception log file. Every hour I create a new file so they’re never too large.

I don’t log every function, I don’t log simple operations, I don’t need to because if they ever throw an exception, I already have it. If there is bad data coming in which is compromising the integrity of my system, I have it already logged with the date/time. This gives us all the information we need to perform tests to see what has gone wrong, and it catches exceptions. It also allows me to quickly count the # of connections coming in, to debug connectivity issues.
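The buffering idea – accumulate in memory, flush on a timer – can be sketched like this (illustrative JavaScript with made-up names; the real service is C# and writes the batches to disk):

```javascript
// Accumulate log lines in memory and hand them off in batches,
// instead of hitting the disk on every single message.
function BufferedLog(flushFn) {
    var buffer = [];
    this.log = function (line) { buffer.push(line); };
    // In the real service this is called every 10 minutes by a timer.
    this.flush = function () {
        if (buffer.length === 0) return 0;
        var count = buffer.length;
        flushFn(buffer.join("\n"));
        buffer = [];
        return count;
    };
}
```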

Code Example – Generic Log Entry Method

Here’s a sample “Logging” method I use to fill my StringBuilder with the data I need to log out:

private void LogEntry(string message, string customer, int reportID)
{
    string sReportID = null;
    char c = ' ';

    System.Diagnostics.Debug.WriteLine(message);

    if (_sbLog == null)
        _sbLog = new StringBuilder();

    if (reportID > -1)
        sReportID = reportID.ToString();

    // Truncate the customer and report ID so the columns line up.
    if (customer != null && customer.Length > 25)
        customer = customer.Substring(0, 25);

    if (sReportID != null && sReportID.Length > 9)
        sReportID = "XX" + sReportID.Substring(2, 7);

    if (message.ToLower().IndexOf("exception") > -1)
    {
        _sbLog.AppendLine(
            DateTime.Now
            + " - "
            + (customer == null ? new string(c, 25) : customer + new string(c, 25 - customer.Length))
            + " - "
            + (sReportID == null ? new string(c, 9) : sReportID + new string(c, 9 - sReportID.Length))
            + " - "
            + "Exception occurred.");
        _sbLog.AppendLine(message);
    }
    else
    {
        _sbLog.AppendLine(
            DateTime.Now
            + " - "
            + (customer == null ? new string(c, 25) : customer + new string(c, 25 - customer.Length))
            + " - "
            + (sReportID == null ? new string(c, 9) : sReportID + new string(c, 9 - sReportID.Length))
            + " - "
            + message);
    }
}

From here, you just have to tell it to write this out whenever you need.

Hiring more developers doesn’t always speed things up.

Who’s heard of the term “too many cooks in the kitchen”? It’s a fundamental problem in software development.

I was talking to someone a while ago about this problem: they needed to get Task A done in a specific time frame, and his boss wanted him to just hire more developers.

So I’ll give the same answer to you that we talked about:

It takes 9 months for a baby to be born. If you add 9 women into the fray, would the baby be born in 1 month?

I’ll stop there so minds don’t wander… yea that means you!

Find All Databases with…?

I work with a lot of databases. Currently we have over 500 databases on one SQL box. Whenever I have to go in and modify a specific XML section, I need a quick way to find any database that contains that specific XML text. I came up with a quick script that goes through every database and checks whether the table I want contains my XML text.

Here’s a script that has done well for me:

Exec sp_MSforeachdb @command1= 'USE ? IF ''?'' <> ''master'' AND ''?'' <> ''model'' AND ''?'' <> ''msdb'' AND ''?'' <> ''tempdb''
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(''[dbo].[tblProcessConfigurations]'') AND type in (''U''))
SELECT * FROM dbo.tblProcessConfigurations'


What this will do is print out the contents of tblProcessConfigurations for every database on my server. However, I had an issue – which databases were the results from?

I added the simple function db_name() to my query and voila:


Exec sp_MSforeachdb @command1= 'USE ? IF ''?'' <> ''master'' AND ''?'' <> ''model'' AND ''?'' <> ''msdb'' AND ''?'' <> ''tempdb''
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(''[dbo].[tblProcessConfigurations]'') AND type in (''U''))
BEGIN
print db_name()
SELECT * FROM dbo.tblProcessConfigurations
END'


I can now document every database that needs to be updated, and what the updates need to be.

Posted: Feb 18 2009, 08:07 AM by Ryan Ternier | with no comments