Gunnar Peipman's ASP.NET blog

ASP.NET, C#, SharePoint, SQL Server and general software development topics.


January 2013 - Posts

Using jQuery webcam plugin with ASP.NET MVC

I had to use webcam images in one of the applications I'm building, and while playing with different components I found a free and easy-to-use Flash and jQuery based webcam component called jQuery webcam plugin. In this posting I will show you how to use the jQuery webcam plugin in an ASP.NET MVC application to save a captured image to the server hard disk.

Preparation

Here are some steps to take before writing any ASP.NET code:

  1. Create a new ASP.NET MVC application.
  2. Download the jQuery webcam plugin and extract it.
  3. Copy the jquery.webcam.js, jscam.swf and jscam_canvas_only.swf files to the Scripts folder of the web application.

Now we are ready to go.

Create webcam page

We start by creating the default page of the web application. I'm using the Index view of the Home controller.


@{
    ViewBag.Title = "Index";
}
@section scripts
{
    <script src="@Url.Content("~/Scripts/jquery.webcam.js")"></script>
    <script>
        $("#Camera").webcam({
            width: 320,
            height: 240,
            mode: "save",
            swffile: "@Url.Content("~/Scripts/jscam.swf")",
            onTick: function () { },
            onSave: function () { },
            onCapture: function () {
                webcam.save("@Url.Content("~/Home/Capture")/");
            },
            debug: function () { },
            onLoad: function () { }
        });
    </script>
}
<h2>Index</h2>
<input type="button" value="Shoot!" onclick="webcam.capture();" />
<div id="Camera"></div>

We initialize the webcam plugin in the additional scripts block offered by the layout view. To send the webcam capture to the server we have to use the plugin in save mode. The onCapture event is where we actually give the command to send the captured image to the server. The button labeled "Shoot!" is the one we click at the right moment.

Saving image to server hard disk

Now let's save the captured image to the server hard disk. We add a new action called Capture to the Home controller. This action reads the image from the request input stream, converts it from a hex dump to a byte array and then saves the result to disk.

Credits for the String_To_Bytes2() method, which I quickly borrowed, go to Kenneth Scott and his blog posting Convert Hex String to Byte Array and Vice-Versa.


public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    public void Capture()
    {
        var stream = Request.InputStream;
        string dump;

        using (var reader = new StreamReader(stream))
            dump = reader.ReadToEnd();

        var path = Server.MapPath("~/test.jpg");
        System.IO.File.WriteAllBytes(path, String_To_Bytes2(dump));
    }

    private byte[] String_To_Bytes2(string strInput)
    {
        int numBytes = (strInput.Length) / 2;
        byte[] bytes = new byte[numBytes];

        for (int x = 0; x < numBytes; ++x)
        {
            bytes[x] = Convert.ToByte(strInput.Substring(x * 2, 2), 16);
        }

        return bytes;
    }
}

Before running the code, make sure the application can write files to disk. Otherwise you will get nasty access denied errors.
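If you want to avoid overwriting the same test.jpg on every capture, a slightly more defensive variant of the Capture action can create the target folder and use timestamp-based file names. This is just a sketch of my own; the Captures folder name is an example, not something the plugin requires:

public void Capture()
{
    string dump;

    using (var reader = new StreamReader(Request.InputStream))
        dump = reader.ReadToEnd();

    // Make sure the target folder exists before writing to it.
    var folder = Server.MapPath("~/Captures");
    if (!System.IO.Directory.Exists(folder))
        System.IO.Directory.CreateDirectory(folder);

    // Timestamp-based names keep captures from overwriting each other.
    var fileName = string.Format("capture-{0:yyyyMMdd-HHmmss}.jpg", DateTime.Now);
    var path = System.IO.Path.Combine(folder, fileName);

    System.IO.File.WriteAllBytes(path, String_To_Bytes2(dump));
}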

Testing application

Now let’s run the application and see what happens.

jQuery webcam plugin: Flash needs permissions to use webcam

Whoops… we have to give Flash permission to use the webcam and microphone before we can use the webcam. Okay, it's for our security.

After clicking Allow I was able to see the picture that was previously hidden behind the security message.

jQuery webcam plugin: Me in webcam

This tired hacker in a dark room is actually me, so it seems like the jQuery webcam plugin works okay :)

Conclusion

jQuery webcam plugin is a simple and easy-to-use plugin that brings basic webcam functionality to your web application. It was pretty easy to get it working and to get an image from the webcam to the server hard disk. On the ASP.NET side we only needed a simple conversion to turn the hex dump sent by the webcam plugin into a byte array before saving it as a JPG file.

How to make NLog create separate log per service thread

NLog is a popular logging component for .NET applications. I am using it in one project where I built a multi-threaded server that runs similar threads with different configurations. As those threads are independent services, I needed a way to configure NLog to create a different log file for each thread. Here is my solution.

Server instances structure in file system

When a new service starts in its thread, I configure NLog in code to create the log file in the correct folder with the correct name and layout. The image on the right shows how the server keeps instance data on disk. Each instance has its own folder, and logs go to a subfolder because there will be a new log file for every date.


// One file target per service instance; ${shortdate} creates a new file for each date.
var target = new FileTarget();
target.Name = InstanceName;
target.FileName = LogsFolder + "/${shortdate}.log";
target.Layout = "${date:format=HH\\:mm\\:ss} ${logger} ${message}";

var config = new LoggingConfiguration();
config.AddTarget(this.Name, target);

// Send everything at Info level and above to the instance's file target.
var rule = new LoggingRule("*", LogLevel.Info, target);
config.LoggingRules.Add(rule);

LogManager.Configuration = config;

_logger = LogManager.GetLogger(InstanceName);
_logger.Info("Logger is initialized");


This way I got all logging configured at code level, and as there is only a vanishingly small probability that the logging requirements will ever change, I am fairly sure that this seemingly temporary solution will live with the project for a long time.
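One thing to keep in mind: LogManager.Configuration is global, so assigning a fresh configuration for every new instance replaces the targets that earlier instances were using. A way around this (a sketch of my own, not the code from my project) is to reuse one shared configuration and add a rule per instance name, so each instance logger writes only to its own file:

// Sketch: add a per-instance target and rule to the shared configuration.
// InstanceName and LogsFolder are assumed to come from the service instance.
var config = LogManager.Configuration ?? new LoggingConfiguration();

var target = new FileTarget();
target.Name = InstanceName;
target.FileName = LogsFolder + "/${shortdate}.log";
target.Layout = "${date:format=HH\\:mm\\:ss} ${logger} ${message}";
config.AddTarget(InstanceName, target);

// The rule matches only loggers whose name equals the instance name.
config.LoggingRules.Add(new LoggingRule(InstanceName, LogLevel.Info, target));

LogManager.Configuration = config;
LogManager.ReconfigExistingLoggers();

var logger = LogManager.GetLogger(InstanceName);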

New ASP.NET Single Page Application Template

I downloaded the latest version of ASP.NET and Web Tools 2012.2 and have been digging around in the Single Page Application project template. It has changed a lot compared to the last beta (it was not part of the stable version) and it is interesting to see what's going on. I also found some good resources about SPAs to share with you.

New default project

There is a new default project for SPAs now and it looks different from the previous one. There is support for multiple to-do lists and, of course, a new design.

ASP.NET SPA sample application

When we take a look at the project structure we still see almost the same set of files as before, but there are some changes.

Upshot is gone

The big surprise to me is that I cannot find Upshot anymore. It was there before, providing a somewhat complex but still very powerful data layer with local caching, connection detection and so on. But now it's gone. I had actually hoped to see some progress on the Upshot library and support for a local cache based data source.

Now it seems to be up to developers to handle the data caching logic, which is not a bad option either, as the Upshot code was, at least for me, a little hard to follow and debug. Now there is pure jQuery and Knockout, and less JavaScript code than before.

Getting started

When searching for more information I found a very good series of postings by John Papa: Building Single Page Apps with Knockout, jQuery, and Web API – Part 1 – The Story Begins. Take a look at the postings and the examples the series provides. Although there are ten postings, they are not too long and they are very well focused. I also suggest reading John Papa's blog post Inside the ASP.NET Single Page Apps Template Beta, which explains how the new SPA template works.

Conclusion

As Upshot is gone and there is now enough reading about SPAs in ASP.NET, I think I will come up with some new ideas once I have gone through all the materials. It's good to see the progress and I'm happy that the SPA template is back again. Hopefully it is now a simpler beast to manage than it was during the Upshot days.

Using Visual Studio database projects in real life

Visual Studio database projects are a good way to support software development. I have successfully used database projects for years and I think it's time to share my experiences with the wider developer audience. In this posting I will show you how to use database projects effectively, so that developers work with an up-to-date schema and test data all the time.

Standardizing development environments

To make things easier I always use standardized development environments. It also makes life easier for developers, because the problems that appear in these environments are similar, and after some time there are known solutions for all environment-specific problems.

One important aspect of standardization is virtualization. We (or at least those of us with a stronger survival instinct or more experience) use virtual machines for development because they are easy to restore, and when a VM crashes it doesn't affect the host system where office and other work-supporting software is usually running. Plus, you can increase or decrease the amount of resources a VM uses.

For databases we usually agree on a couple of things to make development easier:

  • developers use a local SQL Server (usually Developer Edition),
  • database names are the same on all development machines,
  • credentials to access the databases are the same on all development machines.

Yes, this is not always possible and there are exceptional cases, but the points given here work for most projects.

Initializing database project

The team member who is responsible for developing the database is also the one who will manage the database project in the Visual Studio solution. Writing database objects in Visual Studio and then deploying the database to the local machine just to test changes is nonsense. Management Studio and other WYSIWYG database management tools are way more productive, and databases are usually built using these tools. I don't want to force people into more manual work, and I'm happy when they are moving forward as fast as possible.

Suppose the database guy has already started building the database. There are some tables and maybe views, maybe even some stored procedures. Let's also suppose an empty database project has been added to the solution.

1. Set target database platform

Set target database platform

Here Windows Azure SQL Database is selected as the target, so Visual Studio will check that modifications made to the schema project don't conflict with SQL Azure.

2. Create schema comparison

To get the changes made to the database into the schema project, we create a new database schema comparison. Schema comparison allows us to compare two different schemas. Right-click on the database project and select Schema Compare…

Create schema comparison

An empty schema compare window is opened.

Empty schema compare window

Now you have to specify the source and target schemas:

  • the source schema will be your local database (you can create a new connection and use Windows authentication here, because you are not running the schema comparison under an application pool or other limited account),
  • the target schema will be the database schema project in your solution.

Here are my sample settings for the source and target dialogs:

Source and target schema

When the settings are done, click the Compare button in the schema compare window.

Schema compare window

Now you can see what objects are in the database and what action will be taken for each object on the target. Unless I have a good reason, I don't add roles and user accounts to the schema project, as their deployment is sometimes problematic. Take a look at the object definitions window – you can see the diff between source and target there.

3. Update schema project

Schema project with tables

Now click the Update button to get the database objects into your schema project, and save the schema comparison to a file in the database project folder. I usually name this file LocalDbToSchema.scmp.

Under your schema project you should now see some new folders and database objects. In the sample image on the right you can see some database tables that were imported from the database.

Now include the saved schema compare file in your database project. This way the comparison is always available and you don't have to configure it manually again and again when you want to compare the development database with the schema project.

This is the schema compare used mostly by the database developer. Of course, after updating the schema project you have to build it to check that it contains no errors.

Adding schema project to local database comparison

Now that you have a way to get schema changes from the database into the schema project, you are ready to create another comparison, mostly used by developers who don't develop the database. They need to compare their local databases against the schema project and apply changes or recreate their databases.

The steps are almost the same as before:

  1. Right-click on the database project and select Schema Compare…
  2. For the source schema, select the schema project in your solution
  3. For the target schema, select your local database
  4. Save the comparison as a file in the database project folder (SchemaToLocalDb.scmp)
  5. Include the schema compare file in the database project

That's it. As all virtual machines have the same database settings, all developers can use the same schema comparison to update or recreate their schemas.

Adding test data

If you make breaking changes that cannot be deployed to an existing database with data, developers need to create their databases again. Can you imagine how painful it is when the new database is empty and you have to insert all the data again just to make even elementary things work? The person who manages the database project will usually also manage the data that is deployed to the database after it is recreated.

1. Create new post-deployment script

Right-click on the database project, select Add…, then New Item… and select Post-Deployment Script.

Create new post-deployment script

Name it, for example, Script.PostDeployment.sql. I often use only one script and that's usually enough for me. Click Add and the new post-deployment script will be added to the database project. Here you can see the empty post-deployment script window:

Empty window of post-deployment script

In this script you must add all test data and keep it up to date, so that developers suffer only minimal delays when something happens to the database. Yes, it is additional work to keep the data up to date, but as a database developer you should know very well how to produce test data and how to get data from the database to the clipboard.

Publish profile

Besides the schema comparisons, you also need a database publish profile. A schema comparison doesn't help when a developer has to delete the database and create it again: yes, the schemas are compared, but the data doesn't appear in the new database automatically. Of course, it is possible to copy and paste the data from the post-deployment script, but why do so much manual work when we have tools that do it nicely for us?

Right-click on the database project and select Publish… You are asked for the target database connection string. Add a connection to your local development database and then click the Advanced… button.

Advanced publishing settings

Set the publishing options like I have here (this should work for most projects) and then click OK.

Now save the publishing settings to a file called PublishToLocal.publish.xml and make sure this file is included in the database project. When some developer blows up a database, it is now easy to recreate it.

Updating test and production databases

You also have test databases that are used in test environments where users and testers test the system, and you have production databases. It would be nice if all changes got there automatically, but life has shown that these systems mostly have to be handled manually. You must be sure that you don't break something or delete important data by mistake.

You can still use schema comparisons to compare the current schema to a test or production database, but I recommend using database accounts that have no permissions to modify data or database objects. Schema comparisons will show you what changes the target databases need, but you have to make these changes manually and not trust automation. Even if you have the publishing options set correctly, you cannot always be sure that automatic modifications are done the way you expected. Make one mistake and face the worst problems you have ever seen.

As long as you just compare two schemas to find the differences, and you use "read-only" accounts, you are in safer waters.

How it works in practice

So far, so good. My teams have saved a remarkable amount of time that would otherwise go into fixing mysterious database issues and setting up databases again.

  • All developers know how to use database projects, regardless of their actual role in the project.
  • Database and environment related issues are usually easy to track down because development environments are standardized, and unknown issues appear only when doing something "unusual".
  • A crashed or screwed-up database gets back to life in about 15 minutes at most.
  • Mismatches between database and code are quickly detected and fixed.

You need well-disciplined developers to use things like this, because people who don't really care about quality will screw the whole thing up by letting the schema project and post-deployment scripts go out of date. For good developers, database projects save time and help keep work quality high.

Conclusion

Database projects in Visual Studio are powerful tools. This posting gave you an overview of how to use database projects when developing systems with databases of regular size and structure. For more complex and advanced scenarios the rules change, and problems like large amounts of data or the impossibility of using local databases may need workarounds. But for regular development you can easily use database projects as described here.

My experiences with Knockout

It has been quiet here for some months, as I was very busy with some critical projects and also getting used to being a father to one wonderful baby. Now it's time to get back to community stuff, and this time I will dig into a fancy JavaScript library called Knockout. This was one of the interesting journeys I have had together with fellow ASP.NET MVP Hajan Selmani.

How I got to Knockout

Knockout was the decision made in one project where the UI makes heavy use of AJAX. Although there were other ways to go, we inherited some technical decisions that were not adequate and that left us little room.

As we had the business logic implemented at database level, we built a WCF service so the UI can communicate with the database, and so we can also solve some issues in server-side code without affecting the UI with every change we make.

Layers in our system

This is the big picture of how communication between the layers was organized. We had to find something to make communication between the UI and the server as painless as possible, and the choice was Knockout. Actually, Hajan suggested it in the first place, and it took me a little time to warm up to the idea, but now I like it, of course.
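To give an idea of what sits behind such a service, here is roughly what the WCF service contract could look like. This is a sketch of my own: the GetProductsForUser operation name comes from the client code shown later in this posting, while the Product type and the attribute values are assumptions.

using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Web;

// Hypothetical contract for the ProductService.svc the UI code talks to.
[ServiceContract]
public interface IProductService
{
    [OperationContract]
    [WebInvoke(Method = "POST",
               RequestFormat = WebMessageFormat.Json,
               ResponseFormat = WebMessageFormat.Json,
               // Wrapped style returns JSON like { "GetProductsForUserResult": [...] },
               // the same pattern as the LoadProductsResult access in the
               // view model code shown later.
               BodyStyle = WebMessageBodyStyle.Wrapped)]
    IList<Product> GetProductsForUser(int categoryId);
}

// Hypothetical DTO; ProductName and Categories appear in the markup and
// view model shown in this posting.
public class Product
{
    public string ProductName { get; set; }
    public IList<string> Categories { get; set; }
}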

What is Knockout?

Knockout is a JavaScript library that lets you implement declarative model binding and use observable variables and collections. It is like a client-side implementation of MVVM, with JavaScript as the engine it runs on. I hope this explanation is simple enough to get the idea.

Coding with Knockout is as follows:

  • write code to get data from server,
  • write templates for repeated data,
  • add data binding attributes to HTML elements,
  • bind data to page.

Here is a fragment of a page that uses Knockout to show data. The data-bind attribute is the one that Knockout checks; in this attribute you tell Knockout everything it has to do with the element. In this example you can see the foreach, click, text and clickBubble bindings.


<ul data-bind="foreach: $root.Products()">
    <li class="ui-state-default"
        data-bind="click: function(data){$root.SetCurrentProduct(data)}">
        <span data-bind="text: ProductName"></span>
        <div>
            <span class="moveUp"
                  data-bind="click: function(data){MoveProductUp(data)},
                             clickBubble: false"></span>
            <span class="moveDown"
                  data-bind="click: function(data){MoveProductDown(data)},
                             clickBubble: false"></span>
        </div>
    </li>
</ul>

The attribute syntax may feel weird at first, but it's still better to have a single HTML attribute to avoid possible clashes with other libraries.

Getting data from server

If you are building a rich user interface, you probably make enough data calls to the server that you need some service classes.


function ProductService() {
}

ProductService.prototype.GetProductsForUser = function (categoryId, callback) {
    $.ajax({
        type: "POST",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        url: "ProductService.svc/GetProductsForUser",
        // The request body must be a JSON string, so the parameter
        // object is stringified before sending.
        data: JSON.stringify({ categoryId: categoryId }),
        success: function (response) {
            if (typeof callback === "function") callback(response);
        },
        error: function () {
            if (typeof callback === "function") callback("error");
        }
    });
};

Classes like these help us divide service calls into dedicated service classes on the UI side, so our code remains organized and clean.

View models

Next we need view models for our pages. View models provide data and operations to the HTML page, and the bindings are controlled by Knockout. If we are using collections that users may change through the UI, we can use observable collections that automatically reflect changes back to the HTML DOM through the bindings.

Using view models like this is optional – you can organize your code the way you like. As we had some view-specific logic, we started using view models, so we have a view model for each HTML page and can carry on with the object-oriented code we usually love most.


function ProductViewModel() {
    var self = this;
    self.Products = ko.observableArray();
}

ProductViewModel.prototype.Initialize = function () {
    ko.applyBindings(this, $('#ProductForm')[0]);
};

ProductViewModel.prototype.LoadProducts = function () {
    // Empty the observable array before filling it again.
    this.Products.splice(0, this.Products().length);

    // productService and productModel are global instances created elsewhere.
    productService.LoadProducts(null, function (data) {
        if (data == 'error') {
            console.error('error loading products');
            return;
        }

        for (var i = 0; i < data.LoadProductsResult.length; i++) {
            var product = data.LoadProductsResult[i];
            product.Categories = ko.observable(product.Categories);
            productModel.Products.push(product);
        }
    });
};

When the view model is initialized, Knockout binds it to the products form element. After that, products are loaded from the server and the products collection is filled. As the products collection is an array observed by Knockout, the product list on screen is filled automatically. We can use templates for single products and this way keep our forms simpler to read.

What were main wins using Knockout?

As I am no big JavaScript guru and most of our forms were not very simple, my biggest wins were these:

  • no need for long, long code to control the UI,
  • no need for long code to bind data to form elements,
  • the HTML got way cleaner and shorter with Knockout,
  • Knockout also worked in more complex situations where more than one view model per page was needed.

Practically speaking, Knockout solved almost all the nightmares I was afraid of.

How to get started?

I got started using the Knockout home page. They have a pretty good tutorial and also good examples.

I started from these pages and got on my feet with Knockout quickly.

Conclusion

Knockout is easy to learn, although it takes some time to discover all its powerful features and to find answers when working with more complex scenarios. Although the materials on the Knockout site are very good, they don't explain everything, but using search engines I was able to solve all the problems I faced. I also had some problems when Knockout got confused, but mostly these were simple issues. After a week or two with Knockout, I found it to be an extremely valuable library even for more complex scenarios.
