Gunnar Peipman's ASP.NET blog

ASP.NET, C#, SharePoint, SQL Server and general software development topics.


June 2012 - Posts

Sessions I plan to visit at TechEd Europe 2012

TechEd Europe 2012 takes place next week. As I'm going there, I have already taken a look at the sessions and made my own list of favorites. For every time slot I selected more than one session, so if one session is not what I expected I can always go to another.

Here is my list, grouped by date.

26.06

27.06

28.06

29.06

Hope to see you in Amsterdam next week! :)

Posted: Jun 20 2012, 08:42 PM by DigiMortal | with no comments
Consuming ASP.NET Web API services from PHP script

I have introduced ASP.NET Web API in some of my previous posts. Although Web API is easy to use in ASP.NET web applications, you can also consume Web API services from other platforms. This post shows you how to consume ASP.NET Web API from PHP scripts.

Here are my previous posts about Web API:

Although these posts cover content negotiation, they give you a good idea of how Web API works.

Test application

On the Web API side I use the same sample application as in my previous Web API posts – a very primitive web application for managing contacts.

ASP.NET Web API test app

Listing contacts

On another machine I run the following PHP script that works against my Web API application:


<?php
// request the list of contacts from Web API
$json = file_get_contents('http://vs2010dev:3613/api/contacts/');

// deserialize data from JSON
$contacts = json_decode($json);
?>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
</head>
<body>
    <table>
    <?php foreach ($contacts as $contact) { ?>
        <tr>
            <td valign="top">
                <?php echo $contact->FirstName ?>
            </td>
            <td valign="top">
                <?php echo $contact->LastName ?>
            </td>
            <td valign="middle">
                <form method="POST">
                    <input type="hidden" name="id"
                           value="<?php echo $contact->Id ?>" />
                    <input type="submit" name="cmd"
                           value="Delete" />
                </form>
            </td>
        </tr>
    <?php } ?>
    </table>
</body>
</html>


Notice how easy it is to handle JSON data in PHP! My PHP script produces the following output:

PHP test application

Looks like the data is there, as it should be.

Deleting contacts

Now let's write the code to delete contacts. Add this block of code before any other code in the PHP script.


if (@$_POST['cmd'] == 'Delete')
{
    $id = @$_POST['id'];

    // send an HTTP DELETE request to Web API
    $params = array('http' => array(
                  'method'  => 'DELETE',
                  'content' => ''
              ));

    $url = 'http://vs2010dev:3613/api/contacts/'.$id;
    $ctx = stream_context_create($params);
    $fp = fopen($url, 'rb', false, $ctx);

    if (!$fp) {
        $res = false;
    } else {
        $res = stream_get_contents($fp);
        fclose($fp); // close the handle only when fopen succeeded
    }

    header('Location: /json.php');
    exit;
}


Again, simple code. If we also write insert and update methods, we may want to bundle these operations into a single class.

Conclusion

ASP.NET Web API is not only ASP.NET fun – it is also available to all other platforms. In this posting we wrote a simple PHP client that is able to communicate with our Web API application. We wrote only some simple code, nothing complex. In the same way we can use platforms like Java, Perl and Ruby.

Posted: Jun 20 2012, 12:16 AM by DigiMortal | with no comments
ASP.NET MVC–How to show asterisk after required field label

Usually we have some required fields on our forms, and it would be nice if ASP.NET MVC views could detect those fields automatically and display a nice red asterisk after the field label. As this functionality is not built in, I created my own solution based on data annotations. In this posting I will show you how to show a red asterisk after the label of required fields.

Here are the main information sources I used when working out my own solution:

Although my code was first written for a completely different situation, I needed it later and modified it to work with models that use data annotations. If a data member of the model has the Required attribute set, an asterisk is rendered after the field label. If the Required attribute is missing, there will be no asterisk.
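For illustration, here is a hypothetical view model (not the one from the original application) where only Name carries the Required attribute, so only its label would get the asterisk:

```csharp
using System.ComponentModel.DataAnnotations;

public class ContactModel
{
    [Required]                        // LabelForRequired renders an asterisk for this one
    public string Name { get; set; }

    public string Phone { get; set; } // no Required attribute - plain label, no asterisk
}
```

The helper detects this the same way the code below does: by reflecting over the container type and checking for RequiredAttribute on the property.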

Here's my code. You can take just the LabelForRequired() methods and paste them into your own HTML extensions class.


public static class HtmlExtensions
{
    [SuppressMessage("Microsoft.Design", "CA1006:DoNotNestGenericTypesInMemberSignatures", Justification = "This is an appropriate nesting of generic types")]
    public static MvcHtmlString LabelForRequired<TModel, TValue>(this HtmlHelper<TModel> html, Expression<Func<TModel, TValue>> expression, string labelText = "")
    {
        return LabelHelper(html,
            ModelMetadata.FromLambdaExpression(expression, html.ViewData),
            ExpressionHelper.GetExpressionText(expression), labelText);
    }

    private static MvcHtmlString LabelHelper(HtmlHelper html,
        ModelMetadata metadata, string htmlFieldName, string labelText)
    {
        if (string.IsNullOrEmpty(labelText))
        {
            labelText = metadata.DisplayName ?? metadata.PropertyName ?? htmlFieldName.Split('.').Last();
        }

        if (string.IsNullOrEmpty(labelText))
        {
            return MvcHtmlString.Empty;
        }

        bool isRequired = false;

        if (metadata.ContainerType != null)
        {
            isRequired = metadata.ContainerType.GetProperty(metadata.PropertyName)
                            .GetCustomAttributes(typeof(RequiredAttribute), false)
                            .Length == 1;
        }

        TagBuilder tag = new TagBuilder("label");
        tag.Attributes.Add(
            "for",
            TagBuilder.CreateSanitizedId(
                html.ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldName(htmlFieldName)
            )
        );

        if (isRequired)
            tag.Attributes.Add("class", "label-required");

        tag.SetInnerText(labelText);

        var output = tag.ToString(TagRenderMode.Normal);

        if (isRequired)
        {
            var asteriskTag = new TagBuilder("span");
            asteriskTag.Attributes.Add("class", "required");
            asteriskTag.SetInnerText("*");
            output += asteriskTag.ToString(TagRenderMode.Normal);
        }

        return MvcHtmlString.Create(output);
    }
}


And here’s how to use LabelForRequired extension method in your view:


<div class="field">
    @Html.LabelForRequired(m => m.Name)
    @Html.TextBoxFor(m => m.Name)
    @Html.ValidationMessageFor(m => m.Name)
</div>

After playing with the CSS class called .required, my example form looks like this:

LabelForRequired in action

These red asterisks are not part of the original view mark-up. The LabelForRequired method detected that these properties have the Required attribute set and rendered asterisks after the field labels.

NB! By default the asterisks are not red. You have to define a CSS class called "required" to control how the asterisk looks and how it is positioned.
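Such a class can be as simple as the following (my own example styling, not taken from the original post):

```css
.required
{
    color: red;
    font-weight: bold;
    margin-left: 2px;
}
```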

Posted: Jun 17 2012, 08:42 PM by DigiMortal | with 6 comment(s)
Using TPL and PLINQ to raise performance of feed aggregator

In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost the performance of a simple RSS-feed aggregator. I will use only very basic .NET classes that almost every developer starts with when learning parallel programming. Of course, we will also measure how each optimization affects the performance of the feed aggregator.

Feed aggregator

Our feed aggregator works as follows:

  1. Load list of blogs
  2. Download RSS-feed
  3. Parse feed XML
  4. Add new posts to database

Our feed aggregator is run by a task scheduler, for example every 15 minutes.

We will start our journey with the serial implementation of the feed aggregator. The second step is to use task parallelism to parallelize feed downloading and parsing. Our last step is to use data parallelism to parallelize the database operations.

We will use the Stopwatch class to measure how much time it takes for the aggregator to download and insert all posts from all registered blogs. After every run we empty the posts table in the database.
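The measurement harness itself is trivial – a sketch (the commented line stands in for the aggregator run being measured):

```csharp
using System;
using System.Diagnostics;

var stopwatch = Stopwatch.StartNew();

// the aggregator run being measured would go here,
// e.g. new FeedClient().Execute()

stopwatch.Stop();
Console.WriteLine("Aggregation took {0:0.00} seconds",
                  stopwatch.Elapsed.TotalSeconds);
```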

Serial aggregation

Before doing any parallel stuff, let's take a look at the serial implementation of the feed aggregator. All tasks happen one after another.


internal class FeedClient
{
    private readonly INewsService _newsService;
    private const int FeedItemContentMaxLength = 255;

    public FeedClient()
    {
        ObjectFactory.Initialize(container =>
        {
            container.PullConfigurationFromAppConfig = true;
        });

        _newsService = ObjectFactory.GetInstance<INewsService>();
    }

    public void Execute()
    {
        var blogs = _newsService.ListPublishedBlogs();

        for (var index = 0; index < blogs.Count; index++)
        {
            ImportFeed(blogs[index]);
        }
    }

    private void ImportFeed(BlogDto blog)
    {
        if (blog == null)
            return;
        if (string.IsNullOrEmpty(blog.RssUrl))
            return;

        var uri = new Uri(blog.RssUrl);
        SyndicationContentFormat feedFormat;

        feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

        if (feedFormat == SyndicationContentFormat.Rss)
            ImportRssFeed(blog);
        if (feedFormat == SyndicationContentFormat.Atom)
            ImportAtomFeed(blog);
    }

    private void ImportRssFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = RssFeed.Create(uri);

        foreach (var item in feed.Channel.Items)
        {
            SaveRssFeedItem(item, blog.Id, blog.CreatedById);
        }
    }

    private void ImportAtomFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = AtomFeed.Create(uri);

        foreach (var item in feed.Entries)
        {
            SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
        }
    }
}


The serial implementation of the feed aggregator downloads and inserts all posts in 25.46 seconds.

Task parallelism

Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from the MSDN page Task Parallelism (Task Parallel Library) and the Wikipedia page Task parallelism. Although finding the parts of code that can safely run in parallel without synchronization issues is not an easy task, we are lucky this time: feed import and parsing is a perfect candidate for parallel tasks.

We can safely parallelize the feed import because the importing tasks don't share any resources and therefore need no synchronization. After getting the list of blogs, we iterate through the collection and start a new TPL task for each blog feed.


internal class FeedClient
{
    private readonly INewsService _newsService;
    private const int FeedItemContentMaxLength = 255;

    public FeedClient()
    {
        ObjectFactory.Initialize(container =>
        {
            container.PullConfigurationFromAppConfig = true;
        });

        _newsService = ObjectFactory.GetInstance<INewsService>();
    }

    public void Execute()
    {
        var blogs = _newsService.ListPublishedBlogs();
        var tasks = new Task[blogs.Count];

        for (var index = 0; index < blogs.Count; index++)
        {
            tasks[index] = new Task(ImportFeed, blogs[index]);
            tasks[index].Start();
        }

        Task.WaitAll(tasks);
    }

    private void ImportFeed(object blogObject)
    {
        if (blogObject == null)
            return;
        var blog = (BlogDto)blogObject;
        if (string.IsNullOrEmpty(blog.RssUrl))
            return;

        var uri = new Uri(blog.RssUrl);
        SyndicationContentFormat feedFormat;

        feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

        if (feedFormat == SyndicationContentFormat.Rss)
            ImportRssFeed(blog);
        if (feedFormat == SyndicationContentFormat.Atom)
            ImportAtomFeed(blog);
    }

    private void ImportRssFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = RssFeed.Create(uri);

        foreach (var item in feed.Channel.Items)
        {
            SaveRssFeedItem(item, blog.Id, blog.CreatedById);
        }
    }

    private void ImportAtomFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = AtomFeed.Create(uri);

        foreach (var item in feed.Entries)
        {
            SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
        }
    }
}


You should already notice the power of TPL: we made only minor changes to our code to parallelize blog feed aggregation. On my machine this modification gives a clear performance boost – the time is now 17.57 seconds.
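The task-per-feed pattern used above can be reduced to a minimal self-contained form (with a console write standing in for the actual feed import):

```csharp
using System;
using System.Threading.Tasks;

var blogs = new[] { "blog1", "blog2", "blog3" };
var tasks = new Task[blogs.Length];

for (var index = 0; index < blogs.Length; index++)
{
    // each task receives its own state object,
    // just like ImportFeed receives its blog
    tasks[index] = new Task(state => Console.WriteLine("importing " + state),
                            blogs[index]);
    tasks[index].Start();
}

Task.WaitAll(tasks); // block until every import task has finished
```

Passing the blog as the task's state object is what lets ImportFeed keep its simple signature while each task works on its own item.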

Data parallelism

There is one more way to parallelize activities. The previous section introduced task-based parallelism; this section introduces data-based parallelism. According to the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to scenarios in which the same operation is performed concurrently on the elements of a source collection or array.

In our code we have independent collections we can process in parallel – the imported feed entries. As checking whether a feed entry exists and inserting it if it is missing doesn't affect the other entries, the imported feed entries collection is an ideal candidate for parallelization.


internal class FeedClient
{
    private readonly INewsService _newsService;
    private const int FeedItemContentMaxLength = 255;

    public FeedClient()
    {
        ObjectFactory.Initialize(container =>
        {
            container.PullConfigurationFromAppConfig = true;
        });

        _newsService = ObjectFactory.GetInstance<INewsService>();
    }

    public void Execute()
    {
        var blogs = _newsService.ListPublishedBlogs();
        var tasks = new Task[blogs.Count];

        for (var index = 0; index < blogs.Count; index++)
        {
            tasks[index] = new Task(ImportFeed, blogs[index]);
            tasks[index].Start();
        }

        Task.WaitAll(tasks);
    }

    private void ImportFeed(object blogObject)
    {
        if (blogObject == null)
            return;
        var blog = (BlogDto)blogObject;
        if (string.IsNullOrEmpty(blog.RssUrl))
            return;

        var uri = new Uri(blog.RssUrl);
        SyndicationContentFormat feedFormat;

        feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

        if (feedFormat == SyndicationContentFormat.Rss)
            ImportRssFeed(blog);
        if (feedFormat == SyndicationContentFormat.Atom)
            ImportAtomFeed(blog);
    }

    private void ImportRssFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = RssFeed.Create(uri);

        feed.Channel.Items.AsParallel().ForAll(a =>
        {
            SaveRssFeedItem(a, blog.Id, blog.CreatedById);
        });
    }

    private void ImportAtomFeed(BlogDto blog)
    {
        var uri = new Uri(blog.RssUrl);
        var feed = AtomFeed.Create(uri);

        feed.Entries.AsParallel().ForAll(a =>
        {
            SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
        });
    }
}


We made a small change again and as a result parallelized the checking and saving of feed items. This change was data-centric, as we applied the same operation to all elements of a collection. On my machine I got better performance again: the time is now 11.22 seconds.
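The heart of this change is PLINQ's ForAll, which applies a delegate to every element concurrently without merging results back into a single sequence. A minimal self-contained sketch (a thread-safe bag stands in for the database):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

var items = Enumerable.Range(1, 100);
var saved = new ConcurrentBag<int>(); // thread-safe sink standing in for the database

// the same "save" operation is applied to every element, concurrently
items.AsParallel().ForAll(item => saved.Add(item));

Console.WriteLine(saved.Count); // 100
```

Note that the sink must be thread-safe – the delegate runs on several worker threads at once, which is exactly why SaveRssFeedItem and SaveAtomFeedEntry must not share unsynchronized state.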

Results

Let’s visualize our measurement results (numbers are given in seconds).

Feed aggregation results

As we can see, with task parallelism feed aggregation takes about 30% less time than in the original case (25.46 vs. 17.57 seconds). When we add data parallelism on top of task parallelism, aggregation takes about 2.3 times less time than in the original case (25.46 / 11.22 ≈ 2.3).

More about TPL and PLINQ

Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code that can safely run in parallel, and even then you have to measure the effects of parallel processing to find out whether the parallel code actually performs better. If you are not careful, the troubles you face later are worse than any you have seen before (imagine an error that occurs on average only once per 10,000 runs).

Parallel programming is something that is hard to ignore: effective programs must be able to use the multiple cores of modern processors. Using TPL you can also set the degree of parallelism, so your application doesn't use all computing cores and leaves one or more of them free for the host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming.
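With PLINQ, for example, limiting the degree of parallelism is a single extra call – a self-contained sketch, not taken from the aggregator code:

```csharp
using System;
using System.Linq;

// limit PLINQ to two worker threads, leaving the remaining
// cores free for the host system and other processes
var sum = Enumerable.Range(1, 1000)
                    .AsParallel()
                    .WithDegreeOfParallelism(2)
                    .Sum();

Console.WriteLine(sum); // 500500
```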

In the next major version, .NET languages will have built-in support for parallel programming, including new language constructs that support it. Currently you can download the Visual Studio Async CTP to get some idea of what is coming.

Conclusion

Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items serially. In two steps we parallelized feed importing and entry inserting, gaining a 2.3x improvement in performance. Although this number is specific to my test environment, it clearly shows that parallel programming can raise the performance of your application significantly.
