If every future 0.1 update is going to be like this, I don’t dare imagine what a 1.0 update will look like. So far it has been a very pleasant experience. In the past, updates were more of a roller-coaster: you expected a lot, got some of it, and eventually found that a lot was still missing. For the first time since I’ve had a Windows Phone, an update contained more than I expected to see.

One thing that has been drastically improved is the start screen. Below is the transformation I went through. Ironically, I now have more screen real estate than items to put on it :)

[Screenshots: Before, After, Background image, Polishing]

A few observations so far:

- App Folders is nice, but with so much real estate it is more of a grouping for convenience than an actual need

- The notification centre helps cut down on time wasted navigating around

- Cortana integration with phone/web is helpful – I found dictating reminders faster than typing those pesky details (remarkably, “she” gets it even with my horrible accent)

- Quiet Hours is really helpful. It forces me to leave my email/SMS/Twitter aside after a specified hour, and there’s no more buzzing during the night

- Volume control per function (media, Bluetooth, headphones, etc.) – I know quite a few people who hated Windows Phone because it lacked this feature. Well, too bad for them, it’s here now

- Swipe typing – I’m still not good at it, but it now allows me to type in Russian without knowing the freaking grammar. I’m a champion now. Too bad Hebrew is not supported; I could use some help with grammar there as well :)

There’s more. Watch Scott Hanselman’s video for the other features.

For those who were quick to bury it, here’s an update:


If you’re running into a bug like I did, there is good news and bad news. The good news – Microsoft acknowledged the bug and will fix it; the bad news – the fix will only arrive in the next release (not sure when). In Microsoft’s favour, I have to admit that they turn things around really quickly, so who knows, it might already be in the release notes of the next release :)

The idea behind this is extremely simple: run a deployment machine in Azure that is used to deploy updates to production. In my environment everything gets packaged on the build server, but deployment has to happen in a controlled environment and from a manual kick-off. Deployments are not performed at night, so compute time is wasted. To save at least 50% of compute time, the VM has to be shut down during those hours.

The plan: take a single-instance VM down during the night and bring it up during work hours. My initial thought was to use the Azure Cloud Services auto-scale feature.

Pros:

- Defined under the Cloud Service the VM instance belongs to

- Scaling is a property of the Cloud Service

Cons:

- Requires an Availability Set to be created and associated with the VM instance

- Requires at least 2 VM instances to run Auto-Scaling (deal breaker!)

 

Fortunately, with the new release of Azure Automation, this can be done with runbooks (a runbook is a workflow that contains a PowerShell script and can also call child runbooks).
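As a minimal illustration of the child-runbook idea (the parent workflow name is hypothetical; the child is the Start runbook discussed below), a published child runbook can be invoked inline from a parent workflow:

    workflow Manage-DeploymentVM
    {
        # A published child runbook can be called inline, just like a regular activity
        # (runbook name and parameter values below are illustrative only)
        Start-AzureVMsOnSchedule -ServiceName "deploy-svc" -VMName "deploy-vm"
    }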

Pros:

- Doesn’t require multiple VM instances (hence saving money)

- Runs under the context of the subscription, and therefore has access to all its resources

Cons:

- Scheduling is not as flexible as with Auto-Scaling and needs to be associated with a Runbook

 

What I ended up doing was quick and dirty, but it does the job for now.

1. Created an Automation account (Virtual-Machines)


2. Imported two runbooks into Automation (Start-AzureVMsOnSchedule and Stop-AzureVMsOnSchedule)


Note that “import” is as simple as importing a PowerShell script wrapped in workflow name_of_workflow { #powershell script } – roughly as sketched below.
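As a hedged example of what such a Start runbook could look like (the credential asset, subscription name, and parameter values are placeholders I made up; as the update at the end of this post notes, the Azure cmdlets may also need to be wrapped in InlineScript):

    workflow Start-AzureVMsOnSchedule
    {
        param
        (
            [Parameter(Mandatory = $true)][string] $ServiceName,
            [Parameter(Mandatory = $true)][string] $VMName
        )

        # Pull a credential asset stored in the Automation account (asset name is hypothetical)
        $cred = Get-AutomationPSCredential -Name "AzureDeployCredential"

        # Authenticate and pick the subscription (subscription name is a placeholder)
        Add-AzureAccount -Credential $cred
        Select-AzureSubscription -SubscriptionName "MySubscription"

        # Start the deployment VM
        Start-AzureVM -ServiceName $ServiceName -Name $VMName
    }

The Stop runbook would presumably be the mirror image, calling Stop-AzureVM (likely with -Force so stopping the last VM in the cloud service doesn’t prompt for confirmation).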

3. Published the imported runbooks


4. Associated a schedule with each runbook


5. Specified for each schedule to execute at a certain time daily (the Start runbook at 7AM, the Stop runbook at 7PM). By the way, parameters can be passed to individual runbooks, so the job (an executed runbook) becomes a parameterized job. Also, assets (settings) can be shared across all runbooks. A rough PowerShell equivalent of this step is sketched below.
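The same step can presumably be scripted as well; a rough sketch, assuming the Azure Automation cmdlets from the classic Azure PowerShell module (cmdlet and parameter names may differ between module versions; schedule name and parameter values are illustrative):

    # Create a daily schedule starting tomorrow at 7AM
    New-AzureAutomationSchedule -AutomationAccountName "Virtual-Machines" `
        -Name "StartAt7AM" -StartTime ([DateTime]::Today.AddDays(1).AddHours(7)) -DayInterval 1

    # Link the schedule to the runbook, passing parameters so the job is parameterized
    Register-AzureAutomationScheduledRunbook -AutomationAccountName "Virtual-Machines" `
        -Name "Start-AzureVMsOnSchedule" -ScheduleName "StartAt7AM" `
        -Parameters @{ ServiceName = "deploy-svc"; VMName = "deploy-vm" }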


6. Once a job (a scheduled runbook) has executed, it is logged (you can drill into the details of each command – I can see Bruce’s eyes light up)


7. The job executed (see the spike at 7AM) and the VM is up and running. Big success *Borat accent* :)


Note that when you edit the draft of your runbook, you can run (test) it before publishing. Also, using the command toolbar at the bottom, you can import existing modules (the Azure module is imported by default), add settings that can be shared by multiple runbooks, and insert an activity (PowerShell command), runbook, or setting.


Azure Automation is a great feature to leverage. I’m excited to see all of these things shaping up, making work easier and cutting costs at the same time.

Update 1: I’ve noticed that while the VM was started and stopped, the scripts didn’t execute cleanly. To solve that, I had to wrap the commands in an InlineScript { #start/stop-AzureVM … } construct, roughly as follows:
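A minimal sketch of that change (same placeholder parameter names as above; note the $Using: prefix needed to reference workflow variables inside InlineScript):

    workflow Start-AzureVMsOnSchedule
    {
        param([string]$ServiceName, [string]$VMName)

        # InlineScript runs its body as a regular PowerShell script block
        # instead of as individual workflow activities, which keeps the Azure cmdlets happy
        InlineScript
        {
            Start-AzureVM -ServiceName $Using:ServiceName -Name $Using:VMName
        }
    }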

A year ago I started looking at and evaluating cloud options. AWS and Azure were the two candidates. I decided to go with Azure for a few reasons that are still valid today (even more so than a year ago):

  1. As an MSDN subscriber, I could leverage MSDN credits that are sufficient to learn
  2. PaaS on Azure was very appealing, while on AWS it seemed a little foreign and not as friendly as AWS’s IaaS offering (a year ago Azure’s IaaS was very poor, so for infrastructure-only purposes I probably wouldn’t have chosen Azure back then)
  3. Simplicity, or at least the perception of it that I had

As an MSDN subscriber you get a LOT. You get credits on Azure that you can use towards anything you want (PaaS, IaaS, developer services, etc.). You can’t run production on an MSDN subscription, but you can learn. And learning is important. I would strongly recommend going beyond the MSDN subscription discount and getting a Pay-as-you-Go subscription to test things out. You can read documentation and play with the cost calculator, but nothing, I repeat NOTHING, will replace actual usage: real people hitting your service, storage and CDN transactions happening, running your minimum viable product and scaling out as needed, leveraging developer services. If you try to shave off that cost, you’ll never fully learn. After all, without errors there are no successes. If you try to reduce the risk to nothing, you’d better not get on the cloud at all.

PaaS or IaaS? Or both? This is a question only you can answer. My answer is: it depends. Certain things will require IaaS, other things will require PaaS. And PaaS on Azure has changed over time. It used to be just Cloud Services with Web and Worker roles and a very complex deployment process. But now we also have Azure Web Sites (ironically, AWS) that have simplified deployment and introduced some great options such as continuous deployment from a code repository, IIS always-on, web sockets, web jobs, etc. Today, one can build a globally scalable web application without resorting to complex Cloud Services (thanks to Traffic Manager, which made AWS a first-class citizen). For scenarios where you’d like pure VMs, you can leverage Azure IaaS, which has become even richer with the recent announcements at Build 2014. Automate it, schedule it, scale it, do anything you want.

Azure is simple. The interface is simple. The PowerShell cmdlets are easy. Will it stay simple for long? The new Azure portal is attempting to address the growing complexity by making it visually aesthetic and pleasant – something to validate in the future.
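A small, hedged taste of the classic Azure PowerShell cmdlets (subscription, site, and location names are illustrative):

    Add-AzureAccount                                              # sign in interactively
    Select-AzureSubscription -SubscriptionName "MSDN"             # choose which subscription to work with
    New-AzureWebsite -Name "my-sample-site" -Location "East US"   # spin up a new Azure Web Site
    Get-AzureVM                                                   # list the IaaS VMs in the subscription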

So did this investment pay off? Yes. On several fronts:

  1. Application hosting cost – now we know what it costs us to run web application X
  2. Infrastructure / hosting cost reduction – no argument here, costs went significantly down
  3. Scaling – we can scale out with ease (once a proper architecture is implemented – do not dream of taking your application “as-is” and running it in the cloud)
  4. Less dependency on IT – IT now can concentrate on more important things than spinning up VMs or monitoring response time
  5. Automation – this has been addressed so many times, and yet I’ll say it again: with Azure, automation is so easy that it’s a sin not to take advantage of it. And once you’ve automated a process, you’ve documented it and ensured that others can understand “the magic”.

We’ve implemented PaaS with Cloud Services, plain AWS (Azure Web Sites), and IaaS, and recently completed a spike that involved all of the above with an alternative approach for Sitecore CMS (which at its core is not so cloud friendly). A preview is available at http://ta-mcit.azurewebsites.net/ and hopefully the launch will come soon. When that happens, I’ll brag a little more about how we did it.

Early testing is amazing. I am not talking about TDD and developers testing their own creation. I am talking about testing performed by professional QAs with a mindset to hack the heck out of your system (code, server, deployment, you name it). The value of early vs. late testing in the SDLC is very easy to show to those who deal with software on a daily basis and live it every day. But how do you translate it to the business? Visualize it. One way is high level – show one approach vs. another and start asking questions.


For the current project my team is working on, we’ve decided to start testing early, while the code is in its diapers. Not only to test it, but to get the Sentinel service from WhiteHat, which provides on-going testing while we develop the product. I won’t outline all the benefits of the system, but I’ll mention what we got out of it as a business.

  1. Finding serious deficiencies in our software
  2. Fixing software flaws that would otherwise become security issues and bugs
  3. Saving time on design changes while it is still easy to accommodate them
  4. Removing the “surprise” component from our software

#4 is last but not least from a business point of view. Having confidence in your product is extremely important. We owe it to our stakeholders and our customers. Label it agile or common sense – it is great to do the right thing and see the results.

We are ramping up on development of a new version of an existing system that will utilize NServiceBus for communication between its various parts. Learning NServiceBus is a great resource to get going, especially if you need to go from 0 to 100 in a short time. The book falls a little short on testing IMO, but it gives you enough to move in the right direction. In case you are planning to deviate from the standard transport (MSMQ), you won’t find a lot of help in this book. Though frankly, the outstanding NServiceBus team and the amazing community behind it will answer any of your questions, if they haven’t been answered already. Since NServiceBus is now under Particular, this is the new user group you want to send your questions to.

 

In future posts, I will talk a little more about how we are designing our new system and how we are going to use NServiceBus with the Azure transport, which opens up so many opportunities and fascinating architecture options.

What is a tag?

In the marketing world a tag (or pixel) is used for tracking purposes. Historically it was based on a call to retrieve a 1x1 transparent image. The rationale behind this was to capture client-side information about a site visitor on a 3rd-party server. The information would include standard browser details, including cookies. There’s a lot that can be done with this information from an analytical and marketing point of view.

Why is it so Messy?

Tracking and conversion tags (pixels) were supposed to be “add-ons” that any non-technical webmaster (and later “business user”) could drop into the markup and be done with. After a little while, information flows into the tag vendor’s server and reports become available. But it is not as simple as it sounds when you need to work with multiple tag vendors. Imagine the following scenarios.

Scenario 1:

We need to know how many unique visits we had to a page X.

Scenario 2:

We need to count how many times visitors clicked a button Y.

For scenario #1, traditionally it is achieved by adding a tag to the HTML. Something like

   <img src="http://vendor/tag.jpg?client=id&page=code" />

For scenario #2, again, traditionally it is as “simple” as embedding code into the onclick event

   <button onclick="vendorTagFunction(params)">Y</button>

It’s almost good, except for the mixed concerns and the need to constantly change. In the marketing world, tags come and go. It happens frequently, and with multiple vendors. Therefore you end up with a few issues, doubled (when a tag is added and then removed) and multiplied by the number of vendors:

  1. Constant need to modify mark-up
  2. Constant need to modify client side code (JavaScript handlers)
  3. Constant need to deploy changes
  4. Mixing of concerns (marketing vs. development)

What’s a Solution?

Separation of concerns. The markup and client-side code don’t need the tags; developers and designers shouldn’t be concerned with them. Marketers should (well, ideally at least). To achieve that, tags should be placed and managed separately from markup and code. This is where tag management tools come in handy.

The tool I have tried so far is Google Tag Manager (or just GTM), and it works great for this kind of thing.

How Does Tag Management Help?

These are a few things that GTM does for you:

  1. Takes tag code and markup out of your markup and code, making them clean and lean
  2. Injects tags dynamically based on rule execution
  3. Allows rules and tags to be managed outside of your main solution
  4. Versioning by marketers – a very strong feature
  5. Publishing* of a specific version
  6. Preview and debugging to ensure things work before they get published
  7. Ability to add/remove tags without re-deploying the main site
  8. and more…

* Publishing that happens within GTM, with no connection to your main markup/code publishing

How Simple is it?

Simple. There’s really not that much to it, but once you utilize the power, you’ll never go back to embedding tags in markup/code again.

Another benefit is integration. If you use Google Analytics, you can easily integrate it as well (another cross-cutting concern removed from your markup).

Are there Alternatives?

Plenty. Google is not the pioneer in this area, and the tool is far from perfect. Lots of other companies have offerings that are good and viable solutions. We found GTM to be simple, clean, and cost effective (free for now) to address our requirements.

RavenDB is amazing. You don’t have to work with it for a long time to get that. What’s even more amazing is its extensibility and testability. This post is about those two.

In my recent work I needed document versioning with very specific requirements that don’t match RavenDB’s built-in versioning bundle. The default versioning bundle generates revisions of all documents upon any change to a document. In my scenario, I needed only one revision at any given time, and a revision should be generated only for documents that have a Status field whose value is changing to “Published”. Very specific to the business requirement. After poking around, reading documentation, and bugging people on the user group, I learned a few things about testing a custom bundle/plugin RavenDB style.

Testing

If you are doing unit testing, RavenDB.Tests.Helpers is your friend. Once the NuGet package is installed, your tests can inherit from the RavenTestBase class, which wires up a new embedded document store for you, optimized for testing, and allows the additional modifications needed for your testing scenario(s). For bundle/plugin testing, I needed to register all of my triggers (optionally, you could register them one at a time, or all of the triggers found in an assembly) in Raven’s configuration. The base class exposes ModifyConfiguration for that purpose. In addition to that, RavenDB needs to be told that we are activating our bundle. Logging was more for me to see what happens with RavenDB while the test is running.


Custom Triggers

One thing that I haven’t seen in the documentation, but got help with at the user group, was the attributes needed for each custom trigger. InheritedExport and ExportMetadata are both needed. The bundle name is what gets registered with Raven’s configuration.


Enabling Bundle in RavenDB

In order to get the custom bundle to work, it has to be copied into the Plugins folder under the RavenDB location, and the database settings have to be updated to let Raven know we want the bundle to be activated.
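For illustration, a hedged sketch of that deployment step in PowerShell (paths and the assembly name are placeholders I made up):

    # Copy the compiled bundle into RavenDB's Plugins folder
    Copy-Item .\MyCompany.RavenBundles.dll "C:\RavenDB\Plugins"

    # Then update the database document (e.g. via the Studio) so that its
    # Raven/ActiveBundles setting includes the custom bundle name declared
    # in the ExportMetadata attribute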


Bundle in Action


I ran into an interesting message when I opened a user group site in Visual Studio (not something that I usually do), and it made me wonder:

1. What version of IE is Visual Studio 2012 Update 2 CTP using, if not the one found on the machine (IE9 on my Windows 7)?

2. Whether Google is playing dirty… this message would only show up in IE version 8 and lower.

