WPF with Web API and SignalR – Why and How I Use It

Preface: This blog post is less about going deep into the technical aspects of ASP.NET Core Web API and SignalR. Instead, it's a beginner's story about why and how I started using ASP.NET Core. Maybe it contains bits of inspiration here and there for others in a similar situation.

The why

Years ago, I wrote a WPF application that manages media files, organizes them in playlists and displays the media on a second screen. That is a bit of an understatement, because the "second screen" is in fact mapped onto the LED modules of a perimeter advertising system in a basketball arena for 6,000 people.

Unfortunately, video playback in WPF has both performance and reliability problems, and a rewrite in UWP didn’t go as planned (as mentioned in previous blog posts). Experiments with HTML video playback, on the other hand, went really well.

At this point, I decided against simply replacing the display part of the old application with a hosted browser control. Because my plans for the future (e.g. synchronizing the advertising system with other LED screens in the arena) already involved networking, I planned the new software to have three separate parts right from the beginning:

  1. A server process written in C#/ASP.NET Core for managing files and playlists. For quick results, I chose to do this as a command line program using Kestrel (with the option to move to a Windows service later).
  2. A (non-interactive) media playback display written in TypeScript/HTML, without any framework. By using a browser in kiosk mode, I didn’t have to write an actual “display application” – but I still have the option to do this at a later time.
  3. A C#/WPF application for creating and editing playlists, starting and stopping media playback, which I call the cockpit. I chose WPF because I had a lot of existing code and custom controls from the old application that I could reuse.

The how: Communication between server, cockpit and display

For communication, I use Web API and SignalR.

Not having much prior experience, I started with the "Use ASP.NET Core SignalR with TypeScript and Webpack" sample and read the accompanying documentation. Then I added app.UseMvc() in Startup.Configure and services.AddMvc() in Startup.ConfigureServices to enable Web API. I mention this detail because it was a positive surprise for me: when learning new technologies, I have sometimes been in situations where I had created an example project A and then struggled to incorporate parts of a separate example project B.
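To illustrate (in the ASP.NET Core 2.x style I used at the time), the combined Startup looked roughly like this; the MediaHub class and the "/hub" route are placeholder names for this post, not necessarily the actual ones:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

// Placeholder hub so the sketch is self-contained
public class MediaHub : Hub { }

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR();
        services.AddMvc();   // enables Web API controllers
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseDefaultFiles();
        app.UseStaticFiles();

        // SignalR hub, as in the TypeScript/Webpack sample
        app.UseSignalR(routes => routes.MapHub<MediaHub>("/hub"));

        // Attribute-routed Web API controllers
        app.UseMvc();
    }
}
```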

For quick tests of the Web API controllers, Postman turned out to be a valuable tool.

Before working on "the real thing", I read up on best practices on the web and tried to follow them to the best of my knowledge.

Web API

I use Web API to

  • create, read, update or delete (“CRUD”) lists of “things”
  • create, read, update or delete a single “thing”

In my case, the "things" are both metadata (playlists, information about a media file, etc.) and the actual media files.
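To give an idea of what that looks like, here is a stripped-down sketch of such a controller; Playlist and IPlaylistRepository are hypothetical types for this post, not the actual classes of my project:

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Hypothetical types for this example
public class Playlist
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public interface IPlaylistRepository
{
    IEnumerable<Playlist> GetAll();
    Playlist Find(string id);
    void Add(Playlist playlist);
    void Remove(string id);
}

[Route("api/playlists")]
public class PlaylistsController : Controller
{
    private readonly IPlaylistRepository _repository;

    public PlaylistsController(IPlaylistRepository repository)
    {
        _repository = repository;
    }

    // Read a list of "things"
    [HttpGet]
    public IEnumerable<Playlist> GetAll() => _repository.GetAll();

    // Read a single "thing"
    [HttpGet("{id}")]
    public IActionResult GetById(string id)
    {
        var playlist = _repository.Find(id);
        if (playlist == null)
            return NotFound();
        return Ok(playlist);
    }

    // Create a "thing"
    [HttpPost]
    public IActionResult Create([FromBody] Playlist playlist)
    {
        _repository.Add(playlist);
        return CreatedAtAction(nameof(GetById), new { id = playlist.Id }, playlist);
    }

    // Delete a "thing"
    [HttpDelete("{id}")]
    public IActionResult Delete(string id)
    {
        _repository.Remove(id);
        return NoContent();
    }
}
```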

Side note: I wrote about my experiences serving video files in my post “ASP.Net Core: Slow Start of File (Video) Download in Internet Explorer 11 and Edge”.

SignalR

While I use Web API to deal with “things”, I use SignalR for “actions”:

  • I want something to happen
  • I want to be notified when something happens

Currently the server distinguishes between "display" and "cockpit" roles for the communication. In the future, it's likely I will have more than one "cockpit" (e.g. a "remote control" on a mobile device) – and more roles when the application grows beyond simple media playback on a single display. Using SignalR's groups feature, the clients of the server receive only those SignalR messages they are interested in as part of their role(s).
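Sketched in code, assigning a client to a group based on its role could look like this; the hub name and the query-string convention are assumptions for this example, not the actual implementation:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class MediaHub : Hub
{
    public override async Task OnConnectedAsync()
    {
        // e.g. the client connects to ".../hub?role=display" or ".../hub?role=cockpit"
        string role = Context.GetHttpContext().Request.Query["role"];

        // From now on, the connection only receives messages sent to its role's group
        await Groups.AddToGroupAsync(Context.ConnectionId, role);
        await base.OnConnectedAsync();
    }
}
```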

Example

When I select a video file in the “cockpit”, I want the “display” to preload the video. This means:

  • The cockpit tells the server via SignalR that a video file with a specific ID should be preloaded in the display.
  • The server tells* the display (again, via SignalR) that the video file should be preloaded.
  • The display creates an HTML video tag and sets the source to a Web API URL that serves the media file.
  • When the video tag has been created and added to the browser DOM, the display tells the server to – in turn – tell the cockpit that the video with the specified ID is ready to be played.

*) When I write “the server tells X”, this actually means that the server sends a message to all connections in group “X”.
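On the server, the relay between the groups can then be as simple as this (continuing the hypothetical MediaHub from above, with the group assignment omitted; method and message names are made up for this post):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class MediaHub : Hub
{
    // Invoked by the cockpit: forward the request to all "display" connections
    public Task PreloadVideo(string mediaId)
        => Clients.Group("display").SendAsync("PreloadVideo", mediaId);

    // Invoked by the display once the video element is in the DOM:
    // tell the "cockpit" group that the video is ready to be played
    public Task VideoReady(string mediaId)
        => Clients.Group("cockpit").SendAsync("VideoReady", mediaId);
}
```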

The cockpit

In my WPF applications, I use the model-view-viewmodel (MVVM) pattern and the application service pattern.

Using application services, view models can "do stuff" in an abstracted fashion. For example, when the code in a view model requires a confirmation from the user, I don't want to open a WPF dialog box directly from the view model. Instead, my code tells a "user interaction service" to get a confirmation. The view model does not see the application service directly, only an interface. This means that the view model does not know (and does not care) whether the response really comes from a dialog shown to the user or from some unit test code (that may just confirm everything).

Application services can also offer events that view models can subscribe to, so the view models are notified when something interesting happens.
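Expressed in (made-up) code, the view model's perspective on such application services could look like this:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical names for this example
public interface IUserInteractionService
{
    // The view model asks for a confirmation without knowing whether a real
    // dialog is shown or a unit test simply answers "yes"
    Task<bool> ConfirmAsync(string message);
}

public interface IPlaybackService
{
    // Raised when something interesting happens, e.g. the display reports
    // that a video is ready to be played
    event EventHandler<VideoReadyEventArgs> VideoReady;
}

public class VideoReadyEventArgs : EventArgs
{
    public VideoReadyEventArgs(string mediaId)
    {
        MediaId = mediaId;
    }

    public string MediaId { get; }
}
```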

Back to Web API and SignalR: I don’t let any view model use a SignalR connection or call a Web API URL directly. Instead I hide the communication completely behind the abstraction of an application service:

  • Web API calls and outgoing SignalR communication are encapsulated using async methods.
  • Incoming SignalR communication triggers events that view models (and other application services) can subscribe to.
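Here is a sketch of the outgoing side of such a service; the URLs, method names and the use of HttpClient and HubConnection are assumptions for this example:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

// Hypothetical service implementation
public class PlaybackService
{
    private readonly HttpClient _httpClient;     // BaseAddress assumed to point at the server
    private readonly HubConnection _connection;  // already started elsewhere

    public PlaybackService(HttpClient httpClient, HubConnection connection)
    {
        _httpClient = httpClient;
        _connection = connection;
    }

    // Outgoing SignalR communication, wrapped in an async method
    public Task PreloadVideoAsync(string mediaId)
        => _connection.InvokeAsync("PreloadVideo", mediaId);

    // Web API call, wrapped in an async method
    public Task<string> GetPlaylistJsonAsync(string playlistId)
        => _httpClient.GetStringAsync($"api/playlists/{playlistId}");
}
```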

Minor pitfall: When a SignalR hub method is invoked and the handler is called in the WPF program, that code does not run on the UI thread. Getting around this threading issue (using the Post() method of SynchronizationContext.Current) is a good example of an implementation detail that an application service can encapsulate. The application service makes sure that the offered event is raised on the UI thread, and if a view model subscribes to this event, things “just work”.
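A sketch of the incoming side, with that threading detail handled inside the service (assuming the service is created on the UI thread, so SynchronizationContext.Current is the WPF dispatcher's context; names are again made up):

```csharp
using System;
using System.Threading;
using Microsoft.AspNetCore.SignalR.Client;

public class PlaybackNotificationService
{
    // Captured on the UI thread when the service is constructed
    private readonly SynchronizationContext _uiContext = SynchronizationContext.Current;

    public event EventHandler<VideoReadyEventArgs> VideoReady;

    public PlaybackNotificationService(HubConnection connection)
    {
        connection.On<string>("VideoReady", mediaId =>
        {
            // The SignalR handler runs on a background thread; Post() marshals
            // the call to the UI thread, so subscribing view models "just work"
            _uiContext.Post(
                _ => VideoReady?.Invoke(this, new VideoReadyEventArgs(mediaId)),
                null);
        });
    }
}
```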

2 Comments

  • Excellent write-up, thanks for taking the time to make this! It sounds like you have all the services and other moving parts well isolated. One question comes immediately to my mind -- are you the only person who understands all of this, or are you part of a larger team? I.e. if something, god forbid, happens to Roland, is the arena going to lose advertisers and have to scramble to figure all this out? I ask b/c we have to be conscious about having a 'backup' dev that can pick up the pieces if needed.

    Also, any pictures of this stuff in action? :)

    John

  • > are you the only person who understands all of this

    Code: Yes. I'm the only developer, which at the moment has more advantages than disadvantages.
    Usage: No, I have two other (non-technical) persons who help me with the home games.

    > [ If something happens ] is the arena going to lose advertisers

    The other people know how to operate the software. Good usability has always been one of the driving forces for the UI of the software. At first this applied only to those parts that are used during the event, while admin tasks involved editing XML configuration files. That has already changed in some parts, e.g. placing advertising videos on their spots in the playlist is done via drag'n'drop. The vision for the future is "100% Notepad-free configuration".

    > any pictures of this stuff in action? :)

    In 2014 I gave a short talk about the challenges of user experience under pressure at a UX conference in Lisbon (my first talk in English). The slides are on SlideShare (https://www.slideshare.net/RWeigelt/fun-confusion-fear-and-basketball-ux-lx-2014). The information on the slides may no longer be up-to-date, but the photos give you a good overall idea.
