For various debugging processes, I find myself needing to find several bits of runtime information for my properties. This means I want to get the property's PropertyInfo object by Reflection when that property's code is running.
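One way to get there - a sketch of mine, not the original post's code - is to ask MethodBase for the currently executing accessor from inside the getter and strip its compiler-generated get_ prefix:

```csharp
using System;
using System.Reflection;

class Sample
{
    // Holds the PropertyInfo discovered by the last getter call, for inspection.
    public static PropertyInfo LastResolved;

    public int MyValue
    {
        get
        {
            // Inside a getter, the currently executing method is the
            // compiler-generated "get_MyValue" accessor.
            MethodBase accessor = MethodBase.GetCurrentMethod();

            // Strip the "get_" prefix and look the PropertyInfo up
            // on the declaring type.
            string name = accessor.Name.Substring("get_".Length);
            LastResolved = accessor.DeclaringType.GetProperty(name);
            return 42;
        }
    }
}
```

Reading `new Sample().MyValue` fills `LastResolved` with the PropertyInfo for MyValue. A call to GetCurrentMethod also keeps the JIT from inlining the accessor, so the lookup stays accurate.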
Dictionaries have been an annoyance when serializing for a while now. IDictionary in v1.1 and now IDictionary&lt;K,V&gt; in v2.0 are both rejected by XML serialization, forcing us to find workarounds or use DataTables or any number of unsatisfactory solutions.
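One such workaround (a sketch of mine, not an endorsement - the type and method names are made up) is to ferry the entries through a plain serializable list of key/value pairs, which the XmlSerializer accepts happily:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// A serializable surrogate for one dictionary entry.
public class Entry
{
    public string Key;
    public string Value;
}

public static class DictionarySerializer
{
    // Copy the dictionary into a List<Entry> and serialize that instead.
    public static string ToXml(IDictionary<string, string> dictionary)
    {
        List<Entry> entries = new List<Entry>();
        foreach (KeyValuePair<string, string> pair in dictionary)
        {
            Entry entry = new Entry();
            entry.Key = pair.Key;
            entry.Value = pair.Value;
            entries.Add(entry);
        }

        XmlSerializer serializer = new XmlSerializer(typeof(List<Entry>));
        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, entries);
            return writer.ToString();
        }
    }

    // Rebuild the dictionary from the serialized list.
    public static Dictionary<string, string> FromXml(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(List<Entry>));
        using (StringReader reader = new StringReader(xml))
        {
            List<Entry> entries = (List<Entry>)serializer.Deserialize(reader);
            Dictionary<string, string> result = new Dictionary<string, string>();
            foreach (Entry entry in entries)
                result[entry.Key] = entry.Value;
            return result;
        }
    }
}
```

It's extra copying on every roundtrip, which is exactly the kind of unsatisfactory solution I mean.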
I was sitting around munching some code, and needed to filter a list I had:
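Something along these lines (a generic sketch, not my original list) - with C# 2.0's anonymous methods the filter can be written inline as a Predicate:

```csharp
using System.Collections.Generic;

class ListFiltering
{
    // List<T>.FindAll takes a Predicate<T>; with a C# 2.0 anonymous
    // method the condition goes inline, no named helper method needed.
    public static List<int> Evens(List<int> numbers)
    {
        return numbers.FindAll(delegate(int n) { return n % 2 == 0; });
    }
}
```

Calling `ListFiltering.Evens` on { 1, 2, 3, 4, 5, 6 } hands back a new list holding 2, 4 and 6.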
Just a quick tip to prevent others from feeling as foolish as I do:
A relatively obscure new keyword in C# 2.0 is the default(T) keyword.
It evaluates to the default value of the given generic parameter T. If T is a reference type, it returns null; if it's a value type, it returns 0, false, 0.0 or whatever the default value for that type is.
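A tiny example to make it concrete:

```csharp
class Defaults
{
    // default(T) picks the right "zero" for whatever T turns out to be:
    // null for reference types, a zeroed value for value types.
    public static T GetDefault<T>()
    {
        return default(T);
    }
}
```

`GetDefault<string>()` returns null, `GetDefault<int>()` returns 0, `GetDefault<bool>()` returns false - all from the same line of code, which is the whole point inside generic methods.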
As I mentioned in my previous post, I have a need to consistently pass an operation context to any thread I choose to spin. Doing so manually, as I did here, involves a lot of ugly, repetitive code for every thread. A lot of room for mistakes and bugs.
So using the power that .NET delegates give us, I've built a generic Context-bound threadpool queue class to wrap this logic for me.
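The heart of it can be sketched like this. To keep the sketch runnable outside WCF, AmbientContext below is a stand-in of my own for OperationContext.Current - a [ThreadStatic] slot, which is exactly the behavior that makes the real context vanish on new threads; the real class captures and restores OperationContext the same way:

```csharp
using System.Threading;

// Stand-in for OperationContext: an ambient value that is
// [ThreadStatic], so it does not flow to other threads by itself.
public static class AmbientContext
{
    [ThreadStatic]
    public static object Current;
}

public static class ContextQueue
{
    // Capture Current on the calling thread, then restore it on the
    // pool thread before invoking the real callback.
    public static void QueueUserWorkItem(WaitCallback callback, object state)
    {
        object captured = AmbientContext.Current;
        ThreadPool.QueueUserWorkItem(delegate(object s)
        {
            AmbientContext.Current = captured; // restore on the new thread
            callback(s);
        }, state);
    }
}
```

The anonymous method captures the context in its closure, so callers queue work exactly as before and never see the plumbing.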
WCF Services tend to be big, heavy duty things. When I write a service I often want it to do a lot of work for a lot of clients, and it should do it efficiently. This usually means that I will use multithreaded code to get things done concurrently. Whether using the ThreadPool or instantiating a thread of my own, I expect this is a common scenario for WCF service writers.
That's why I was surprised today to find out that the OperationContext of the current service call, our entry point for getting context information about the current call, its headers and so forth - is marked as [ThreadStatic]. This means that the moment I fork off to another thread, I lose all context information. If I want it available, I have to do it myself.
I don't know how ASP.NET deals with this problem. If I spin a new thread under ASP.NET, I don't lose my current HttpContext. A quick glance with Reflector shows that there's no [ThreadStatic] anywhere. Whatever features of IIS they use there, it's probably unavailable for WCF, so we have to do it manually.
The simplest way to pass the context to a thread is just to send it as a parameter:
void Method()
{
    ThreadPool.QueueUserWorkItem(ThreadMethod, OperationContext.Current);
}

void ThreadMethod(object state)
{
    OperationContext.Current = state as OperationContext;
    // Do whatever.
}
Side note: note the automatic delegate inference that .NET 2.0 does, rather than forcing me to manually write new WaitCallback(ThreadMethod) as we did in v1.1:
ThreadPool.QueueUserWorkItem(ThreadMethod, OperationContext.Current);
I don't think this was possible in v1.1.
If we want to spin a thread of our own, we can use the ParameterizedThreadStart delegate:
void Method()
{
    Thread t = new Thread(new ParameterizedThreadStart(ThreadMethod));
    t.Start(OperationContext.Current);
}
If we have parameters to pass to our method, though, we need to be even hackier - maybe define a struct or class to hold our OperationContext as well as the custom parameters, and pass that on to the ThreadMethod and have it disassemble it.
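A sketch of that hack (the type and field names here are my own, invented for illustration):

```csharp
using System.Threading;

// Bundle the captured context together with the method's real parameters.
class WorkItemState
{
    public object Context;      // would hold the captured OperationContext
    public string CustomerName; // example custom parameter
    public int RetryCount;      // example custom parameter
}

class Worker
{
    // Visible for inspection: what the worker thread actually received.
    public static string LastSeen;

    public static Thread Launch(object context, string customerName, int retryCount)
    {
        WorkItemState state = new WorkItemState();
        state.Context = context;
        state.CustomerName = customerName;
        state.RetryCount = retryCount;

        Thread t = new Thread(new ParameterizedThreadStart(ThreadMethod));
        t.Start(state); // the single object parameter carries everything
        return t;
    }

    static void ThreadMethod(object boxed)
    {
        WorkItemState state = (WorkItemState)boxed;
        // Here we would restore OperationContext.Current from state.Context,
        // then do the real work with the unpacked parameters.
        LastSeen = state.CustomerName + ":" + state.RetryCount;
    }
}
```

Every threaded method needs its own such bundle type plus the pack/unpack boilerplate - which is exactly the repetition I want to get rid of.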
There is a better way to build a Thread Launcher that can pass the OperationContext. I'll elaborate on that in my next post.
When using WCF services, we have several options for creating the proxies. We can create them statically using SVCUTIL or the built-in IDE support (Add Service Reference...), or we can generate them dynamically using the ChannelFactory&lt;T&gt; or GenericProxy&lt;T&gt; approaches.
The advantage of the static proxy approach is that it generates local code for us based on remote metadata (WSDL). Even if the service is somewhere out of our reach and control, all we need is for it to expose metadata for us to access it. We get a copy of the service contract and interfaces and we're good to go.
The problem with it is that I have to maintain that proxy. If the service changes, our proxy needs to adapt too. I'm not referring to versioning issues and new methods added, but to big changes in the service contract itself. This may not be an issue when accessing stable services, but it certainly happens a lot during development.
The second approach is much more limited - for it to work we need to have a reference to a shared assembly containing the contracts. It means our services and contracts are .NET classes using the WCF framework, rather than generic WS-* services that can have any implementation. This approach is only useful when we have access to our services' code or assemblies, so it's out of the question for public services.
In short, the dynamic proxy approach is only for use when we control both client and server in the scope of a single application (or group of familiar applications).
But in this context, this is the best way to work. I'll stress the point again - if we meet all the criteria above for using dynamic proxies, we should use them without hesitation.
The amount of work that goes into maintaining the static proxies, making sure that the client and server copies of the contracts are identical, hours of debugging mysterious errors caused by contract mismatches - all with perplexing error messages and little documentation - all these things are simply not worth it.
I'll say it again - if you're writing an N-tier application that uses WCF for communication, have the contracts shared by both client and server and use the GenericProxy<T> class to access it rather than relying on generated proxies and SVCUTIL. Trust me. Your deadline will thank you for it.
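For illustration, here is a minimal sketch of the dynamic approach (the IGreeter contract and the address are placeholders of mine; as I understand it, GenericProxy&lt;T&gt; wraps a ChannelFactory&lt;T&gt; along these lines):

```csharp
using System.ServiceModel;

// The shared contract lives in an assembly referenced by both
// client and server - no generated code anywhere.
[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

public static class ProxyFactory
{
    // Build a typed channel straight from the shared contract.
    // Binding and address are placeholders; in practice they come
    // from configuration.
    public static IGreeter CreateGreeter(string address)
    {
        ChannelFactory<IGreeter> factory =
            new ChannelFactory<IGreeter>(new BasicHttpBinding(),
                                         new EndpointAddress(address));
        return factory.CreateChannel();
    }
}
```

The returned channel implements IGreeter directly, so client code calls `proxy.Greet("...")` against the same interface the server implements - change the contract and both sides fail to compile instead of failing mysteriously at runtime.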
Another error message that stumped me today (after I had removed the IsOneWay parameter and actually got to see it) was the following exception:
The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error
This little gem was caused when calling a method on a service with an interface as a parameter, something like this:
void DoSomething (IMessage message)
When deserializing the IMessage, WCF had no way of knowing what type to deserialize it as, so it threw the charming message above.
I don't know how this situation came to be, since the proxy generated by svcutil seems to create the message as DoSomething(object message) and not IMessage, but the principle should be the same.
The immediate solution that fixed this was adding the [KnownType(typeof(myMessage))] attribute to the method. This allows the deserialization engine to understand the message and do something constructive with it rather than crashing.
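The same mechanism is easiest to see on a data contract (a sketch with my own made-up type names, using the DataContractSerializer directly; the released API also grew a separate ServiceKnownType attribute for operations):

```csharp
using System.Runtime.Serialization;

public interface IMessage
{
    string Body { get; }
}

[DataContract]
public class TextMessage : IMessage
{
    [DataMember]
    public string Text;

    public string Body { get { return Text; } }
}

// Without the KnownType hint, serializing the IMessage-typed member
// fails: the engine cannot guess which concrete type stands behind
// the interface.
[DataContract]
[KnownType(typeof(TextMessage))]
public class Envelope
{
    [DataMember]
    public IMessage Message;
}
```

With the hint in place, a DataContractSerializer for Envelope roundtrips the interface-typed member without complaint.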
Naturally, I am less than pleased with this solution. The whole purpose of using interfaces is that I won't have to know, when coding, what objects will be passed to my service. I just want to expose the interface.
One way to keep this flexibility can be found in the very last paragraph of the long and detailed Data Contract Known Types article on MSDN - it seems that the list of Known Types can be defined globally in the system using a dedicated section in the config file. Details about that are also sparse, and I'm not at all sure it's possible in the January CTP - the section names seem to have changed between the January and February releases, and there are no samples to explain the proper structure for the January version. I hope this gets clearer with the next beta.
The first step in understanding WCF error messages is making sure you actually get them.
You can have an OperationContract defined with the IsOneWay parameter set to True, thus optimizing it by not requiring a reply message.
Since faults and exceptions are represented as replies to the message sent, however, setting IsOneWay to true will cause WCF to simply swallow the exception and not report it - as far as the framework is concerned the message was sent successfully and that's it.
If you want to use the IsOneWay optimization, it's best to save it for later parts of development, after the basic debugging work is done.
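To illustrate the difference (contract and names are mine, for example purposes only):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ILogger
{
    // Fire-and-forget: no reply message is ever sent - which also
    // means no fault will come back if the call blows up server-side.
    [OperationContract(IsOneWay = true)]
    void Log(string entry);

    // During development, keep operations two-way so faults surface
    // as exceptions on the client instead of vanishing silently.
    [OperationContract]
    void LogChecked(string entry);
}
```

Flipping IsOneWay back to the default is a one-attribute change, so deferring the optimization costs nothing.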
As a follow-up to this post: another reason we get CommunicationObjectAbortedExceptions is that our client channel definition does not send any credentials (using a binding that has its SecurityMode set to None) while the service is still on the default settings, expecting Windows authentication. Rather than throwing some sort of SecurityException, this exception is raised instead.
Just a heads up.
In TechEd Israel 2006, I listened to Roy Osherove's excellent talk on the new reflection features in .NET 2.0 - a great assortment of topics on things ranging from DynamicMethods to the ReflectionOnly- family of APIs.
One of the things he mentioned was that the GetCustomAttributes method that's exposed by various reflection types was unsafe. When the method returns the collection of custom attributes, it has to instantiate each one, and this means calling each one's constructor. This could be the "crack in the door" for an attacker to use an extensible plug-in architecture to slide his own code in, by inheriting a custom attribute with malicious constructor code and decorating a class with it.
The solution in .NET 2.0 comes from the CustomAttributeData class, which has a single static method - GetCustomAttributes - which returns the list of attributes without calling their constructors. This seems a cleaner and safer solution, and I couldn't wait to go and fix up my reflection code in the current project with it.
The problem with this implementation, as always, is in the little details. The GetCustomAttributes method on the reflection types is exposed via the ICustomAttributeProvider interface, which is implemented by Type, ParameterInfo, MethodInfo and all those other attribute-supporting constructs. The GetCustomAttributes method of the CustomAttributeData class, though, has several overloads taking a MemberInfo, Assembly, ParameterInfo or Module - four concrete classes that implement ICustomAttributeProvider. This means that while it is theoretically equivalent to the older method, it does not allow me to cast my member/type/assembly to an ICustomAttributeProvider and perform operations on it. I have several attribute-related functions that take either a Type or a Property, so in this case I can cast them both to MemberInfo and use that as my lowest common denominator - but what if I had needed to handle Assemblies as well?
A rather annoying omission. Whenever I'm forced to use concrete classes rather than base classes or interfaces, I get slightly nervous. It may be fine for the time being, but I know that somewhere down the line, I'll try to refactor something and get bitten.
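Here's a small demonstration of the safety difference (attribute and class names are mine) - the attribute's constructor keeps a counter, so we can see it never runs under CustomAttributeData:

```csharp
using System;
using System.Reflection;

public class NoisyAttribute : Attribute
{
    // Counts how many times the constructor actually ran.
    public static int ConstructorCalls;

    public NoisyAttribute() { ConstructorCalls++; }
}

[Noisy]
public class Decorated { }

public static class SafeInspection
{
    // Reads attribute metadata without ever invoking attribute
    // constructors, unlike ICustomAttributeProvider.GetCustomAttributes.
    public static bool HasAttribute(Type type, Type attributeType)
    {
        foreach (CustomAttributeData data in
                 CustomAttributeData.GetCustomAttributes(type))
        {
            if (data.Constructor.DeclaringType == attributeType)
                return true;
        }
        return false;
    }
}
```

`SafeInspection.HasAttribute(typeof(Decorated), typeof(NoisyAttribute))` returns true while ConstructorCalls stays at zero; the classic `typeof(Decorated).GetCustomAttributes(...)` would bump the counter.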
EDIT: searching Google, it seems that I am the very first to use the "Reflection.Omit" pun, or at least to publish it on a Googleable medium. Cool.
I was writing a very simple WCF service. Nothing fancy - just returning an array of structs via TCP, just like three others I'd already implemented. Started testing, stepped through the server code, returned the value from the service interface, then WHAM - a big scary exception:
The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue.
An existing connection was forcibly closed by the remote host
This rather scary error message is not very helpful. Was it because of a timeout? (Unlikely, as it happened immediately). Underlying network resource issue? Happens too predictably. Error processing your message? What does that mean, anyway? WHAT error processing my message? And anyway, the inner SocketException seems to imply that it's a network error, not a message error.
But of course it is. WCF didn't really point me towards it, but it seems I had forgotten to mark my struct as a [DataContract] and its members as [DataMember]s. No contract, nothing to serialize, no message.
Perfectly understandable, once you know what's going on. Totally perplexing until you do.
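For reference, the fix amounts to this (type and member names invented for illustration; in the WCF builds I'm describing, omitting the attributes meant nothing got serialized at all):

```csharp
using System.Runtime.Serialization;

// Without [DataContract]/[DataMember], the struct has no contract,
// nothing to serialize, no message - and the channel dies with the
// opaque socket error instead of a clear explanation.
[DataContract]
public struct Measurement
{
    [DataMember]
    public string Sensor;

    [DataMember]
    public double Value;
}
```

With the attributes in place, a DataContractSerializer (and hence the service) roundtrips the struct cleanly.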
Now let's give this entry some Googlejuice so people who are experiencing this problem can find it:
WCF, Indigo, CommunicationObjectAbortedException, SocketException, DataContract.
There. That should do it.
It turns out that you can easily close a tab with one click on the middle mouse button. On most modern mice, this would be the scroll wheel button. No fuss, no right-clicking, no selecting the tab and going to the other end of the screen for the X button. Just a single middle-click.
This works in Visual Studio 2005, in IE7, Firefox 1.5 and RSSBandit, to go by the applications I've tested it in. Seems to be a common UI convention that I have been entirely ignorant of. It's probably common knowledge for many people, but I only recently discovered it.
When I shared the discovery with my office-mates, it turned out that quite a few of them weren't aware of this trick either, so I figured I'd share it here for all the others who have missed it:
This one may have slipped beneath the radar, with all the new .NET 2.0 and WinFX and other improvements and APIs and language changes. It appears that C# 2.0 adds a new operator to the mix, and I never even knew it was there.
The operator is ??, and it's a conditional non-null operator. Since that doesn't really mean anything, I'll simplify and say that the ?? operator returns the left-hand operand if it is non-null; otherwise it returns the right-hand operand:
string a = null;
string b = "String";
Console.WriteLine (a ?? b); // Will output "String".
This is shorthand for the common pattern we see using the ternary conditional operator, like this:
Console.WriteLine (a != null ? a : b);
which is in turn shorthand for:
if (a != null)
    Console.WriteLine(a);
else
    Console.WriteLine(b);
The ternary conditional is held by many to be a sin against nature and code readability, though I personally find it quite clear and convenient. The non-null conditional is a bit less clear - there's nothing in its syntax to suggest an either/or relation. Perhaps familiarity will clear things up. Perhaps it will lie unused and forgotten. Time will tell.
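Two more places where it earns its keep (method names are mine): the operator chains left to right, and it pairs nicely with the new nullable value types:

```csharp
class CoalesceDemo
{
    // ?? chains: the first non-null operand from the left wins.
    public static string FirstNonNull(string a, string b, string fallback)
    {
        return a ?? b ?? fallback;
    }

    // With int?, ?? unwraps the value or supplies a default in one step.
    public static int ValueOrZero(int? value)
    {
        return value ?? 0;
    }
}
```

`FirstNonNull(null, null, "x")` yields "x", while `ValueOrZero(null)` yields 0 - both without a single explicit null check.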
Note: For some reason, the MSDN library groups the ?? operator with the Assignment operators, rather than the Conditional operators. Strange.