A great response from Greg Leake of Microsoft to Sun's new benchmark:
Good to see that you finally posted the code for your benchmark (albeit two months later), and that you now understand the misapplied maxnetconnections setting in the .NET client driver program. However, you did not post what your new setting was, nor the new results you mention for the corrected version of the benchmark that you claim still outperforms .NET. Why not? I would be interested to see those new results, and how much better .NET performed in your corrected implementation versus your original runs. Some other comments:
-We stand by our results with our posted code, and since you do not dispute our findings with our implementation of your benchmark suite, we assume you agree with them. Is that true? These results clearly show .NET outperforming the shipping JWSDP 1.4 in most tests, with the performance difference widening in .NET's favor as the message payload increases. The differences we find could be explained by the points below.
-It is interesting to see that you re-tested with a newer beta version of JWSDP (1.5 beta 2), whereas in your original tests you used shipping code. We tested with all shipping product (.NET 1.1). Why did you decide to use a beta product for the re-test?
-You used two different driver implementations in your tests (a .NET client for the .NET implementation, and a different Java client for the Java implementation). I would argue that if you want to compare the performance of strictly the backend web services, as your paper claimed to do, you need to use the same benchmark driver program to test both implementations. This is the correct testing methodology: keep everything outside of the system under test (or "SUT" in benchmark geek-speak) the same. Did you test the .NET web services with the Java driver, and the Java web services with the .NET driver? I would like to see what you get when you do. It would also verify basic .NET/J2EE interop over the standard SOAP protocols, as we did with our tests.
-Your newly published code includes a Web.config file for the .NET Web Service app. Assuming the Web.config file you published (and hence are encouraging customers to use to verify the results) is the one you used in your tests, there are a couple of issues. First, you have the compiler set to "debug" mode; second, you have not properly adjusted HTTP authentication to match JWSDP/Tomcat. To do so, you should set the .NET authentication mode to "None", since you are doing no authentication on the Java side. Testing .NET with a higher security setting than J2EE would be an improper comparison.
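For readers who want to check their own Web.config, the two settings in question would look roughly like this. This is a minimal sketch using the standard ASP.NET 1.1 configuration schema, not the actual file from Sun's published code:

```xml
<!-- Hypothetical Web.config fragment illustrating the two settings
     discussed above; any surrounding elements are assumed. -->
<configuration>
  <system.web>
    <!-- debug="true" disables compiler optimizations and should never
         be used for a benchmark run; set it to "false" -->
    <compilation debug="false" />
    <!-- mode="None" matches JWSDP/Tomcat, which performs no
         authentication between client and server -->
    <authentication mode="None" />
  </system.web>
</configuration>
```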
-IIS also has an HTTP authentication setting for the web service's virtual directory, which defaults to Windows integrated authentication (it kicks in when the client is also Windows). Since JWSDP/Tomcat has no such authentication setting between client and web server, you should turn off IIS authentication for the web service virtual directory (done simply via the IIS admin console by right-clicking the virtual directory, bringing up the properties sheet, and unchecking 'Windows integrated authentication'). This might also affect your .NET results.
-You make no mention of testing larger message sizes, as we did, which showed .NET outperforming JWSDP 1.4 by wider and wider margins as the message payload was increased. Did you test larger message sizes, and what did you find?
-As for standards-based benchmarks, these are fine if they add customer value. The article you pointed to in eWeek clearly explains why we decided not to participate once pricing and price-performance were removed as mandatory disclosures with results, and I also encourage customers to read it:
-We have not yet had the chance to go over your .NET code, but we will do so in the hope of pointing out any potential performance differences between our implementation and yours, since any such differences might be useful to the general public.