Just saw on Slashdot a link to a Salon story about how tech call centers handle calls: they are staffed with people who know nothing about the things they are supposed to support. Their tactic is to dispatch people in the smallest fraction of time possible (not caring about the problem in the first place). I don't know if this should make me laugh or cry.
How can it be that, in a sane world, companies survive (and probably thrive) by providing disservices to their (paying) customers?
How can people complain about job offshoring when presented with stories like this one? Offshore companies can do a job (not saying they do, just stating they CAN) as good as this one, only cheaper, and in the end both have the same effect on the customer.
In my (quite limited) experience with technical support, I've had a bit of everything. Before buying some Linksys networking gear (pre Cisco buyout), I emailed their technical support in order to clear up some doubts I had about it. Their response was prompt and they answered all my questions in an impeccable manner. I thought: nice support, they deserve my money. I bought a WRT54G wireless router (now deceased !"#"#$"$#) and a WPC54G card. (Mind you, they will never get a single cent from me again.)
The card worked ONCE and then stopped working (their software said the card wasn't there, even with beta drivers). Some emails later, no solution in sight. I saw on their site that they have a local support number (not very common for such a small country as Portugal), so I thought it was worth a shot. I phoned, someone answered, hmmm, a guy with a funny accent. Not a biggie, we could communicate, but after a while I understood this guy wasn't going to help me. I was talking to someone in India (who could speak Portuguese fluently), someone who was friendly and tactful, but who wasn't really going to help me. He was just reading a stupid script. After the usual routines (remove/insert the card, install/uninstall drivers, check the IP configuration) he said my card was dead and I would need to RMA it.
This was too much trouble: send an email somewhere, get an RMA number, package it, go to the post office and ship it (all at my expense). I decided I was going to try another route, the Google one. After a lot of googling time, I saw someone recommending installing the Belkin software but using the Linksys drivers (they seem to share the same Broadcom chipset). And voilà, it worked. Nice. Anyway, this seemed like a hack, so I emailed Linksys support saying: I have this model, here is my computer's full configuration and some screenshots, and it works with Belkin's software but not your own, which seems shameful. (No help there, dubious responses.) The card now worked, but didn't seem to be very good; sometimes it failed to pick up the router's signal, even though it was just a few meters away (and with line of sight). I got tired of it and bought an Asus card. Ah, what a joy: it works flawlessly anywhere in the house with a strong signal (out of the box it even picks up some APs located a few km from my house). A few months after the episode, the router started failing. Sometimes all the lights would start flashing (which didn't seem very healthy to me) and it stopped working. This time I tried the live chat support. The response was fast, and the people were helpful. They taught me to reset the router, and that seemed to solve the problem (at least for a day or so). After a while I got tired of resetting the router. I decided to buy a new one (RMAing it was too much trouble), said no more money to Linksys, bought a nice (and cheap) Asus router and never looked back (and now have a nice paperweight on my already crowded desk).
And all this to say that some tech support is not like the one described in the article and really tries to help its customers, but from what I've seen on the web, support like the one described in the article seems to be the norm. :-(
On an unrelated subject, while checking my spam folder for false positives I saw an email with the following subject: "$160 Rolex Replica Watch %RANDOM_WORD". It seems someone failed to expand the right macro while trying to fool anti-spam filters. Ah, the arms race continues. :-(
[I know the title sucks, but it's a little hard to synthesize this into a better title :-)]
In this post, Sam states at a certain point:
" Gosh, we were doing this in Windows DNA with COM+. So why isn't it being done in .NET systems today? Sure, it's going on but you would never know it from any .NET books, blogs, conferences (virtually nothing on ES) out there. In sharp contrast, the Java community gets it. They tend to think enterprise. The Java developers tend to know how to build distributed applications that scale. "
It seems to me that Mainsoft thinks this too and sees it as an opportunity. It is launching a tool that allows you to run .NET applications inside a J2EE server. They have a Flash demo; from what I've seen, their pitch seems to be: design your backend on J2EE and your frontend on .NET.
Apparently they have a compiler that transforms MSIL into Java bytecode, and some neat integration running inside Visual Studio (debugging included).
I didn't really understand whether IIS is totally bypassed or is still used for serving ASPX pages.
Oh well, I'm not planning on trying it, just passing the information along; someone might find it interesting.
I've been wondering for a long time why we should use Enterprise Services and what benefits we can reap from them, when they come (in my point of view) at a high price. Basically, the questions I still have today were clearly presented back in November 2002 (a long time ago, by .NET standards) by John Lam. Mine are still present; I don't know if in the meanwhile John has changed his mind.
This post launched a more heated internal debate in my head. I posted some thoughts about it and hoped that either Sam or Robert could explain to me the architecture they have been developing over the last few months, which involves heavy use of ES. I didn't want to know any kind of details about the client nor its internal architecture; I just wanted some rationales, and why they felt that ES were the way to go.
Sure, I know ES provide queued components, role-based security, object pooling, and distributed transactions, among other things, so I can think off the top of my head of a few reasons to use them, but I think they are weak per se and not worth the trouble. (Yes, I know the whole can be bigger than the sum of the parts.)
For argument's sake, let's say we don't need distributed transactions. (I'm not talking about inter-component transactions; I will try to address that later. I'm thinking in terms of transactions spanning multiple databases, or other sorts of transactional resources, MSMQ for example.)
The most used argument for going the ES way is scalability. I've really tried to understand this argument: since ES are based on COM+, calling ES will make us pay the toll for crossing managed/unmanaged boundaries. So I can't really see how ES help on the scalability balance; the cost seems too high for the achieved benefit (none that I can see).
If we need object pooling, we need to measure whether it's worth doing it by hand or using Enterprise Services. For example, here Don Park implemented some simple object pooling by hand (with great results for such a cheap effort); on the other hand, I remember Clemens advocating the use of ES object pooling for guarding access to a limited resource (the example was a terminal emulation program used for screen-scraping some mainframe application with limited logins).
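A hand-rolled pool along the lines of what Don Park did can be surprisingly small. Here's a minimal sketch in Python (the size and dummy factory are made up for illustration; ES object pooling adds activation, lifetime and threading semantics on top of this basic idea):

```python
import queue

class ObjectPool:
    """Fixed-size pool guarding access to a limited resource,
    e.g. a capped number of mainframe logins."""

    def __init__(self, factory, size):
        self._idle = queue.Queue()
        for _ in range(size):          # pre-create `size` instances
            self._idle.put(factory())

    def acquire(self, timeout=None):
        # Blocks until an instance is free, so at most `size`
        # callers hold a resource at any moment.
        return self._idle.get(timeout=timeout)

    def release(self, obj):
        self._idle.put(obj)            # hand the instance back
```

Wrap acquire/release in try/finally so a forgotten release doesn't slowly starve the pool.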
When it comes to distributed transactions, I separate them into two groups:
Transactions spanning multiple transactional resources
The ones that span more than one transactional resource. When we encounter this kind of distributed transaction, using ES seems like a no-brainer to me. I really love the way we can use declarative programming to enlist ES components in a transaction. Doing it manually is really painful: using the DTC and manually enlisting transactions in it is not something I would like to do, so here I think ES are really useful and I would have no doubts about using them. But to be honest, I've never seen that many cases where distributed transactions were needed. I've been presented with a lot of scenarios where distributed transactions seemed called for, but either the technology to support them wasn't present (heterogeneous systems) or they could be considered long-duration transactions, and sagas (the transaction/compensation pattern) were a lot more appropriate.
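To make the declarative point concrete, here is a language-neutral sketch in Python of what attribute-based enlistment buys you. The decorator plays the role of ES's Transaction attribute, and FakeContext stands in for whatever transaction manager sits underneath (both names are mine, not a real API):

```python
import functools

def requires_transaction(fn):
    """Declarative demarcation: the wrapper begins, commits or rolls
    back, so the business method never touches transaction plumbing."""
    @functools.wraps(fn)
    def wrapper(ctx, *args, **kwargs):
        ctx.begin()
        try:
            result = fn(ctx, *args, **kwargs)
        except Exception:
            ctx.rollback()             # auto-abort on any failure
            raise
        ctx.commit()                   # auto-complete on success
        return result
    return wrapper

class FakeContext:
    """Stand-in transaction manager that just records what happened."""
    def __init__(self):
        self.log = []
    def begin(self):    self.log.append("begin")
    def commit(self):   self.log.append("commit")
    def rollback(self): self.log.append("rollback")

@requires_transaction
def transfer(ctx, fail=False):
    if fail:
        raise RuntimeError("debit failed")
    return "ok"
```

The business method body stays free of begin/commit/rollback calls, which is exactly what makes it hard to "forget to close" a transaction.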
Transactions spanning multiple components
Here I use the term components in a broad way; I could as well have used the term modules, methods, or whatever.
In this scenario I mainly see 2 camps:
- Those that argue that a database is something that should merely be used as a data store (period).
- Those that argue that business logic does have a place in the database.
For the first group, there aren't many choices: it really is a degenerate case of a transaction spanning multiple transactional resources. Either you manage transactions manually (a nightmare to manage, and really easy to leave the database hanging by forgetting to close a transaction) or you use ES and do it declaratively. For this kind of scenario, ES are really a great fit. However, there is danger lurking when junior people are involved, people who barely know what a transaction is, let alone what a lock is, so they use transactions indiscriminately (I've seen people opening transactions for reading a database, I've seen people calling database insertions in a tight loop without any sort of transaction, and every time I think I've seen it all, bang, someone amazes me again). People in this camp also argue that they want database independence, that someday they might switch providers. (While such an argument is commendable, I've never seen it happen, and unless you are using some kind of O/R mapper or code generation, such a task will never be easy unless we are dealing with a toy database.) People on this side of the trenches defend (rightly, I might add) that if more performance is necessary you can always buy some more iron (either scale vertically or horizontally), because you have saved a lot in development and maintenance. They also tend to favor code generation techniques.
[Update: here is such an opinion]
The second group are normally the control freaks, the ones who want to squeeze the last millisecond out of the database: a lot of business logic inside the database in order to reduce round trips and to micromanage the transactions (again, inside the database), making sure the transaction time windows are really small and really knowing what kind of locks are involved when they do what they do.
I've found that the people entrenched in this kind of reasoning are normally people who develop applications that are highly contained; the composition of components used and the component interactions (relations, dependencies, use, whatever) are rather small (and highly predictable), so they can manage all these interactions in their heads and therefore easily control them all.
I will pass this one to Robert; he took the plunge and explained, from a security point of view, why he thinks ES present a nice solution.
No arguments from me here. I guess in this case ES seem to be a nice fit (and from previous posts, they seem to use distributed transactions extensively).
[Update: Robert added some clarification to his original post]
[Update] Ian Griffiths has some thoughts on security here
Conclusions? (not really)
Using a DTC for "local transactions" seems like a heavy price too. However, I've read (heard?) somewhere that in Whidbey and Yukon, transactions will only be escalated to the DTC if needed (until then, they stay inside SQL Server exclusively).
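If the rumor holds, the behavior would be something like this sketch (my own toy model, not the real API): the transaction stays lightweight while a single durable resource is enlisted, and only hands off to a distributed coordinator when a second one joins.

```python
class PromotableTransaction:
    """Toy model of lazy escalation: local until a second durable
    resource enlists, then promoted to the distributed coordinator."""

    def __init__(self):
        self.resources = []
        self.promoted = False          # False => cheap local commit path

    def enlist(self, resource):
        self.resources.append(resource)
        if len(self.resources) > 1 and not self.promoted:
            self.promoted = True       # pay the DTC toll only now
```

The nice property is that the common case (one database) never pays the two-phase-commit overhead at all.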
If you have reached this far, you are probably confused. Everything I say implicitly suggests that ES are the way to go, and yet I've never grasped the need to use them. Oh well, maybe it's a paradox, or perhaps this post answers my own questions and the light has been shown to me by myself. (Perhaps it's a train running at full speed towards me, but I bet it's not the Clue Train.) :-)
Oh well. Keep those thoughts coming. [Update]: here are some nice thoughts on this subject too: Oh Where Art Thou Business Logic?
Rico Mariani has written a very nice article about code profiling. It argues that not measuring is worse than measuring, then goes on to describe the kinds of measurement profilers can do, and how you should take these into account when measuring things.
As Dan Bernstein has been caught repeating ad nauseam, "profile, don't speculate".
I guess this quote from him could complement Rico's article quite well:
"A well-known phenomenon in programming is that programmers who fail to profile waste lots of time on dinky little speedups that no user will ever notice. Often the speedups are swamped entirely by the very real cost of code bloat. "
As someone whose graduation work was on "performance and scalability", this brings back some memories. I once had to write a Tcl profiler without making any changes to the interpreter itself. I instrumented code by rewriting the source on the fly (scripting is wonderful) at load time. It sounds a lot more complicated than it was; Tcl was powerful enough to allow me to do this by writing no more than 100 lines of code and without making a single change to its interpreter. This was all managed in "user space" (can't find a better analogy).
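The same trick translates to any dynamic language. A Python sketch of the idea (mine, not the original Tcl code): walk a module at load time and swap each function for a timing wrapper, with the interpreter untouched.

```python
import functools
import time
import types

def instrument_module(mod):
    """Replace every plain function in `mod` with a wrapper that
    accumulates wall-clock time per function name."""
    timings = {}

    def make_wrapper(fn, key):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                timings[key] = timings.get(key, 0.0) + elapsed
        return wrapper

    for name, obj in list(vars(mod).items()):
        if isinstance(obj, types.FunctionType):
            setattr(mod, name, make_wrapper(obj, name))
    return timings
```

Callers never notice the swap: they keep calling the same names, and the totals accumulate in `timings` behind their backs.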
Sam Gentile has written a *must read* post about why .NET developers don't grok scalable distributed systems. He cites lack of literature, among other things.
I don't totally agree with him. I usually choose to scale my applications horizontally: either have subsystems living on different machines (preferably with their own data), or equally configured machines that are load balanced (they can be easily replicated, and when more horsepower is needed you just add some iron to cope with the higher load). From what I've seen, this is easily accomplished, but the database seems to be (invariably) the bottleneck. If you don't accommodate this from the start [replication, data partitioning, etc.], you are doomed sooner or later. (From my experience, the latter seems to happen when scaling the database server vertically is not feasible anymore, or there's no budget to do it :-).) Adrian Bateman seems to share the same opinion and describes it much better than me. :-)
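Data partitioning, the part I keep seeing left for later, can start as simply as routing each record to one of the equally configured servers by a stable hash of its key (a sketch; the shard names are made up):

```python
import hashlib

def shard_for(key, shards):
    """Deterministically route a record key to one of N database
    servers; MD5 keeps the routing stable across processes."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]

# Hypothetical shard names:
SHARDS = ["db-0", "db-1", "db-2"]
```

The catch is re-partitioning when the number of servers changes, which is exactly why this has to be accommodated from the start (or you reach for consistent hashing).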
But there is something I would really love to see: a blueprint of the architecture he and Robert have been writing for the last few months.
I would really like to understand .NET Enterprise Services and their advantages, but I've failed to do so thus far; I guess, as Sam says, there's not sufficient literature. :-)
Maybe Sam will write a book about it? Send me the Amazon link and I will surely pre-order it today. :-)
Meanwhile, TheServerSide.NET is compiling a top 50 list: Who's Who in the Enterprise .NET World?
In the afternoon we had 2 sessions. I think I will once again split this post in two, because I have a lot to talk (digress is probably more correct) about regarding the afternoon sessions.
The first session of the afternoon is always the toughest for a presenter. It has to pass what I call the nap test: you want to listen to what the presenter is trying to say, but your full stomach just wants you to sleep (kinda like Captain Picard trying to defeat the Borg by issuing a sleep command into the collective :-) ).
The first session, "“Indigo”: The Longhorn Communications Architecture", was presented by Clemens Vasters. For me this was the most awaited session. The session content and format changed a little bit before the Lisbon session and I got a little heated :-). Clemens replied that I was exaggerating and that he hadn't removed that much. He was right, my fears were unfounded; to paraphrase Twain, the rumours about the death of a good presentation were greatly exaggerated. Since I only have version 1.0 of the slides, I can only compare them from memory. But from what I remember, the presentation was not "dumbed down" at all: the removed slides were accessory, and the ones that were introduced only made the presentation better (although I had already seen them on the Scalability Tour).
Your honor, before we proceed I want to say a few words in my defense. Dariuz jumped onto the thread and stated:
"- As these are in most places one-day events with one track, you automatically stay in the session, even if you never designed distributed systems. I guess this is still pretty much of the audience, as there is more to see, like Avalon and WinFS for example.
- The bits are still in development, and especially the Indigo team mentioned that the next milestone will have a bunch of changes. So to demo something is not without risk, as what you show could be completely changed in a few weeks."
I feel I was a little misunderstood (the story of my life). When I read Clemens's post I thought he had transformed a "what is Indigo, how can it make your life better and how can you prepare for it" kind of session into a "distributed systems 101" (only). I didn't think he had only removed demos. I don't care for demos that much, nor for syntax examples for that matter; I only care for the "what is", the juice. I feel that this kind of presentation is not meant for schooling people into the whys, for laying down the foundations of a sound architecture, or for hammering the basics into people's foreheads. Those should be prerequisites (period). When I read this great story, it makes me wonder: if these people didn't know squat about transactions, would a presentation like this, with all the whys, have helped?
Let me present you with a little story that is mildly related (just barely, but with a stretch of imagination it makes some sense and seems to be related :-)).
Some years ago (~3 years ago) I inherited a system of which 99.5% had already gone into production. The previous team left, and I was in charge of putting the remaining 0.5% into production (it was already developed, but barely tested). Without going into details, the small missing part was a typical system integration. My system had to be called by some backoffice application (ASP); I would return some information from my database, and the system would format the information in a way that could be drilled down (again calling my system). Not exactly rocket science. My memory may be a little blurred on the details, but this is what I found:
- Some COM+ components in Visual Basic that gathered information; they were to be executed on my system's machines.
- COM+ components being exposed to the outside as a web service, via a wizard present in the SOAP Toolkit 2.0. (The components would be activated locally on the machine; client machines would only invoke a web service, so they didn't care if it was COM, COBOL, or little green men inside the machine issuing SOAP envelopes and XML by hand.)
- Some ASP pages invoking the COM+ components using the MS SOAP Toolkit 2.0 and formatting HTML using XSLT.
[Context: I had full control over my system's machines (all seven of them) but absolutely no control over the client system (3 machines, if memory doesn't fail me).]
I started scratching my head, and doubts began to wander through my mind:
- COM+? For exposing some data from the database? Sure, direct database access was out of the question: I didn't want some external system accessing my DB server, and there were firewalls involved.
- Install the SOAP Toolkit on the client? I could already see the system administrator telling me:
- You want to install what? Why? How will it affect my system?
- Does it require a reboot? These guys take their SLAs very seriously, and I was not the idiot who was going to be put between them and their SLAs. [Yes, the system was fully load balanced, but who likes to take chances?] Although these new components were not vital, they would run inside a system that was critical (both internally and externally).
- We can schedule the intervention in 2 months.
- Have to deploy the COM components on all machines? (And the damn wizard DIDN'T work right after installation; you had to manually install some DLL that was distributed with Visual Studio.)
Since I am stupid by design, the KISS principle is something I live and breathe by. So the questions kept pounding in my head: why COM, why COM? (I hope it wasn't just a case of the love-of-COM mantra.)
Ah, it must be because there are distributed transactions involved, I thought. I started looking at the source code and found some SetComplete and SetAbort calls. But something wasn't adding up: as far as I knew, there were no writes involved, only database reads. I continued digging and, alas, only read accesses [simple fire & forget selects; there were no repeatable-read semantics involved or anything like that]. Now I was completely puzzled: why use MTS at all for a simple read?
I (think I) don't have the NIH syndrome, but a rewrite seemed to be in order. I probably rewrote the damn thing (less than 20 minutes) in less time than it took me to find the Visual Studio CDs and install VB on my machine :-). I used WinHTTP on the client and SQLXML on the server. Simple and effective.
I don't know if this was a classic case of resume-driven architecture, astronaut architecture, or something else; I will let you make your own judgement :-).
[Update] Perhaps Fabrice is right when he says "Architects are much likely to come up with complex and costly solutions, especially if the problem is simple!". But I highly doubt this was the case; nonetheless, it is an interesting angle.
That said, let's talk about the presentation. I think the presentation was quite good; Clemens was up to his usual level (very good). Indigo is surely raising the bar when it comes to distributed architectures. From what I've understood, it will give us asynchronous calls, one-way calls, callbacks, secure transactions, and distributed transactions, the whole shebang (and some other goodies). Using standards and only standards, fully interoperable with different systems as long as they are standards-abiding. Most of all, it will allow us to abstract away the plumbing (thank god).
I think the presentation was a little short on the juice, but there was nothing that could be done about it; it was simply not possible to compress more content into that schedule. Overall it was great.
From what I've read (rumors?), Indigo will include some orchestration borrowed from BizTalk as standard, but that wasn't mentioned. Pity, I would have liked to hear something about it. I guess it's still too early for that.
But I must say there is something that worries me about Indigo (aren't you starting to see a pattern here? I'm never happy, there is ALWAYS something I must complain about :-)). It's not about Indigo per se, it's about distributed transactions (WS-Transaction). [Sure, Clemens, Steve Swartz and Pat Helland have already written a lot about this (fiefdoms and such).]
A distributed transaction is not something I would take lightly (on an intranet it is debatable; on the Internet I would probably say out of the question, buster). On most RDBMSs (the ones not implementing multi-version concurrency control, like SQL Server), an ACID transaction takes locks on the table(s). I wouldn't like to risk my system's performance (or risk it being halted, for that matter) because of a system I know nothing about. I would put most transactions using web services in the long-duration transaction category, and for that problem we already have a long-proven solution: sagas and compensations (I think it's presented in Jim Gray's book). What I'm trying to say is that while I think this feature of Indigo is very interesting, I'm very wary of its use. It will probably be used (misused?) and abused by those that just follow the flavor of the day, or just press the knobs because they are there, because they don't know what is under the hood or don't understand the consequences (like using MTS transactions for a single select :-)). Oh well, maybe it's the price of progress. :-)
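For the record, the saga shape is simple enough to sketch. Each step pairs an action with a compensation; on failure, the compensations of the completed steps run in reverse, so nothing holds locks across systems while waiting (a generic sketch of the pattern, not tied to any particular framework):

```python
def run_saga(steps):
    """steps: list of (action, compensation) pairs. Runs actions in
    order; on failure, runs completed steps' compensations in reverse
    and re-raises, leaving the system logically rolled back."""
    completed = []
    try:
        for action, compensate in steps:
            action()                   # each action commits locally
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()               # semantic undo, not a DB rollback
        raise
```

The trade-off versus a real distributed transaction: intermediate states are visible to other parties, which is precisely why compensations must be business-level undos (cancel the booking) rather than low-level rollbacks.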
This could bring us to a plea such as this: A Plea for Architectural Training. I totally subscribe to this kind of reasoning, but I also think such a mindset change will take a lot of time. (I probably won't live to see the day.)
While looking for saga references, I found this interesting message from Jim Gray. I also found a paper titled Web Services and Business Transactions, which seems very interesting (only skimmed it, but I've put it on the "to read later" pile).
Let those comments flow.
[Decompression time of the day]
When you have a domain named fcuk, any little misspelling can prove to be quite fatal:
Don't mind the picture, notice the product name.
If you buy this product for St. Valentine's, let's hope the invoicing system is not using the same database. (Yeah, yeah, information silos and replicated data, I know; perhaps WinFS will solve this :-))
Today I went to the Longhorn Developer Preview, and these are my impressions. Since it was a long session, I will split this into two posts: the morning sessions and the afternoon ones.
Overall it was a great show; it left some questions open, but good nonetheless.
The first session was entitled "Keynote: The Road to Longhorn" and was presented by David Chappell. The presentation was great; the content was presented in a very clear and succinct manner. David is a fun and articulate presenter. The presentation only lacked one thing in the Longhorn vision: what is the place reserved for ASP.NET and for current (and future) HTML standards? Otherwise the presentation was almost perfect. After this first session I could simply go home with a clear vision of Longhorn. :-)
Disclaimer: I don't like interface or graphical programming; I have no artistic or aesthetic sense at all. When it comes to art and creativity, even a monkey can do better than me.
The second session, "“Avalon”: The Longhorn User Experience", was presented by Lester Madden. I didn't like the session very much; maybe it's a tough subject and hard to present, but I almost fell asleep. As I said before, smart clients make me wary, and Avalon is no exception. While I find the technology very neat, and hope that with Avalon I will never have to write any HTML or DHTML crappy tricks again in my life, during the session I kept asking (no answer then, and still no answer now): what is the place Microsoft is reserving for ASP.NET and for HTML in the future? Is it going to be replaced by XAML, or will they coexist and XAML is only a replacement for WinForms?
The third session, "WinFS: The Longhorn File System", was by Hans Verbeeck. What a great session, and a great presenter. Very good presentation, very well knit. I don't know if the presentation is a rehash of the PDC presentation or a brand new one. I suspect it was original, because terabytes was spelled the way Hans said it ("terrabytes" :-); I googled it just to make sure, and terrabytes don't seem to exist). Either way, the presentation was brilliant. I (think I) now have a very clear picture of WinFS. I don't think it will ever be used directly by users (as is, unless they are power users), but developers will benefit greatly from it, either in their applications or by writing their own shell extensions. I also think this will take a few iterations on Microsoft's part (after the first public official release) until they get it right.
Something that wasn't very clear to me (but that they stated was going to be addressed) is synchronization between WinFS systems and with other data sources. If I synchronize my WinFS with other sources (a PDA or another application, a CRM for example), isn't this another silo? Why centralize it in WinFS if I then synchronize it? Isn't this maintaining the silos?
Does someone know of an online comparison between the BeOS file system's and WinFS's features? I also wonder (haven't tracked it in years) if ReiserFS can somehow be compared to WinFS.
To-morrow, and to-morrow, and to-morrow,
Creeps in this petty pace from day to day
To the last syllable of recorded time,
And all our yesterdays have lighted fools
The way to dusty death. Out, out, brief candle!
Life's but a walking shadow, a poor player
That struts and frets his hour upon the stage
And then is heard no more: it is a tale
Told by an idiot, full of sound and fury,
Signifying nothing.
- William Shakespeare (1564 - 1616), "Macbeth", Act 5 scene 5
I've always been known as someone who is too vocal, strongly opinionated, and who normally speaks without having all the facts. :-) (Predicting the future is not one of my qualities either.)
Insufficient facts always invite danger.
-- Spock, "Space Seed", stardate 3141.9
So quite rightly, I'm speaking ahead of time. Einstein was probably right when he said "Great spirits have always encountered violent opposition from mediocre minds."
When I said what I said, I was not criticizing per se; I was merely stating that such dumbing down went against my expectations. Clemens is much smarter than me and probably based his decision on some feedback he had, or was influenced by the audience's background in previous presentations. Since an audience is something you can't choose, and it's certainly something I can't control, all I can do is bow before the majority. :-)
I'm probably unworthy of some un-dumbing of the presentation :-), but it was worth the shot <GR&D>
But Clemens, if you are reading this, all you have to do is email me, and either lunch or dinner (your choice) is on me. :-)
The Longhorn Developer Preview is coming to town tomorrow, and the presentation I was looking forward to has just been changed (rats, rats and double rats). I was expecting perfection and nothing less from Clemens, and now I've just been given a (very) cold shower. :-(
"So I took a rather dramatic step: I dropped almost all of the slides that explain how Indigo works. What’s left is mostly only the Service Model’s programming surface. For the eight slides I dropped, I added and modified six slides from the “Scalability” talk written by Steve Swartz and myself for last year’s “Scalable Applications Tour”, which now front the talk. Until about 20 minutes into the “new” talk, I don’t speak about Indigo, at all. And that turned out to be a really good idea."
While I do agree that sometimes the whys are more important than the hows, I think this is not the time nor the place. This is an MS show, not a distributed & scalable systems 101 class. I would like to learn how Indigo technology will help me solve my problems, not hear a description of what my problems are. If people don't know what a distributed system is, let them go to MIT's OpenCourseWare and learn it, or read some patterns and practices from MS. For this kind of presentation, I think that should be a prerequisite. (If this were a presentation about C# generics, wouldn't you expect people to know what OOP is and how to program in C#?)
"David Chappell called my talk an “impossible problem”, mostly because the scope of the talks we are doing is so broad, ranging from the big picture of Longhorn over Avalon and WinFS to the Whidbey innovations and I am stuck in the middle with a technology that solves problems most event attendees don’t consider to have."
While my views can somehow be considered radical, this is the way I see them (you can criticize them, but I'm not going to say I'm sorry :-), though I can be convinced that I'm wrong :-)):
- If they don't have the problem, there is no need to solve it, nor any need to explain how they can identify it.
- If they do have the problem but don't know it, then boy, they are really in trouble (they will either learn, and they can do it by themselves with some fine literature, or they are a typical case of the Peter Principle and even Vasters can't help them). I guess they do not have sufficient foundations to address the problem and should start by learning the basics [boy, sometimes I really sound pedantic and elitist]. The presentation may (emphasis on may) help shed some light, but will not solve their bigger problem. When Louis Armstrong was asked by a reporter what jazz was, he just replied "If you have to ask, you will never know". [Perhaps I'm just being selfish and want a crash course on Indigo before I dive deeper into it. Perhaps I should start by watching Don Box's session on Indigo (sorry, no direct link).]
I think people (most people at least) are there to see how MS can help them solve their problems, not how to identify them. Leo Tolstoy once said "Historians are like deaf people who go on answering questions that no one has asked them." Isn't this such a case?
I guess I want an explanation of why Indigo is the greatest hammer ever built and how I can use it, not a dissertation about the nails I have. :-)
Perhaps I'm wrong, and now that the bar has been put so low, it probably won't be very difficult to impress me. :-) (Although I've already watched most of the presentations on the "Scalable Applications Tour".)
I don't want to sound too harsh, but these are just my 2 cents. If someone is reading this, I would very much like to know your opinions about it. Thanks for listening.
As Mark Twain said, "I have never let my schooling interfere with my education". I would also add a great quote from Thomas Carlyle: "What we become depends on what we read after all of the professors have finished with us. The greatest university of all is a collection of books."
As someone who has developed for the web all my professional life (and although my home network has more Linux machines than Windows ones, and I have in the past worked for a company that produced a cross-platform web application server, I'm for now (and the foreseeable future) nothing more than a sharecropper), I find this smart client push (ClickOnce and Longhorn) a bit disconcerting when it comes to interoperability.
I feel MS is putting its lock-in pressure on full power again. I admit the browser has been stretched to the limit in terms of functionality, usability and responsiveness, but until smart clients have support on other platforms, this makes my stomach hurt and makes me wary of the future.
I wonder if there is such an effort underway.
On the other hand, Sun is going for the Network Computer model once again. They say that with always-on broadband it will be a lot easier to have an NC at home. Perhaps they should mix this with grid computing to ride the hype. :-)
[Update] Wes from .NET Undocumented writes:
"In my article, I got caught up by the extensive reach of XAML, how it cuts across all aspects of the system. Since XAML is how Avalon elements persist themselves, it will be the primary way that OS communicates data. It will definitely be in the clipboard, essentially killing off rich text format and metafiles in five years. Ever tried creating tables in RTF? Aargh! Already, Adobe has a plugin for generating XAML, which can be copied and pasted like text into a XAML UI file. There's also a printer driver that generates XAML from any document. See where this is going?)"
Notice the question "See where this is going?". I think it's going to lead us down the balkanization and fragmentation road. *Sigh* Just what we need.
Where will this leave the emerging and poor developing nations that are going the Linux route in order to try to modernize themselves?
[Update] Guess this is where emerging markets will be led: a stripped-down and cheaper version of Windows XP.