Making callbacks (and Atlas) synchronous, or how to shoot yourself in the foot

I've explained before why XmlHttpRequest should always be used asynchronously. In a nutshell, JavaScript is not multi-threaded, so the only way to keep your application and the browser reasonably responsive is to use some kind of asynchronous pattern. This way, the multitasking is left to the hosting browser, and the JavaScript developer can enjoy a relatively easy programming environment where he only needs to care about events, not about spawning threads and managing locks.
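To make the contrast concrete, here is a minimal sketch of the asynchronous pattern with a raw XmlHttpRequest (the URL and the handleResponse function are just placeholders):

    // Minimal sketch of the asynchronous pattern; "data.ashx" and
    // handleResponse are placeholders for your own endpoint and handler.
    var request = window.XMLHttpRequest ?
        new XMLHttpRequest() :
        new ActiveXObject("Microsoft.XMLHTTP");
    request.onreadystatechange = function() {
        // readyState 4 means the response is complete; until then, the
        // browser UI stays fully responsive.
        if (request.readyState == 4 && request.status == 200) {
            handleResponse(request.responseText);
        }
    };
    request.open("GET", "data.ashx", true); // true = asynchronous
    request.send(null);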

It's important to note that if you click on a link in a browser, it usually doesn't freeze: the UI is still fully usable even while the request is being completed. You can still cancel it by pressing the stop button, you can access all the menus, etc.

While a synchronous XmlHttp request is going on, it's a different matter: the browser is completely frozen and none of the UI works. This is utterly wrong on several counts.

First, if the server never answers, your users will need to kill the browser (assuming they know how to do that, which they usually don't).

Second, any UI that freezes for more than half a second without giving the user a clue about what's going on looks, as far as the user is concerned, as if it had crashed (remember that the little animation that usually indicates navigation or a postback does not move during an XmlHttp request).

Finally, the web application should not have side effects on its container (the browser). In particular, it should not put it into an unresponsive state. I agree that the browser should not let itself be frozen by its contents, but that's unfortunately the way it is and we just have to deal with it (by the way, Firefox reacts exactly the same way as IE in this department).

That's why callbacks in both ASP.NET 2.0 and Atlas are always asynchronous. The async parameter in the case of ASP.NET callbacks is misleading. It should really be named "parallel": when set to true, any number of callbacks may be initiated simultaneously; when set to false, only the last callback initiated will actually call back.
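To make this concrete, here's roughly what a callback wired up through GetCallbackEventReference looks like on the client; the control ID and handler names below are invented:

    // WebForm_DoCallback is the client function ASP.NET 2.0 emits for
    // callbacks; the last argument is the one discussed above.
    function checkPrice(productId) {
        // true: several of these callbacks may be in flight at once;
        // false would mean only the last one initiated calls back.
        // Either way, the call itself is asynchronous.
        WebForm_DoCallback('ctl00$PricePanel', productId,
            onPriceReceived, null, onPriceError, true);
    }
    function onPriceReceived(result, context) {
        // Update the page with the server's answer.
    }
    function onPriceError(message, context) {
        // Report the failure to the user.
    }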

That being said, I've been getting a lot of feedback lately from people who just dislike so much asynchronous programming that they want nothing to do with it (even if it's conveniently hidden from them like it is in Atlas). Well, if what you really want is to shoot yourself in the foot, who am I to argue with that? You're the customer, and I'm here to answer your demands. So here's the gun... (of course I'm kidding here. I understand why people want to use synchronous callbacks even if I personally disagree).

Add this small script to your page (preferably in the <head> section) and all your XmlHttp requests will be done synchronously no matter what the framework you're using is doing. This works with ASP.NET 2.0 callbacks in both IE and Firefox but breaks callbacks for Opera. I suspect that it would also work with other Ajax frameworks such as Atlas.

<script type="text/javascript">
    // Save the native constructor (undefined on IE6, which only has the ActiveX version).
    var __xmlHttpRequest = window.XMLHttpRequest;
    // Replace it with a wrapper that forces every request to be synchronous.
    window.XMLHttpRequest = XMLHttpRequest = function() {
        var _xmlHttp = null;
        if (!__xmlHttpRequest) {
            // No native XMLHttpRequest: fall back to the ActiveX object.
            try {
                _xmlHttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            catch(ex) {}
        }
        else {
            _xmlHttp = new __xmlHttpRequest();
        }

        if (!_xmlHttp) return null;

        // Delegate most members to the wrapped object...
        this.abort = function() {return _xmlHttp.abort();};
        this.getAllResponseHeaders = function() {return _xmlHttp.getAllResponseHeaders();};
        this.getResponseHeader = function(header) {return _xmlHttp.getResponseHeader(header);};
        // ...except open, which ignores the async flag and always passes false.
        this.open = function(method, url, async, user, password) {
            return _xmlHttp.open(method, url, false, user, password);
        };
        // send blocks until the response is in, then copies the response
        // properties onto the wrapper and fires onreadystatechange once.
        this.send = function(body) {
            _xmlHttp.send(body);
            this.readyState = _xmlHttp.readyState;
            this.responseBody = _xmlHttp.responseBody;
            this.responseStream = _xmlHttp.responseStream;
            this.responseText = _xmlHttp.responseText;
            this.responseXML = _xmlHttp.responseXML;
            this.status = _xmlHttp.status;
            this.statusText = _xmlHttp.statusText;
            if (typeof(this.onreadystatechange) == "function") {
                this.onreadystatechange();
            }
        };
        this.setRequestHeader = function(name, value) {return _xmlHttp.setRequestHeader(name, value);};
    }
</script>
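Once this script is in place, even a request that was opened with the async flag set to true completes before send returns. For example (the URL is just a placeholder):

    var request = new XMLHttpRequest();
    request.onreadystatechange = function() {
        // With the override, this fires from inside send().
    };
    request.open("GET", "data.ashx", true); // silently forced to synchronous
    request.send(null);
    alert(request.responseText); // the response is already available here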

Update: modified the script to include more properties of the XHR object, which makes the script compatible with ASP.NET 3.5.

41 Comments

  • Thanks, very interesting info ;-)

  • I totally agree that the 'A' in AJAX is very important. But I'd like to add that there are ways to fake an 'unfrozen' UI. One that we use in our AJAX applications is to show an animated GIF with a message in front of the UI (e.g. "Please wait...") just before the synchronous call. This way, the GIF (for example, 'the little animation') makes it look as if the UI was not frozen. Plus, we are sure that the user doesn't click anywhere, because he can't. That's the good point about synchronous calls that we like.

  • Synchronous Sucks. The first time I ever used an app that used the xmlhttprequest object was back in 1999. It would fetch reports "Synchronously". Gosh, that was a terrible user experience. The browser would just freeze for 30 seconds.



  • Headline: "Web Developer deep-sixes development project by pretending to know AJAX but making calls synchronously"

    Translation: "Man accidentally shoots son in the head while teaching him hunter safety"

  • Geoff, I agree with you so much it hurts. I just hope I did a good job of explaining how this whole thing is a very bad idea. But you know, people just want that very much, so if that's what they want, here's how to do it... Maybe they'll realize how bad an idea it is after a while, when their users complain. Hopefully.

  • I have had just the opposite experience - using asynchronous calls on IE with its limited pool of concurrent TCP connections causes race conditions and deadlocks...

  • Asynch dude: I think in all browsers, you're limited to two simultaneously open connections per domain. Is that what you're referring to? I've never seen that cause race conditions or deadlocks but if you have a sample repro I'd be happy to take a look at it. Contact me through the contact page of this blog.

  • Ok, so I'm one of the developers that wants to shoot myself in the head - and I'm starting to think that may not be a bad idea after receiving the following request. :)

    I have a very unique situation, however. I have a request to implement a VERY complicated security scheme in an INTRAnet business application. Depending on a user's security profile, INDIVIDUAL items on any given screen may be accessible while others may not be. The security profile is data-driven by the back end, which uses an Active Directory-like schema to implement user and group securities. User security and profiles are controlled by the end users that have the Administrator Group OR Security Administrator permissions assigned to them. Any combination of individual or group permissions is possible. This means the entire security scheme is dynamic.

    I have only been able to think of two ways to implement this.

    1) Use a client callback that uses an out-of-band RSP on the back end to check the user's permissions at the time the item is clicked. The client callback responds accordingly if users don't have permission. If there is no permission, an error window is opened telling the user they don't have permission to perform the requested action, and the requested action is cancelled. If the user has permission, then the action proceeds. This worked very well in beta 2.0. Yes, the browser was blocked, but the response time was always < 1 sec for either the error window to open or the original requested action to proceed. Keep in mind I'm working in a known network environment where I know everyone is using the latest version of IE.

    2) When the user logs in, write a cookie with all of the user permissions in name/value pairs. Use JavaScript to navigate the name/value pairs to see if the user has permission. The rest of the actions are the same as number 1. The routine for navigating the name/value pairs would be complicated because of security rollups in the form of groups.

    Either way I'm looking at some sort of "synchronous" activity. If I could think of a way to get this to work asynchronously I would do it that way. Number 1 was a much cleaner implementation. I also thought that it would be more secure because security information was never stored on the client. I can certainly see a way for a user to "hack" the cookies methodology. Any ideas?

    Thanks for your help.

  • Sushi: Thank you for taking the time to explain your scenario in detail. Sure, synchronous programming is usually easier, I'm not denying that, but I see nothing in your description that necessitates a synchronous call. Option 2 is certainly not the only way to solve the problem, and it's not something I would recommend.

    How about disabling the relevant part of the UI while the callback is going on?

  • I see. I didn't understand you were using an event to cancel the normal operation of the link. Anyway, you have a security check on the target page, right?

    Well, in that case you should not have the target url in the link initially. Instead, just navigate from the callback function, after the security check has returned, using window.location.href.

    So it goes this way. The onclick event triggers a script that disables the UI and triggers the callback. When the server has given its response, the client callback function is called, which re-enables the UI and navigates if the user was validated.

    One more thing: why does the server render these links at all? Couldn't it determine from the server side that the current user does not have access to it and directly disable or make it invisible? It's a frustrating user experience to have buttons or links on a page that you're not allowed to use.
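    Something along these lines, for instance (a minimal sketch; disableUI, enableUI and checkPermissionAsync are made-up placeholders for whatever your page uses):

        // Wired to the link's onclick; returns false to cancel the default navigation.
        function onSecuredLinkClick(targetUrl) {
            disableUI(); // grey out / disable the relevant part of the page
            checkPermissionAsync(targetUrl, function(allowed) {
                enableUI(); // re-enable it once the server has answered
                if (allowed) {
                    window.location.href = targetUrl;
                }
                else {
                    alert("You don't have permission to perform this action.");
                }
            });
            return false;
        }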

  • I agree. It seemed logical at first to make the links invisible or to disable them. However, there are numerous functions on any given page that would have to be enabled/disabled. I chose this implementation due to performance considerations at the time the page loaded. I saved numerous transactions because the security check was only performed when the action was clicked. I considered breaking up the UI, however this would have created a broken/difficult navigation interface. It would be like breaking up XP. Navigation links are not the only things that have security. For example, some users may be able to view data, but not change it. This means that at load time all of the edit/insert buttons would have to be disabled on the screen within every single grid or form view. Or I could have had two screens, one with edit capability, the other without. However, that means having two screens to maintain. A change on one screen forces a change on the other. The two would at some point diverge. It would also create source code that is difficult and time-consuming to update.

    As it stands now, I can implement security on anything that is clicked by just adding a single JavaScript call to the OnClick or OnClientClick event handler of any clickable object. I also gained screen consistency. How would I disable the edit LinkButton in a GridView after the user clicked it? Is there a way for the JavaScript callback event to put a GridView into edit mode on the proper record?

  • I don't know how often permissions change, but you could cache the security information server-side and solve that performance problem.

    About grid linkbuttons, they are going to post back anyway if the security check succeeds, in which case you'll have to re-check the security (the request could very well be forged).

    So with your solution, in the success case you have:

    - two network requests (one for the security callback and one for the postback)

    - two security checks

    In the deny case, you still have one network request and one security check.



    If you removed the callback from this button and only did the security check during the postback, you'd have only one network request and one security check in both cases. It seems like this is the way to go.



    There is currently no way for the JavaScript callback to put the grid in edit mode. You need a postback for that.

  • I do understand your concern, however what to do if, for example, a user refreshes a combo box's contents by picking a value in another combo box?

    This is a simple example where asynchronous can lead to a mess on your server side.

    The user selects something in the combo, and nothing happens within 2-3 seconds. So the user gets nervous and selects the value again, or another one, and so on...

    Resulting in tons of requests on your server...

    Are you sure you want to stay asynchronous?

    I do prefer to freeze one or two browsers rather than killing my server...

  • Steph: You absolutely want to stay asynchronous in this case as well, but as I said, you should disable the relevant UI (in this case the first combo) while the callback is going on. And you should give visual clues to the user as to what's happening (like a "loading" message).
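    For example, something like this (a rough sketch; the element id and refreshCitiesAsync are made up):

        // Disable the first combo and show a hint while the dependent
        // combo is being refreshed asynchronously.
        function onCountryChanged(combo) {
            combo.disabled = true;
            document.getElementById('loadingMessage').style.display = '';
            refreshCitiesAsync(combo.value, function() {
                // Called once the dependent combo has been filled.
                document.getElementById('loadingMessage').style.display = 'none';
                combo.disabled = false;
            });
        }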

  • Hello,

    Is there an example of how to make a single synchronous call, as opposed to the example at the top of the page, which describes how to make ALL calls synchronous? In my case, I need to pass one parameter to a server-side function and return one back to the client (a boolean).

    Thanks much.

  • Stew: it should be fairly easy to only do the XMLHttpRequest substitution around your call and reset the default implementation when you're done. But again, is it worth the trouble, and wouldn't it be easier to just redesign for async?
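    If you really must, it could look something like this (a sketch that assumes you kept a reference to the synchronous wrapper from the post, here called SynchronousXMLHttpRequest; makeTheOneCall is a made-up placeholder):

        var originalXhr = window.XMLHttpRequest;
        window.XMLHttpRequest = SynchronousXMLHttpRequest; // the wrapper from the post
        try {
            // The single call that has to be synchronous goes here; because the
            // wrapper forces synchronous mode, it completes before we restore.
            makeTheOneCall();
        }
        finally {
            window.XMLHttpRequest = originalXhr;
        }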

  • Hi, that was a good workaround which helped me to switch to synchronous calls. But I tried changing false to true, i.e. when calling the _xmlHttp.open method I changed the third parameter from false to true (it is this boolean value which makes the difference between synchronous and asynchronous calls). When I changed it to true and executed, it threw a JavaScript error at this.responseBody = _xmlHttp.responseBody; saying "The data necessary to complete this operation is not yet available." Can anyone help me get rid of this error?
    And one more thing: does this code work on all browsers, like IE 6/7, Firefox 1.5/2.0, Mac Firefox, Opera, etc.?

    Regards,
    shashi

  • Shashi, I don't get it. Why would you want to set that to true if you want synchronous? If you want async, you don't need this hack, but you need to start doing things asynchronously.
    I think this code should work on all browsers although I didn't test everywhere.

  • SurferGirl: setTimeout doesn't return the result of calling your function; it returns a handle that identifies the pending timer. If your timeout handler needs to return information, it should do so into some globally accessible state variable. Furthermore, you should never pass a string into setTimeout. Use a function reference instead (build a callback if necessary to remember the parameter, using Function.createCallback if you're using ASP.NET Ajax).
    There is nothing in your scenario that requires synchronous calls. Only your workflow state needs to be updated after each call. That doesn't mean that you need to freeze the whole browser. Just freeze/disable the UI that affects your workflow during the calls.
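    For example (a sketch; checkWorkflowStep is a made-up placeholder):

        // Pass a function reference, not a string; the closure remembers the parameter.
        function scheduleCheck(stepId) {
            setTimeout(function() {
                // Store whatever this produces in shared state rather than returning it.
                checkWorkflowStep(stepId);
            }, 500);
        }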

  • One legitimate use, I believe, would be custom validator client functions.

    I've just written one that checks if an email address for a new user is already in use and disallows form submission if it is.

    The problem I had is that args.IsValid needs to be set in the validator js function meaning that it needed to be synchronous.

  • Martin: how does that make it legitimate to block the whole browser UI? Just disable the relevant part of the UI during the callback, make isvalid false, and when the callback returns, re-enable the UI, then set isvalid to true if that's what the callback returned, and call ValidatorUpdateIsValid.

  • But that doesn't work, as they will never be able to submit the form!

    The onsubmit will call the validator function, and if args.IsValid is set to false it won't hang around waiting for the async function to complete; it will cancel form submission.



  • Martin: I'll try to put something together and make a blog post with it. Thanks.

  • I have a custom validator and I want to do an ajax call to the server and perform some validation using server side scripting. In my case the script is not known at build time.

    I need to set args.IsValid before the client side validation returns. I was doing this via a synchronous ajax call.

    How could I achieve this with a async ajax call?



  • Just noticed that Martin Smith has asked the same question :)

  • OK, I agree with most of the issues about why you should keep the Ajax async, but then I'm getting this scenario: we are using the AutoCompleteExtender to select from a list of some 10k employees. Upon selection, or onchange, we need to validate that the value has a match in the list (even if the user typed the entire name - no hidden value).
    To add complexity, this is done inside a 3rd-party grid control which has an onbeforeupdate event in which we are hooking in the validation.
    (A similar scenario is when a user types something in the autocomplete and then clicks the save (submit) button.)
    Now, when validation fails it must cancel the update event, and that is not possible if it's async.
    I would love to keep it async, but I don't see any alternative besides shooting myself in the leg (and it will hurt, ouch...).
    Any thoughts?

  • Roi: that is actually a fairly typical scenario that works well with async. Just disable the relevant pieces of UI during the callback.

  • You keep saying to disable only "the relevant pieces of UI". What if the business case is that the whole UI should be disabled? I just don't get it--it seems to me that there may legitimately be some situations in which a synchronous call from a web app MIGHT be justified, such as some kind of dynamic security check in a highly secured, high-speed, intranet-only app in which the request will almost always complete very quickly and almost never fail. Or something. Maybe I'm just too old for this stuff any more :) In any case, I just tend to shy away from pronouncements like "never do this". Of course I recognize you have in fact given us the way to do it--thanks BLR. (Now I can go to work armed and dangerous :)

  • Ah--now it's a little clearer. The last sentence in your 1:11 PM post de-obfuscated the matter a bit for me. So to summarize for my scenario: we use async methodology, but disable the UI by, perhaps, showing a modal "Authenticating request blah blah" popup dialog window (which might only display for a split second normally, but maybe program it to stay displayed for at least 1-2 seconds for user-friendliness, rather than just seeing the screen mysteriously flash for a split second), and implement the error/timeout etc. callback routines to ensure robustness... That does make sense. Kewl.

  • This worked great for ASP.NET 1.0 and 2.0, but now with ASP.NET 3.5, this script does not work. Do you have an update for ASP.NET 3.5? Thanks in advance.

  • @Doug: I haven't tried that with 3.5, but I'm not really surprised, and I really encourage you to try to find another way to reach your goal here. Feel free to drop me a private message on the contact form to describe your scenario and why you think you need synchronous calls, and I'll try to help.

  • Does anyone know how to get this working in AJAX 3.5?? Sometimes SJAX is a good thing, especially when performing mathematical functions in a certain order which is NOT always the same order. Can anyone help get this script working in MS AJAX 3.5??

  • @Alex: The scenario you describe can be achieved by maintaining a simple queue of tasks.
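    Something along these lines, for example (a minimal sketch; each task receives a "done" function and the next task only starts when the previous one has called it):

        var taskQueue = [];
        var taskRunning = false;

        function enqueue(task) {
            taskQueue.push(task);
            runNext();
        }

        function runNext() {
            if (taskRunning || taskQueue.length == 0) return;
            taskRunning = true;
            var task = taskQueue.shift();
            task(function() {
                // The task calls this when its asynchronous work is finished.
                taskRunning = false;
                runNext();
            });
        }

        // Usage: enqueue(function(done) { /* start an async call, invoke done() in its callback */ });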

  • Perhaps that is possible, but would require a huge amount of rewriting. I'll pay you well for the script if you can do it! Not kidding.

  • @Alex: what are you seeing when you try it? How does it fail? Are you getting an error message?

  • I can try again and let you know shortly. Thanks for your help.

  • I put together a simple ASP.NET AJAX 3.5 sample page. On this page are (1) an UpdatePanel, (2) a TextBox, (3) a Button, and (4) a Label. When I omit the SJAX script above, it works fine: it simply updates the Label with the text that I type into the TextBox without a page postback. Easy. Now I add the SJAX script and nothing happens. No JavaScript error. Nothing.

  • @Alex: send me your repro at bleroy at microsoft and I'll have a look.

  • I sent a zipped VB.NET sample project. Thanks a ton!

  • @Nariman: this is by design. I answered the MSDN forum thread with a more detailed explanation. Can you please point me to the MSDN article where you've seen claims that this parameter would make the call synchronous? I'd like to get that corrected.

  • I'll be glad to have a cross-browser version,

    Best Regards,
    Moshe Kaplan. RockeTier
