Archives

Archives / 2004 / December
  • When (not) to override Equals?

    In .NET, you can overload operators as well as override the default implementation of the Equals method. While this looks like a nice feature (and it is, if you know what you're doing), you should be very careful because it can have unexpected repercussions. First, read this. Then this.
    One unexpected consequence of overriding Equals is that you should then also override GetHashCode, if only because the Hashtable implementation relies on the two being in sync for objects used as keys.
    Your implementation should respect three rules:

    1. Two objects for which Equals returns true should have the same hash code.
    2. The hash code distribution for instances of a class should be as random as possible.
    3. If you get the hash code for an object and then modify the object's properties, the hash code should remain the same (just like the song).
    While the first requirement ensures consistency if your class instances are used as the key in a hashtable, the second ensures good performance of the hashtable.
    The third requirement has an annoying consequence: the properties that you use to compute the hash must be immutable (i.e., they must be set from the constructor only and be impossible to change at any time after that).
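    To make the immutable case concrete, here is a minimal sketch (the Point class below is a hypothetical example, not code from this post): every field is readonly and set only in the constructor, so the hash code computed from them can never change.

        public sealed class Point
        {
            private readonly int x;
            private readonly int y;

            public Point(int x, int y)
            {
                this.x = x;
                this.y = y;
            }

            public override bool Equals(object obj)
            {
                Point other = obj as Point;
                if (other == null) return false;
                return x == other.x && y == other.y;
            }

            public override int GetHashCode()
            {
                // Combine the immutable fields: equal points get equal hash codes (rule 1),
                // and the result is stable over the object's lifetime (rule 3).
                return x ^ (y * 31);
            }
        }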
    So what should you do if your Equals implementation involves mutable properties? Well, you could exclude these from the computation of the hash and only take the immutable ones into account, but by doing so, you're compromising requirement number 2.
    The answer is that you should actually never override Equals on a mutable type. You should instead create a ContentsEquals (or whatever name you choose) method to compare the instances and let Equals do its default reference comparison. Don't touch GetHashCode in this case.
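    Here is a sketch of that pattern, assuming a hypothetical mutable Customer class: the value comparison lives in ContentsEquals, and Equals and GetHashCode keep their default reference-based behavior.

        public class Customer
        {
            public string Name;
            public string City;

            // Value comparison gets its own, explicitly named method...
            public bool ContentsEquals(Customer other)
            {
                if (other == null) return false;
                return Name == other.Name && City == other.City;
            }

            // ...and Equals/GetHashCode are deliberately not overridden, so
            // reference identity and the default hash code remain stable.
        }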
     
    Update: It may seem reasonable to say that it's OK to override Equals and GetHashCode on mutable objects if you document clearly that once an object has been used as a key in a hashtable, it should not be changed, and that if it is, unpredictable things can happen. The problem with that, though, is that this constraint is not easily discoverable (it exists in the documentation only). Thus, it is better to avoid overriding them altogether on mutable objects.
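    To make the "unpredictable things" concrete, here is a small hypothetical demonstration (not from the original post): a mutable key that overrides both methods gets lost in the Hashtable as soon as it is modified, because its hash code changes.

        using System;
        using System.Collections;

        public class MutableKey
        {
            public string Name;
            public MutableKey(string name) { Name = name; }

            public override bool Equals(object obj)
            {
                MutableKey other = obj as MutableKey;
                return other != null && Name == other.Name;
            }

            public override int GetHashCode() { return Name.GetHashCode(); }
        }

        public class Demo
        {
            public static void Main()
            {
                Hashtable table = new Hashtable();
                MutableKey key = new MutableKey("a");
                table[key] = "some value";

                key.Name = "b"; // mutate the key after it was used in the Hashtable
                Console.WriteLine(table.ContainsKey(key)); // False: the entry is effectively lost
            }
        }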

  • Black hole evaporation paradox?

    I just sent this letter to Scientific American. I'd be interested to have any informed opinion on the matter.
     
    I’ve read the article about black hole computers with great interest, but there are still a few questions that I think remain unanswered.
     
    The article makes it quite clear how black holes could be memory devices with unique properties, but I didn’t quite understand what kind of logical operations they could perform on the data.
     
    But another, more fundamental question has been bugging me ever since I read the article. From what I remember learning about black holes, if you are an observer outside the black hole, you will see objects falling into the black hole in asymptotically slow motion. The light coming from them will have to overcome a greater and greater gravitational potential as the object approaches the horizon, losing energy along the way and shifting to the red end of the spectrum. From our vantage point, it seems like the object does not reach the horizon in a finite time.
    From a frame that moves with the object, though, it takes finite time to cross the horizon.
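    (For reference, the slowdown and redshift described above follow from the standard Schwarzschild relations, in the usual textbook notation; these equations are an illustration, not part of the original letter:)

        d\tau = \sqrt{1 - \frac{r_s}{r}}\, dt,
        \qquad r_s = \frac{2GM}{c^2},
        \qquad 1 + z = \left(1 - \frac{r_s}{r}\right)^{-1/2} \longrightarrow \infty \quad \text{as } r \to r_s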
    This is all very well and consistent so far. Enter black hole evaporation.
    From our external vantage point, a sufficiently small black hole would evaporate over a finite period of time. So how do we reconcile this with the perception that objects never actually enter the horizon?
    It seems like what would really happen is that the horizon would actually become smaller over time, so that the incoming particles would never enter it.
    If this is true, and no matter ever enters it, would the black hole and the horizon exist at all?
    From the point of view of an incoming object, wouldn’t the horizon seem to recede exponentially fast and disappear before it is reached?
    If nothing ever enters the horizon, is it really a surprise that black hole evaporation conserves the amount of information?
    Does the rate of incoming matter modify the destiny of the black hole? If it grows faster than it evaporates, I suppose the scenario is modified, but how so?
    I know it is quite naïve to think in these terms and that a real response could only come from actual calculations, but still, I hope that you can give me an answer to what looks like a paradox to me. I don’t see how you can reconcile the perceptions of an external and a free-falling frame of reference if the black hole evaporates, unless nothing ever enters the horizon.
     
    UPDATE: a recent paper presents a similar theory to solve the information paradox:

  • More on non-visual controls and the component tray

    Nikhil gives an excellent explanation of this and why data sources are controls (to summarize really quickly, they must be part of the page lifecycle).
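    A rough sketch of what that means (hypothetical code, not Nikhil's example): a non-visual data source still derives from Control, because sitting in the control tree is what lets it receive the Init/Load/PreRender lifecycle events even though it renders nothing. (The real data source controls derive from DataSourceControl, which is itself a Control for exactly this reason.)

        using System;
        using System.Web.UI;

        public class MyDataSource : Control
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                // Being part of the page's control tree means this fires at the
                // right point in the lifecycle, so the data source can set itself
                // up before the controls that bind to it ask for data.
            }

            protected override void Render(HtmlTextWriter writer)
            {
                // Non-visual: emit no markup.
            }
        }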
    This also answers an ongoing discussion on TSS.NET about the SqlDataSource, on a subject similar to this old blog entry.

  • All abstractions are leaky. All. But one?

    There's been a lot of talk about leaky abstractions lately. An abstraction is leaky by definition: it is something simple that stands for something more complex (we'll see later on that this is not entirely true in the fascinating world of physics).
    These arguments make sense up to a certain point. And that point is determined by how much time the abstraction will gain you. The answer with ASP.NET is a lot of time, as anyone who's developed web applications with the technology knows.
    So the abstraction may be leaky, but it doesn't matter: the really important thing is that it's useful.
    Joel's point in his paper was really to explain that at some point you'll need to learn what the abstraction is really standing for because as you use the abstraction in more and more specialized and complex cases, the abstraction will leak more and more. That's true, and the value of an abstraction can more or less be determined by the amount of time you can work with it without having to worry about the complexity that it stands for. Depending on what kind of application you develop, this time can be pretty long with ASP.NET.
    Now, what about physics? Well, in physics, there are leaky abstractions, like thermodynamics, for example, which nicely reduces the complexity of the chaotic microscopic kinetic energy of molecules to very few variables like pressure, temperature or volume. And the abstraction leaks if you start looking at too small a scale, or at a system outside of equilibrium. Still, it's one of the most useful abstractions ever devised: it basically enabled the industrial revolution.
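    The ideal gas law is a nice illustration of that reduction: something like 10^23 molecular trajectories collapse into a single relation between a handful of macroscopic variables,

        PV = nRT

    where P is the pressure, V the volume, T the temperature, n the amount of gas and R a constant.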
    But there are more curious abstractions in physics. If we try to find the ultimate laws of nature, it seems like the closer we look, the simpler the world becomes. In other words, the layers of abstractions that we see in nature seem to become simpler as we go more fundamental. The less abstract a theory, the more leaky it seems, actually.
    Could it be that the universe is the ultimate abstraction, the only one that's not leaky?
    Well, the point is, the universe is no abstraction, it's reality. But if we're lucky and smart enough, we may someday find the only non-leaky abstraction, the one that maps one to one with the universe itself.