I mentioned the other day I wanted to figure out a good way to search volumes of text in my forum without the full-text engine of SQL Server. I didn't really find a lot, other than the way that SQL Server does it (or apparently does it, I've not seen a really thorough explanation).
The obvious path is to break up every piece of text and create a table full of words, filtering out the "junk" words like "the" first. So I copied the posts from CoasterBuzz (about 440k posts) and gave that a shot with a quick prototype.
I got bored with the indexing process and stopped it at about 2 million rows, for no particular reason other than skepticism that this was going to be very fast. With an index on the Words column, I started running some queries in Query Analyzer, and what do you know, the searches were nearly instantaneous. Huh. Not the results I expected. I did some AND's and OR's, still fast.
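For the curious, the indexing side of that prototype boils down to tokenizing each post and throwing out the junk words before writing rows to the Words table. A rough sketch (the stop-word list here is a made-up stub, and a real one would be much longer):

```csharp
using System;
using System.Collections;
using System.Text.RegularExpressions;

public class WordIndexer
{
    // Stub stop-word list; the real thing would be much longer.
    static readonly string[] StopWords = { "the", "a", "an", "and", "or", "of", "to", "in" };

    // Breaks a post into distinct lower-cased words, minus the junk,
    // ready to insert into a Words table keyed by PostID.
    public static ArrayList Tokenize(string postText)
    {
        ArrayList words = new ArrayList();
        foreach (Match m in Regex.Matches(postText.ToLower(), @"[a-z0-9']+"))
        {
            if (Array.IndexOf(StopWords, m.Value) < 0 && !words.Contains(m.Value))
                words.Add(m.Value);
        }
        return words;
    }
}
```

From there, the searches are just joins against that table on the indexed Words column.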
So now I'm scratching my head wondering why I didn't try something like this, I don't know, years ago. I expected an endless tweaking effort to get performance to an acceptable level, and it just works. Stuff is never this easy! I assume that boards like vBulletin do something like this as well.
So now I just need to figure out some kind of word ranking scheme. For that I think I need to just look at existing topics that I as a human understand as relevant to a word, then apply that to some goofy algorithm.
Hooray for things being easy for a change, and hooray for SQL Server!
Don't hate me because I'm ignorant... remember that I don't have a formal programming background. If I'm to understand generics correctly, using System.Collections.Generic.List<T> will be ridiculously faster than using an ArrayList, because everything is boxed/unboxed to and from an object in an ArrayList, right? Whereas List<T> is a collection of objects that are a predictable type?
In a related question, is BinarySearch() still the best method to find objects that have a particular property value? And I assume that the type I'm searching has to implement IComparable (or that I have to supply an IComparer)?
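For anyone else puzzling over the same thing: as I understand it, the boxing penalty only bites when T is a value type like int; with reference types, List&lt;T&gt; mostly buys you compile-time type safety and no casting. And BinarySearch() wants a sorted list and a comparable element type, along these lines (the Post class here is just an illustration):

```csharp
using System;
using System.Collections.Generic;

public class Post : IComparable<Post>
{
    public int PostID;
    public string Title;
    public Post(int id, string title) { PostID = id; Title = title; }

    // Sort and search by PostID.
    public int CompareTo(Post other) { return PostID.CompareTo(other.PostID); }
}

public class PostFinder
{
    // BinarySearch() requires the list to be sorted first, and
    // returns a negative number when there's no match.
    public static int FindIndex(List<Post> posts, int id)
    {
        posts.Sort();
        return posts.BinarySearch(new Post(id, null));
    }
}
```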
I was happy to see that a federal court upheld the notion that states shouldn't be taxing voice-over-IP service. This is the correct decision.
States like New York are trying to make the case that it's just like any other phone service, but that's not even remotely true. Heavy regulation of traditional phone service is justified because it's essentially a limited resource and a natural monopoly. It's not like every company can string up a phone network. Cable TV is regulated under the same premise.
Have you looked at your phone bill lately? I'm absolutely astounded at the number of taxes. They take up one entire page of the bill now. It's ridiculous. The only thing that has kept me from flipping to Vonage is that, for the moment, I can't carry over my phone number... yet.
I'm looking for articles that explore text searching strategies. I've read a lot of general ideas about creating word indexes and giving the words rank based on frequency, and referencing those indexed words to the database records that contain the full text. I'd like to read something with a little meat to it regarding performance and such.
Anyone personally take a shot at this kind of thing? And no, I'm not looking for, "I just use SQL Server's full-text indexing." :)
Related to that, I'm curious if anyone has advice on having a Web application communicate with a thread it launched. Please, let's not go into the case against launching new threads from a Web app. The kind I'm thinking of (like indexing text) would be run periodically on a timer from an HttpModule, not user initiated stuff.
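Since I brought it up: the simplest pattern I know for letting request code talk to a background worker is shared state guarded by a lock, with the timer/HttpModule plumbing wrapped around it. A sketch (class and member names are made up):

```csharp
using System;
using System.Threading;

// A background indexing worker that publishes its progress through
// locked shared state, so request code can poll it safely.
public class BackgroundIndexer
{
    readonly object _lock = new object();
    int _rowsIndexed;
    bool _running;

    // Would be kicked off periodically on a Timer from an HttpModule.
    public void Run(int rows)
    {
        lock (_lock) { _running = true; }
        for (int i = 0; i < rows; i++)
        {
            // ...index one post here...
            lock (_lock) { _rowsIndexed++; }
        }
        lock (_lock) { _running = false; }
    }

    // Called from request code (a status page, say) to peek at progress.
    public int RowsIndexed { get { lock (_lock) { return _rowsIndexed; } } }
    public bool Running { get { lock (_lock) { return _running; } } }
}
```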
Your opinions and knowledge are, as always, appreciated.
In my last post about open-source and documentation, Chris Martin makes the assertion that: "If you don't like the way something is implemented, you do it yourself and it ends up in the distribution of said software... Instead of complaining, you should contribute."
I'm not even sure where to begin with that one. I would say more than half of the projects I've ever encountered on SourceForge don't have an ounce of documentation. I'm the last person in the world that believes every project needs a scope document and a stack of use cases, but if you can't at least write some basic documentation to get me started, why should I be interested in continuing or improving upon your work?
Furthermore, this "contribute instead of complain" nonsense is laughable. The biggest open-source zealots say this kind of thing all of the time, and I can't help but wonder what their time is worth. I can do 30 to 40 hours a week for work, but the rest of the time goes to family, friends, and even getting my ass kicked on Halo 2 from time to time. If those work hours aren't generating revenue in some way, I'm really not interested. The good cause of freesoftwaredom is not even on my radar. If there's not a good open-source implementation of something, I'm perfectly fine with paying someone to do the work and hold them accountable.
S Dot One says: "You got the ULTIMATE documentation with open-source... The source code itself."
If I had a dollar for every time I heard that in an agile workshop, I'd live somewhere more tropical than Cleveland. It's the worst cop-out in software development today. Yes, it's true that in an agile/XP environment the code is generally simple enough that you should be able to read it and understand what it does. I get that. We heard that over and over again in a stint I had at The Second Largest Auto Insurance Company. What no one ever explained is how that translated to some kind of context for the use of said code.
The truth is that no matter how narrowly focused a piece of code is, it's rarely something that you can consider language/platform neutral, and there's almost always a different way to do exactly the same thing. So when it comes time to revisit the code, revise it, change it, whatever, you're left scratching your head because you have no idea what context the code was running in, or what business problem it was trying to solve. I saw it happen even in short iterations from an agile team.
I guess what bugs me the most is that people don't speak up and form their own opinions about things like open-source or agile. I don't know if it's fear of retaliation, fashion addiction, consultant backlash, or what.
Am I against open-source software? Of course not. I've been giving away POP Forums for about a year. I also document all 600+ classes, properties and methods. I don't remember what I wrote and what my own reasoning was; I certainly don't expect someone else to guess.
When you get bounced out of Hotmail you of course land on the MSN page, which has links to top stories on MSNBC. So they were plugging the amateur video from the tsunamis that sadly have killed nearly 30,000 people at this point, and I was curious to see just what this looked like.
I bounce on over there and launch the video clip. Wouldn't you know it, you can only view it with Internet Explorer. Could that be any more lame? I can deal with having to use Windows Media Player, but MSNBC wants to force me to use IE? And if I'm using a Mac I should do what?
For all of the crap that Microsoft has taken for exercising its monopoly power to dominate certain markets, and given the pending litigation in Europe, I can't believe they would allow something this lame to occur.
Sure, it's their business, and they can require users to use whatever software they choose. However, when you start doing this with news media, you're doing little but giving your critics the fuel they need to blast you some more. Controlling the distribution of news media in this manner is a real kick in the nuts to the journalists that risk their lives to get the story.
I have to say that the .NET world is fortunate to have a lot of open-source stuff available. I can't even tell you how much I like NUnit.
The problem I find, however, with a lot of open-source software is the complete lack of reasonable documentation. That drives me nuts, although I'm not entirely surprised. It's one thing to give away and share your work, but that scenario doesn't exactly provide a ton of incentive to document it properly.
I think NUnit ended up being useful to me because it was in use on a project I was on, and it essentially has its own book. Most stuff I've encountered doesn't have that luxury.
I'm not suggesting even for a moment that the world would be a better place without these projects, it's just that the price of entry is kind of high for something that is monetarily free.
Yep, the more I think about it, the more I think I want to write another book. I've got an idea that I think will sell, I can write it reasonably faster than the last one, and I'm feeling some enthusiasm for it. I'm going to put together a proposal and see if anyone is interested.
As my role in Maximizing ASP.NET winds down leading to release, I have to say that working with the folks at Addison-Wesley has been a really good experience. They bend over backward to give you the support you need. I felt even before this that they had the best ASP.NET titles on the shelf, and I'm really excited that it was them that picked up the project.
On a side note, I noticed at Borders last week that there aren't really as many ASP.NET books on the shelf as there used to be (casual observation, not a scientific statement). Before Wrox went down the tubes and was sold, there were a lot of really quality niche books that covered specific areas (threading, text manipulation, performance, etc.), and I don't think those areas are being served now. Granted, with such a narrow focus, I don't know what the market is for those books. Did they even sell three or four thousand copies? I get the impression that it's hard to justify publication of anything that doesn't at least get into the five-digit count.
I love ArrayLists. I find them to be among the most useful collections in the .NET Framework. I remember seeing a discussion somewhere a few months ago about making ArrayLists into strongly-typed collections. This was achieved by simply inheriting ArrayList and overriding the Add/Insert methods to make sure the objects being added were a particular type.
I don't know enough about what's going on under the hood to know if there's a performance penalty involved with this. Is checking the type of an object an expensive process?
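From what I understand, a simple `is` check is cheap (it's a runtime type comparison, not anything reflection-heavy), so the penalty shouldn't amount to much next to the cost of growing the list itself. The pattern described looks something like this (Post is a stand-in type):

```csharp
using System;
using System.Collections;

public class Post { }

// A strongly-typed ArrayList along the lines described: override
// Add/Insert and reject anything that isn't the expected type.
public class PostCollection : ArrayList
{
    public override int Add(object value)
    {
        if (!(value is Post))
            throw new ArgumentException("Only Post objects allowed.");
        return base.Add(value);
    }

    public override void Insert(int index, object value)
    {
        if (!(value is Post))
            throw new ArgumentException("Only Post objects allowed.");
        base.Insert(index, value);
    }
}
```

The catch, of course, is that the check happens at runtime instead of compile time, which is exactly what generics were invented to fix.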
I mentioned the other day that I was going to revisit the text parsing engine of POP Forums and essentially start over. What a difference that made. In less than a day I turned years of crap upon crap into something much leaner, about a third less code. I got there with about twice the unit tests that I originally had. Around a dozen regular expressions took care of all of my line break and blockquote woes that I kind of alluded to in that last post. I started with the first test, and kept working through them until they all passed. I don't know if it's the most elegant thing ever, but it appears to work. I dropped it into two of my production sites and so far, so good (yeah, TDD makes you that confident).
Since this entire exercise is really about arriving at the next version, I can now think about features. The big question is, do I want to endeavor into the world of allowing different text colors, and perhaps text sizes? On the pro side, it would be something other forums already offer. That's the entire list for the pro side.
On the con side, I have to deal with different implementations of rich text editors, decide how best to present the changes (span tags, probably), decide if the various heading tags make the most sense, and above all, know that I'll be responsible for some forum where I see something like:
o my f***ing god!!!!!!!111 u suX0rZ!!!!111
One must be a responsible code monkey, after all!
While the group of people that use CampusFish is small, I was astounded to see that only 20% of my visitors use Internet Explorer. Wow. Granted, I know several of the users are Mac guys (they blog about Mac stuff quite a bit), and several others are Firefox users. I assume the rest are the people I don't know. ;)
Those of you bored enough to read my blog regularly probably know that I coach 17-year-old girls in junior Olympic volleyball. In teaching my kids to improve upon their skills, I frequently tell them not to worry so much about the big picture when I want them to concentrate on the smaller things.
For example, to pass accurately on serve receive, you have to be behind the ball and squared to your target, pushing out with your legs, not swinging your arms. If a kid is diving around because they aren't moving to the ball, I set the goal to simply meet the ball, body firmly in front of it. If they can do that consistently, I'm not as concerned about them squaring to target, using their legs or whatever. It's a little victory toward the bigger goal.
Using test-driven development is a lot like that. You write all of these tests, maybe hundreds, and start to write code that passes the tests. There's no way in hell you'll pass them all the first time you run the tests (if you do, you're doing it wrong). But every test you pass is a little victory toward the bigger goal.
If you can concentrate on said victories, I think you can get a lot more enjoyment out of even the most mundane programming tasks. That's why I like TDD.
Today I reached the milestone I was hoping for in the revisions for CampusFish, my little blogging project. After a year, it has only made enough money to cover the SSL certificate, the domain name and some of the bank fees, but it's worth it because I use it. It's where I drop my F-bombs and frustrations on the world and talk about stuff that no one here would likely care about.
When I first launched the site, it was kind of limited because I was so geeked up about using the POP Forums class library to power it all. It still does most of the heavy lifting, but there are about a half-dozen or so data access methods now that do the rest.
It's a lot simpler than .Text in terms of the code base, but it basically does the same thing. There are some additional features like the photo galleries, a recent comments list for members, private messages, friends lists, profile photos, etc. Even prior to the revisions, it was apparently compelling enough for the small group of users, because they're very active with it. Some are having good times with the custom style sheet functionality.
The fun code exercise in this case was doing the trackback mechanism. It's pretty straightforward once you get your arms around the protocol. Granted, users can choose to disallow public comments entirely, so I don't know how much use that will get.
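For anyone tempted to try it: a trackback ping is just an HTTP POST of four form-encoded fields (title, url, excerpt, blog_name) to the target's trackback URL, with a tiny XML response telling you whether it worked. A sketch of the client side, not the actual CampusFish code:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

public class Trackback
{
    // Builds the form-encoded body for a ping. The field names
    // (title, url, excerpt, blog_name) come from the TrackBack spec.
    public static string BuildPingBody(string title, string url, string excerpt, string blogName)
    {
        return "title=" + Uri.EscapeDataString(title)
            + "&url=" + Uri.EscapeDataString(url)
            + "&excerpt=" + Uri.EscapeDataString(excerpt)
            + "&blog_name=" + Uri.EscapeDataString(blogName);
    }

    public static void SendPing(string trackbackUrl, string body)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(trackbackUrl);
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";
        byte[] bytes = Encoding.UTF8.GetBytes(body);
        request.ContentLength = bytes.Length;
        using (Stream s = request.GetRequestStream())
            s.Write(bytes, 0, bytes.Length);
        // The response is a small XML document; <error>0</error> means success.
        request.GetResponse().Close();
    }
}
```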
I've gotta come up with some more interesting style sheets for the users, but that will come in good time.
I decided that perhaps I should rewrite my text parsing engine for POP Forums from scratch instead of trying to band-aid it over and over. So with a clean slate, I have a few decisions to make.
I've noticed that other forums don't get into parsing paragraph tags at all. Instead they use line breaks for everything. What do you think, is this acceptable? If my understanding of XHTML validation is correct, it's OK as long as it's nested within some kind of block element, like <div>. It's certainly a lot easier to parse line breaks instead of properly closed <p> tags, that's for sure.
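The line-break approach really is a one-liner by comparison. Something like this, with the div wrapper keeping the validator happy (a sketch, not a final implementation):

```csharp
using System;
using System.Text.RegularExpressions;

public class LineBreakParser
{
    // Turns plain-text line breaks into <br /> tags and wraps the
    // result in a div so the markup stays valid XHTML.
    public static string Parse(string text)
    {
        string html = Regex.Replace(text.Trim(), @"\r?\n", "<br />");
        return "<div>" + html + "</div>";
    }
}
```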
What's your take? I don't get religious about these things the way some people do, so I'm easily influenced.
I was chatting last night with a high school kid that frequents two of my sites. Smart kid, has a site about a certain amusement park, fortunate enough to have his own PowerBook. We were talking about advertising revenue. He uses one of the same ad firms I do, and he does OK even with limited traffic.
It made me think... what if the Internet had been as mainstream when I was in college (fall '91 to spring '95) as it is today? I remember busting my ass on crappy work-study jobs, along with my radio gig, just to pay the rent my senior year. I was lucky to clear $300 for a month working 80+ hours. Not a lot of beer money, or money to buy other essential items like CD's and a replacement VCR when my hand-me-down died. I also wouldn't have had to settle for my ancient IBM PS/2 Model 25 with no hard drive (though ironically it was my first computer used to touch the Net).
Today, nearly every kid in college has a computer, a laptop even, with a wired dorm. There isn't a doubt in my mind that if I were in college today, I'd have some site and I'd clear a grand a month to seriously party. I might have even studied now and then.
And the effects aren't limited just to income. My former volleyball kids, now in college, are always connected and online. They're there in my buddy list 24/7. There's a totally different social culture aided by the Internet. I don't know if that would've resulted in fewer lonely nights or just a different means to receive a booty call, but it would be different, regardless.
Maybe the weirdest thing is just that life hasn't changed much in ten years now that we have a mainstream Internet. On the other hand, everything has changed. It's a very strange dichotomy.
"I've never understood how Microsoft has profited from IE's dominance."
"This is a very naive view. There is a certain base level of standards compliance that all browsers implement. Beyond that, Microsoft has added significant functional enhancements to IE which allow it to do much more than browsers such as Netscape or Firefox." He goes on to say it has more to do with intranets than the Internet.
Either way, I have to respectfully say that he's full of crap. That sounds like a quote from the MS PR handbook.
Anyone using Firefox right now that is missing out due to the lack of "significant functional enhancements" in IE? Anyone?
That's about what I figured. Yeah, I'm sure you can find some exceptions, but give me a break. Heck, even in Corporate America I see no IE-dependence. In fact, I get mini-throw-up every time I start a new gig and find that a company is still hanging on to Lotus Notes databases, Domino Web servers and such.
I'm as much of a Microsoft cheerleader as anyone. MS products have changed my life and I wrote a book about them. But I haven't seen anyone give any compelling evidence that IE allowed them to earn actual money. Yeah, they killed Netscape by pushing out IE, but so what? Netscape was a company with the most ridiculous business plan ever conceived (if there really was one at all), and the product sucked and got worse every release. The hardcore Internet dorks like me started with Netscape, and eventually moved to IE because Navigator sucked.
That's what kills me about the last six or seven years of this saga. There are really two issues that everyone intermingles into this demonization of Microsoft. The first is that Microsoft used its monopoly to squash competition. Seriously, what competition did Netscape offer? I'm not saying it's right, but to suggest that Netscape was ever going to be a bona fide profitable business is a fantasy.
The second issue is that proprietary IE features would cause Microsoft to own the Web. (Ironically, it should be noted that Netscape's early versions had "extensions" to HTML that did the very same thing.) Yet here we are talking about the relative explosion in market share by Firefox. Huh. A lot of good that desktop monopoly did Microsoft, eh?
Wow, have you read this story from The New York Times (via News.com)? The author just slams the guy from Microsoft, and quite frankly, he kind of deserves it for some of the stupid things he said. Granted, I'll offer that they might have been taken out of context, but that last analogy isn't very good.
My personal feeling is, and has been since I first saw the Web with Mosaic 1.0, that the browser is largely inconsequential in terms of any company's business. If I were to start a new company today, a company that builds Web browsers would not be among my considerations. I've never understood how Microsoft has profited from IE's dominance, or how Netscape back in the day made a buck when you could download the browser for free. Neither company has scored any extra revenue from me, any more than Mozilla has by me using Firefox. The only thing at stake is to say, "ours has more users." That's such a dotcom business plan.
Now of course the Microsoft haters (you know, the tools and morons that refer to the company as "M$," because that dollar sign means capitalism is bad or something) are going to say that they're trying to extend their desktop dominance to the Web. Really? How? Has IE's dominance prevented you from using the Web? There was this long-standing theory that as applications more commonly became Web-based that the browser would be the gateway to those apps, and somehow Microsoft's browser would control it all. That was a stupid theory because it assumes that the Web itself could only be viewed by IE.
If you want to bitch about IE, then by all means complain about the legitimate problems like security and the worst CSS rendering of any browser. Those are things that irritate the crap out of me, and they're the reason I don't use IE anymore.
Despite this, Microsoft is not being harmed by my decision (as a .NET developer, they're obviously getting my money in other ways). In fact, I start to wonder why Microsoft continues to build a browser at all. The one they have doesn't work as it should, there's no sequel in sight, and with XP SP2, there isn't a single reason you need it (Windows Update works on its own, without the browser itself).
It occurred to me that I've made a lot of posts lately indicating that something "sucks" or "blows" or is "terrible" or something similarly negative. It seems I blog a lot when I have something to complain about.
I think this is what happens when you spend too much time in front of the LCD glow. I'm actually very happy, and enjoying life. It's just that in this profession, given my area of "expertise" (stop laughing), there isn't much to talk about right now. I did my fair share of Visual Studio and ASP.NET v2 cheerleading last summer while writing my book.
Actually, there it is... I think I figured it out by talking through it. Since I can't use Whidbey in production, I need to use VS 2003 and ASP.NET v1.1 so I can pay the bills. Indeed, that's enough to make anyone grumpy. Aside from mangling the crap out of my HTML, VS 2003 gets pissed and won't open a Web project if the web.config for it has some other IHttpHandlerFactory taking requests. You get that "drive doesn't map to site" error nonsense. Honestly, who thought that rooting your Web apps in IIS was a good idea? I'll never understand that.
But alas, it won't be beta forever, and this insanely long testing period will result in a nearly perfect product, right?
This is to see who really reads my blog...
In his latest e-mail, Strong Bad takes on the stereotypes of radio.
I laughed so hard at this I nearly peed my pants. Seriously. If you know anything about my resume, you know that I double majored in radio/TV and journalism in college, and I worked professionally in radio for about two years. I'm not sure if it will be as funny to you without that radio experience, but for someone that loved the medium and loathes what it has become, it's freakin' comedy gold.
And speaking of radio, it sure sucks. You can trace the death of good radio back to the days when Congress was into deregulation for the sake of deregulation. When the FCC lifted ownership restrictions on radio, therefore handing the scarce resource of FM bandwidth over to huge media companies, they killed every last chance that radio had to be personal and local. The shit on the air now is programmed from New York, for New York tastes, is pre-recorded, has no show component to it, and the formats absolutely blow. Despite all this, radio revenue has never been higher. Why? Because small local companies can't get their hands on a frequency to challenge Clear Channel and Infinity. It's a joke.
Wow do I hate dealing with rich text editing. The funny thing is, way back when POP Forums was a product I actually sold, I think I may have been the first to use some very basic bold/italic functionality in a forum. Now there are some nice controls out there, free even, but trying to get them to work as you'd like in both IE and Mozilla/Firefox is hopeless.
The latest version of FreeTextBox has one problem: By default it renders bold and italics with span/style tags/attributes. That's bad because what I need for parsing is <b>/<strong> or <i>/<em> tags. To Firefox's credit, it's smart enough to combine them into the same tag, but again, not really what I need. I did find this little gem buried in the Mozilla documentation and tried to work it into a derived class:
public class FTB : FreeTextBox
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        // Gecko's "useCSS" command tells the editor to emit <b>/<i> tags
        // instead of styled spans. (Reconstructed; the original snippet
        // only showed deriving the client-side frame ID.)
        string frameId = this.UniqueID.Replace(":", "_").Remove(0, 1);
        Page.RegisterStartupScript("useCss", "<script>document.getElementById('"
            + frameId + "').contentWindow.document.execCommand('useCSS', false, true);</script>");
    }
}
My next attempt was to try and upgrade my own little control, however ugly it might be, to work in Firefox. Works great, except for the part about copying the HTML from the iframe to the hidden text field. In my version, I use the iframe's onblur event to copy, so if you hit anything else on the page, it'll copy it over before a form submit (by postback or otherwise). Firefox doesn't seem to listen for onblur from an iframe, so that doesn't work. Despite a lot of searching through the FreeTextBox script, I can't see how it does the copy.
So here I am, back at zero.
There really should be a good Flash-based editor, though that of course would cause you to lose text if you accidentally moved back or forward. I've seen a few out there, but they rely on Flash's built-in functionality, which, believe it or not, throws in more junk than IE ever did.
I bought the first Intellimouse Explorer back in... uh, well, actually I don't know when it came out. It actually crapped out on me in the first year, but Microsoft sent me a replacement. I've had that one ever since. It has been at least four years, maybe as many as six.
In the past few months, it started cutting in and out on me, and it wasn't a short in the cable. If I'd cross from the far end of one screen to the opposite end of the other (I use a pair of LCD's), Windows would make the disconnect then connect noise and I'd lose the cursor somewhere. I could almost deal with that if it wasn't for the noises! :) Alas, I decided it was finally time to retire it. It had been good to me. The Microsoft logo had long since been worn off and there are actual grooves in the plastic from my fingers.
I replaced it with the new v4.0. Why not? The last one lasted so long. I got the wired version since I hate changing batteries (as my wife does this regularly on hers). The new version is roughly the same shape, but lighter. The only thing I don't get is why they made the forward and back buttons smaller. Then again, I don't know how many times I've accidentally hit them when grabbing the mouse on the old one.
My Natural Keyboard Pro is still working. It looks disgusting, but it works. I hope it continues to hold on, because I haven't found any other keyboards that have the same tactile feedback I like.
As much as I'd like to think that I can continue to improve POP Forums on my own, I can't. I need some help.
At the root of my problem is the text parsing class. In a nutshell, this thing is supposed to turn the HTML of a rich text editor into "forum code," and turn forum code into valid HTML for display in a forum thread. It mostly does this pretty well, but there are issues related to parsing e-mail and URL's correctly, namely if they appear in tags already.
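To give a flavor of the URL part of the problem: the trick is to auto-link bare URL's without re-linking ones that are already sitting inside a tag. A negative lookbehind gets you part of the way there, something like this (a big simplification of what the class actually has to handle):

```csharp
using System;
using System.Text.RegularExpressions;

public class UrlParser
{
    // Links bare URL's, skipping any already preceded by href=" or a
    // closing bracket (i.e., already inside or right after a tag).
    public static string AutoLink(string text)
    {
        return Regex.Replace(text,
            "(?<!href=\"|>)https?://[^\\s<\"]+",
            "<a href=\"$0\">$0</a>");
    }
}
```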
I've uploaded the class and the NUnit tests here. There are basically just a few tests that don't pass in the ComplexTests method. If anyone would like to take a stab at fixing, please, be my guest and I'll be eternally grateful. I realize the code isn't what it should be, and that starting over is probably a better idea, but you're looking at six generations of band-aided code. Rewriting it entirely is something I just haven't really had time to do.
EDIT: Yes... the class won't compile because I left out the rest of the project. If you want to give it a go, comment out the section that calls the Emoticon class and the censoring functionality. There may be some tests that test emoticon parsing as well, so you'll have to ditch those. Sorry... it would've been too much to try and get it all together, including the config files and database. :)
I'm sure it won't make me more popular by saying it, but I think the Wiki craze among developers is nothing to get excited about. Yeah, it's neat that you can implement such a system, but it seems to breed useless content.
For example, I noticed that FreeTextBox released a new version, so I thought I'd check it out. I downloaded it, but went back to it in a test project on a remote server, where I did not have the original zip (and therefore, not the help files or code samples). I thought, hey, no problem, I'll just check the docs on the site. What a waste of time that turned out to be.
I went to the installation page looking to see what the @Register directive was (seeing as how I had no idea what the proper namespace was). Nope, not there. After looking around some more, I eventually landed on a page with nothing on it at all, and no navigation to get me to something useful.
I'm not a hater. From what I can tell, this version of the control is extra cool, and the price is right. And yes, I'm sure someone wants to comment that I should have had the stuff in the zip file with me, but I didn't. I don't think it's that ridiculous to expect that you'd actually find meaningful documentation for a product, free or not, on the site it came from.
I've yet to see any Wiki evolve into something useful. The concept has been around for a long time, and for a while you'd think that blogging .NET developers saw it as something that would change the world. But here's the thing... Having run sites that encouraged the contribution of content from anyone on the planet since 1998 or so, I can tell you from experience that this kind of Utopian everyone-can-edit idea won't ever work. You can't even trust people to behave in a discussion forum or in blog comments, and you want to have a site anyone can edit content on? Without some kind of moderation, it's useless, and if moderation is to be practical, it has to be of structured data.
So tell me why I'm so uninformed.
I read Scott W.'s article on URL rewriting in .Text, and it's pretty straightforward. What I'm still not getting is how you can handle a default page request without having to wildcard map requests in IIS. For example, in this very blog, you can request "/Jeff" or "/Jeff/" and get my blog. I assume that's because IIS and ASP.NET are assuming this is a request for "/Jeff/default.aspx," but perhaps I'm not seeing something right. I've been looking at the .Text code and it's not entirely obvious to me.
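In the meantime, here's my understanding of the rewriting half, separate from the IIS mapping question: once ASP.NET does see the request, an HttpModule just maps the friendly path to the real one and calls HttpContext.RewritePath() in BeginRequest. The mapping piece reduces to string work like this (the target page and query string are hypothetical, not what .Text actually uses):

```csharp
using System;

public class BlogUrlMapper
{
    // Maps "/Jeff" or "/Jeff/" style requests to the real handler URL,
    // leaving anything with a deeper path or file extension alone.
    public static string Rewrite(string path)
    {
        string trimmed = path.Trim('/');
        if (trimmed.Length > 0 && trimmed.IndexOf('/') < 0 && trimmed.IndexOf('.') < 0)
            return "/blog.aspx?user=" + trimmed;
        return path;
    }
}
```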
Anyone wanna help a guy out?
I didn't sleep well at all last night due to a nasty stomach ache, which I think I can attribute to the popcorn butter they use at the local Cinemark. I feel like crap every time I eat it. (Blade Trinity, by the way, was awesome. Jessica Biel: Action star. Who knew?)
So tonight I thought I'd go to bed early since I would obviously be tired. Yeah, after an hour staring out the window I gave that up. My mind started racing, thinking about some of the projects I have in the pipe. Some of it will lead to revenue, hopefully in the near future, some of it will not. Of course, the more fun stuff isn't revenue generating.
My wife has to get up early for school, so to spare her of the tossing and turning, I came downstairs with the iPod (a bit of Venus Hum) and the laptop to surf for some articles relating to some of the things I have to do. I figure it's the only way I'm going to get this crap out of my head so I can sleep.
At first I was a little annoyed by this, but putting it in perspective, I'm glad I'm getting excited about writing code again. The book really took its toll (though I'd still do it again). Now that I have other things like a new J.O. volleyball team to coach, I think I'm balancing out some more. There's so much I want to accomplish.
Has anyone else noticed that Firefox doesn't always refresh as it should? I'm talking about the meta tag refresh. There are a couple of sites I visit that use these tags to refresh after a login, as does my Trillian "Check Hotmail" link. In these cases, you have to view the source of the page, copy the target URL, and paste it into the address bar to make the refresh happen.
I've been seeing this since the beta days, so I'm surprised it's still a problem.
For reasons no one can explain, iTunes asks me to authorize my music about every other time I try to play songs. I have no idea why. I've got my tracks on no more than three machines, and I'm allowed five.
So I fired off a support request to Apple, which after three go-arounds resulted in little more than an explanation that I could only authorize five machines and that further support could only be achieved via a fee-based call. In each case it was clear that these were copy-pastes, not an effort to try and diagnose the problem.
Using support scripts like this, handled by support drones making minimum wage, might appear good for business in that it keeps costs down, but at what cost? How many customers will just say "F' it" and move on? Probably not many when it comes to Apple stuff, but it's still not a good front for developing further business.
Isn't this the truth: What corporate America can't build: A sentence
It's staggering to me that we have this technology that has become a vital part of life in less than ten short years, yet people communicate more poorly than ever. I've seen it everywhere. I get e-mails from recruiters all the time that look like they were pecked out by a 14-year-old crack addict with hands too unsteady to type the right letters. In my big corporate jobs, I didn't see it from other code monkeys that often, but from outside departments (help desks, HR, etc.).
If you've been to any online forum that covers something you're interested in, you've seen the worst of it. If you like video games, you're really screwed, because I can't remember the last time I saw a coherent video game forum.
I have a strict grammar and spelling policy for my sites. It really pisses off some people, but they leave, and that's fine. I just refuse to allow my little corner of the Internet to be overrun with what we like to call "brain-dead AOLer speak."
I'm in the process of reviewing the copy edits made to my book before it finally goes off to production. I can honestly say that I've never done anything more tedious.
Granted, the editors didn't make a ton of changes (I guess that degree in journalism counts for something after all), but it's enough that you have to read very, very carefully. Looking at these Word documents with all of the tracked changes and comments hurts the eyes. The last time I did something like this was in college circa 1993, when we didn't have Word and you still had to paste together the newspaper. (We didn't have instant messaging or the Web either. How the hell did we survive?)
Aside from looking at the final proofs, this is essentially the end of the project for me. My wife Stephanie keeps yelling at me for blowing it all off as something anyone can do, but I never wake up and think, "Holy crap, I wrote a book and it was published!" I can be an arrogant bastard about a lot of things, but for some reason I tend to understate my professional accomplishments. I couldn't tell you why.
I've got a lot of little projects to start, finish, or think about, but I've also got that question in my mind about whether or not I should write another book. From proposal to publication it will take about 15 months, so if I want to take a stab at supplementing my income in a serious way, I can't wait forever to do it again.
Any seasoned authors have advice?
It didn't take me long after playing with the trial for SmarterMail to see that it was a really good Web application (and it's even a .NET app) and server product. The navigation is ridiculously clean, and honestly you could probably use it as your mail client and never touch a desktop client again. Best of all, it's catching far more spam than IMail ever did. I guess after using IMail for six years, I didn't realize how much it sucked (and went relatively unchanged).
On the other hand, I decided to take advantage of an Overture promo ($100 credit) to try to generate a little traffic for my volleyball site. I've used this service on and off all the way back to the days when it was GoTo.com, and honestly I'm astounded by how poorly designed it generally is. Aside from being slow, the UI is pretty bad and the navigation isn't logical. There are several pages where you try to update something and there's no explanation as to why it didn't save. Oh, and naturally the promo credit wasn't actually applied until I complained. Google's AdWords, by comparison, isn't perfect, but it's quick and straightforward.
I guess when I stop to think about it, there aren't very many really good Web applications that I encounter. When I did Weight Watchers last year to shed a couple of pounds, that one was pretty good. Bank One is pretty good too. (Is it coincidence that these are .NET apps?)
Any other examples that come to mind of really good online applications? I'm really curious to know what anyone with extensive experience with SalesForce.com thinks about it. That should've been my millions...
Ipswitch just sent me a reminder asking if I wanted to renew my service contract for IMail, and truth be told, I'm not really that satisfied with it. The Web interface isn't great, it's expensive, and frankly the spam filtering isn't as good as I suspect it could be.
I launched my volleyball site today, VolleyBuzz.com. This one is probably not much of a commercial venture because I'm not sure how big the audience is. Still, as I found when writing my ASP.NET book, writing about things makes you think more critically about them, and I hope to apply that same discipline to coaching volleyball.
Flat-panel TVs can't topple tubes--just yet
There sure are some problems with this article. First it says that "LCDs are great as desktop PC monitors because they don't have to refresh pictures rapidly." This implies that TVs must refresh faster, which is not even remotely true. My computer LCDs here run at 72 Hz. Even the fastest HD standards top out at 60 Hz (or frames per second).
The article also implies that the quality isn't as good, which I also tend to disagree with. I'll grant that LCDs don't do black as well as CRTs do, but in terms of overall sharpness of picture, especially a digital picture, it's like night and day.
So I'm working up an alternate style sheet for a current project. It looks absolutely beautiful in Firefox, and it's totally predictable. Pop it into IE, and naturally it's a total mess.
But that's not even the worst of it. IE doesn't render half the stuff it should until you scroll it on and off the screen. Text won't appear, but if you scroll it off, then back on, or select it with the mouse, suddenly it appears. What the hell is that?
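For anyone hitting the same thing: those symptoms (text vanishing until you scroll past it or select it) sound a lot like what CSS folks call IE's "peekaboo" bug, which typically shows up around floated elements. A commonly reported workaround, assuming a float is involved, is to trigger IE's proprietary "hasLayout" on the container whose content disappears. A sketch, with a hypothetical selector:

```css
/* Hypothetical selector; the point is forcing IE's hasLayout
   on the container whose text vanishes next to a float. */
.content {
    position: relative;  /* often enough by itself */
    height: 1%;          /* the classic IE-only "Holly hack" trigger */
}
```

The `height: 1%` rule can have side effects in standards-compliant browsers, so it's usually served to IE alone via a conditional comment or an IE-only hack.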
If Microsoft is in no hurry to fix IE, I hope that Firefox continues to gain market share.