August 2004 - Posts
Someone was reading the circular text layout math installment and asked if I had "... any thoughts on how to draw the text so that it is centered?". I'm going to be blatantly honest. The math installments are mainly to bring math to professionals and draw tangible links between theory and implementation. I've seen far too many hacks end up in code where the math isn't understood, but the end result was good enough to get by. With that in mind, I don't have thoughts on how text is centered; I instead go back to the mathematical rules and equations to discover the answer. Hopefully, given the basis in math I've been focusing on, others are now able to do this as well.
Since the original article discussed linearly rotated text as well as circular text, let's center that first. Remember that the transformation pipeline allows us to offset, rotate, and then offset again. We'll use this to make the entire process much easier.
textLocationX = ?? // final destination, your choice
textLocationY = ??
textSizeX = size(text).x
textSizeY = size(text).y
textOffsetX = textSizeX / 2
textOffsetY = textSizeY / 2
// Transformation pipeline in non GDI+ order
translate(-textOffsetX, -textOffsetY) // center the text on the origin
rotate(angle) // rotate about the origin
translate(textLocationX, textLocationY) // move to the final destination
Easy enough. We simply offset or center the text with respect to the origin. Then rotate about the origin. Then move to the final destination. The center of the text will now be deposited on the point given to the equation.
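To make that pipeline concrete, here's a minimal Python sketch of the same math (the function name and plain screen coordinates are my own; the original articles did this with GDI+ transforms):

```python
import math

def centered_rotated_origin(text_w, text_h, dest_x, dest_y, angle_deg):
    """Where to start drawing text of size (text_w, text_h) so that the
    rotated text ends up centered on (dest_x, dest_y).

    Pipeline (applied to the text's top-left corner):
      1. offset to center the text on the origin
      2. rotate about the origin
      3. offset to the final destination
    """
    # 1. center the text on the origin
    x = -text_w / 2
    y = -text_h / 2
    # 2. rotate about the origin
    a = math.radians(angle_deg)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    # 3. move to the final destination
    return rx + dest_x, ry + dest_y
```

At an angle of 0 this degenerates to the familiar top-left = center - size / 2.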
Centered circular text isn't much more difficult. I tried to quickly explain the process in a comment, but it deserves a bit more. Our objective is to convert linear string distance into an angular offset. Our algorithm already takes an angular offset for rendering text, so our new centering code simply needs to sit at the front of the pipeline and modify this angle before we start rendering the characters out. Recall that Carcd is the distance traveled around the circumference per degree; we can find a degree offset by dividing the linear text length by Carcd. This is how far the text will travel around the edge of the circle in degrees. Cut this in half (aka center it), then add or subtract it from the initial angle, and you have your centered circular text.
textSizeX = size(text).x // Upgrade to character wise because of GDI+ measurement crap
sweepAngle = textSizeX / Carcd
offsetAngle = sweepAngle / 2
angle = initial angle +/- offsetAngle // Depending on direction of rotation
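As a sketch, assuming the same Carcd definition from the earlier installment (arc distance per degree, i.e. circumference / 360), the centering step looks like this in Python (function and parameter names are mine):

```python
import math

def centered_start_angle(text_width, radius, initial_angle, clockwise=True):
    """Angle at which to start rendering so the text is centered on
    initial_angle along a circle of the given radius."""
    carcd = (2 * math.pi * radius) / 360.0  # arc distance per degree
    sweep = text_width / carcd              # degrees the text will span
    offset = sweep / 2                      # half the sweep centers it
    # back the start angle up by half the sweep, direction-dependent
    return initial_angle - offset if clockwise else initial_angle + offset
```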
I would hope that the understanding of these processes can flow from the previous articles. I've had a number of contacts based on the series where people are putting together additional concepts to create some cool things, but they needed the initial grounding. I've created some great new controls, a couple of which are powered by Whidbey so I'll hold onto them for a while, that really make use of all the math. I also want to discuss some three dimensional layout opportunities using the same math and GDI+. Stay tuned!
If you haven't watched the Discovery Channel program on the XBox, Microsoft Gaming Studios, and the development of Crimson Skies and the next Oddworld, then you are a very lucky person. I can't quote the program, because I didn't bother taping it, but I can paraphrase some statements that are an utter travesty.
"You can't call them games anymore." Ed Fries... Yep, I think he is right. What I got from the program was that playability was being sacrificed to add plot-line features and CG videos that immerse the player in the story line. A story mode is useful the first time through a game and probably accounts for 10% of the game time spent, unless the entire purpose of the game is delivering the story (aka an RPG). In games like Crimson Skies I could care less about the stupid little video sequences and could care quite a bit more about how the plane flies, the different moves, the different game-play modes, and other PLAYABILITY features in place of ENTERTAINMENT features. If I wanted to be entertained I'd go spend $10 on a movie which is much cheaper than the $50 I spent on the damn video game.
"Unique games with unique playability." Ed Fries again... I could use all sorts of poor language here, but the recent trend in playability is the creation of an environment and then providing the player with the minimum amount of interaction required for them to complete that environment from start to finish. I feel more like I'm being DIRECTED in modern games than that I'm PLAYING them. When you only give me the minimum playable interface you are corraling me into solving problems a specific way. Play the old Gauntlet and the new Gauntlet's for a great examination of how playability has become pure trash. While the entertainment value of the games is definitely improving, graphics are getting better, I'd have to say there is a complete lack of unique playability. To the point that unique games fail, like Deathrow. Yeah, I know it sucked, because it tried something unique with playability, but failed to deliver the full experience of that playability.
Yesterday I posted on the game Rush Hour (Spatially oriented puzzles, density, and perceived complexity...) and about its various types of complexity. Today, I'm desperately seeking some puzzle game that is going to last more than a few hours. When all was said and done for this particular puzzle game I was out about 30 bucks and only got around 6 hours of gameplay from the 120 puzzle cards. I've been to quite a few game stores and I can honestly say this was one of the more challenging puzzle games that I've managed to find compared to all the others, so similar games with a much deeper focus on creating complex solutions might be something to look at (they have a railroad version that adds a new piece type that can move both horizontally and vertically).
Since I love giving content I'll go ahead and provide an examination of the game for you in case you might be willing to play Rush Hour yourself.
- Initial Game - $18 after tax with 40 game cards. The game cards have 4 levels of difficulty (Beginner, Intermediate, Advanced, and Expert). Each level is set apart from the previous by requiring more moves in order to complete the puzzle. I find that some of the Intermediate and most of the Advanced are probably the most difficult in this particular deck based on the spatial orientation and density discussion I had in my previous entry.
- Deck 2 - $8 after tax with an additional 40 game cards and a new 2 square car. This car is generally just a replacement for the old red car that you need to remove from the board in order to win. With a new 2 square car, new types of puzzles are available that weren't previously enabled. Only puzzles that make use of all of the two part cars are new puzzles. The rest were previously available with the existing set of cars. One of the new puzzle types allows you to remove more than one car from the playing field. These are often very easy and the first of the two cars can be removed in about 3 - 5 moves leaving you with the real problem. These could just as easily have been made without the 3 - 5 extraneous moves.
- Deck 3 - $8 after tax with an additional 40 game cards and a new 3 square car. The new car is a limousine and you often have to remove this car from the board in place of the red car. Decks 2 and 3 are exclusive, so you don't need Deck 2 to play Deck 3. This isn't really a problem since the board isn't large enough to support all the cars of both 2 and 3 square types. The new puzzle types enabled with the new car rely on having all of the 3 part cars on the board at once. Because 3 part cars take up half the board, they heavily restrict freedom of movement, and it is often quite obvious where the cars need to wind up if you are going to escape the traffic jam. Based on the freedom of movement principle I find this entire deck to be much easier than both the initial game and the second deck.
The game strategy breaks down quite nicely. There are three types of puzzles that revolve around the movement of the red car and an initial examination phase that will enable you to beat even the most difficult cards with ease.
- Initial Examination - During this step, you need to identify where each piece should wind up if the red car is going to escape. The more dense the board the more obvious this is because you know that empty spots on the board need to be filled in order for filled spots in front of the red car to open up. You can often pick a single empty space and begin mentally finding what I'll call the critical cars. Generally there are between 1 and 4 critical cars that have to change position and consume different parts of the board for the red car to escape.
- Green Light - This is the first puzzle type. A green light puzzle means you can solve the puzzle without moving the red car or only having to move it slightly. This involves mentally finding where the critical cars are up front and moving them into place to open up the path for the red car. These are almost non-existent at higher densities.
- Jams - Jams are much harder than green light puzzles, but I don't think they classify as the hardest puzzle type. In a jam, the red car will be forced to move backwards and forwards several times to let cars pass around it and into position. In a jam, the movement of the red car is fairly localized so the visualization step is still fairly easy. Jams aren't all that hard because they are like knots that unravel. As you solve portions of the jam the pattern gets easier and easier to solve for.
- Stop & Go - Stop and Go puzzles are by far the hardest. Generally these puzzles are very dense and they feature a kind of figure 8 pattern of cars. You are forced to put the cars into a specific sequence in order to transfer the red car between two locations within the puzzle. You work on each side of the puzzle to move the critical cars and have to put the game back into the specific sequence several times, each time with a different car layout (because you are moving critical cars) to pass the red car around and unblock other cars. Unlike jams, the complexity of a Stop & Go never unravels.
I'm thinking of doing an algorithmic study of this game using the Zobrist hash for solving. And then showing how the same memory techniques can be used for creating an entire database of new puzzles. The end result would hopefully be some very solid puzzle cards with great freedom of movement and complex decision trees. There must be an optimally hard puzzle in there somewhere. If the longest time spent on any of the 120 existing puzzles is only about 10 minutes, then maybe I can squeak out a card that takes me 20?
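For anyone curious what that would look like, here's a rough Python sketch of Zobrist hashing applied to Rush Hour board states. Everything here is assumption on my part (the car count, encoding each car by its anchor square on the 6 by 6 grid); the point is just that a move updates the hash with two XORs instead of a rescan, which is what makes a transposition table for a solver cheap:

```python
import random

random.seed(0)  # reproducible keys for the sketch

NUM_CARS = 12        # assumed maximum pieces on the 6x6 board
NUM_POSITIONS = 36   # one slot per grid cell for a car's anchor square

# One random 64-bit key per (car, anchor-position) pair.
ZOBRIST = [[random.getrandbits(64) for _ in range(NUM_POSITIONS)]
           for _ in range(NUM_CARS)]

def board_hash(anchors):
    """anchors[i] is the grid index (row * 6 + col) of car i's anchor
    square. XOR-ing the per-(car, position) keys gives the state hash."""
    h = 0
    for car, pos in enumerate(anchors):
        h ^= ZOBRIST[car][pos]
    return h

def move_hash(h, car, old_pos, new_pos):
    """Update the hash after sliding one car: XOR out the old key,
    XOR in the new one, no board rescan needed."""
    return h ^ ZOBRIST[car][old_pos] ^ ZOBRIST[car][new_pos]
```

A breadth-first solver would drop these hashes into a visited set so it never re-expands a position it has already seen.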
Play the Game!
After taking about an hour in a game store, the owner finally told me a story about a bunch of genius Boeing employees who had spent a great deal of time on a specific game, Rush Hour. Figured it was worth a shot with the game itself being 15 bucks and there being two expansion packs. Hell, with 120 puzzles after the expansions it has to provide some level of difficulty...
First, we have to realize the game is spatially oriented on a 6 by 6 grid. Generally speaking, the fewer pieces on the board the easier the puzzle is. You'll start out with small numbers of pieces for the low end puzzles and work with more pieces at the higher levels. With more pieces you get a higher density of pieces on the board, and the higher the density the fewer possible moves at any given time. In other words, increasing density decreases freedom of movement and therefore makes the decision tree of the game tall and thin.
In all there are a couple of possible decision trees for this type of game. You can have fat, short trees where after each move there is a lot of freedom to decide the next move. This is great because it adds a level of thought at each step as you pick between the various moves. The tall, thin trees discussed above happen as the density increases and you are forced to select specific moves, with choices coming at critical tree sections as you switch from one tree section to another. The final option is a kind of free tree where there is both great freedom and a large number of moves. A game like chess would fall into this last decision tree category.
Now, with most puzzles the more moves required to complete the puzzle, the more complex it supposedly is. I can kind of agree with this. However, I believe puzzles with a balance between freedom and move count actually turn out to be more complex. To bring this home, some of the Grand Master puzzles in Rush Hour only have 2 or 3 critical junctures and the rest of the movements are a fixed function of the density (aka required), while many of the Advanced and Expert puzzles have great freedom and often have 10 or more critical junctures and even false move sections.
The puzzle's rated complexity, being based on move count, often falls short of positively identifying the true complexity of the puzzle at hand. Especially when the mathematical or algorithmic complexity has nothing to do with the human problem solving complexity. The perceived complexity is a much better measure, where the complexity is based on the difficulty from an actual player's stand-point. These are often relative measures, but when you have a series of puzzles, you can logically base the complexity of your later puzzles on the first few puzzles. The complexity may take into account not only the number of moves, but also the initial analysis complexity (can I see how the puzzle can be completed at first look), move type complexity (the various types of transforms that are required), and freedom of movement (number of open decisions in the tree).
Just goes to show that more isn't always better, especially when it comes to puzzles.
I'm pretty sure I've seen a couple of messages go by where the authors of various blogs were worried about a decline in readership. There was one yesterday that particularly caught my eye because the immediate response from the author was, "What are we doing wrong?". That means at least some authors are worried about their own readership, while others primarily care about having a repository for their own information. I think you really need to draw a balance between the two in order to fight general community ignorance of whatever feature or item you are a professional for.
One thing to note is that there are some definite trends. During the winter when you release a new posting, you generally have 800 aggregator reads in the first day. Not sure how the .Text system keeps these sane, perhaps with unique IP counts or something, but not only is this an inaccurate number (5000 different people behind a shared portal IP, for instance, could be getting your RSS), it is also a very context-dependent number.
Your agg views are primarily linked to the time you are on the main feed list. Very few people actually read your feed directly. This is common. I know I have about 100 direct feed viewers because every once in a while I create a posting that doesn't show up on the main feed just so I can get that information. The more active the main feed list is, the less time there will be for your posting to get mass impressions. There really isn't a single formula for maximizing both your time in the feed list and your impressions. Late at night, you'll stick around longer, but you'll only get views from people that leave their aggregators on. Not to mention by the time the morning crew starts you'll be further down in the feed list. Releasing in the morning works pretty well, or before lunch. But since everyone knows this now, you get kicked off the list pretty quickly. Releasing after the work-day, and yes you need to take into account the east coast, often costs you a hundred or so views as well.
Notice I mentioned you get 800 during the winter. Well, people are on vacation during the summer. People are also less likely to be diligent about using their aggregator software when the sun is shining. Strangely enough, over the past month the aggregation has been around 600 for my blog, while in the past week it has shot right back up to 800. Can you maybe blame something on the rain and people being inside more?
Use some statistics to put your mind at ease. It doesn't appear that there is any decline in the readership and statistically we are still on the rise. Readers are always going to be very explicit in what they are looking for. Don't expect all of your articles to be a hit and have thousands of readers, that just won't happen. Just because you are geeked out on some new technology doesn't mean everyone else is. My math installments of late are a perfect example where I planned for my target audience to be about 20% of the people that might read my blog. My game postings are another great example where only a fraction of the readers will have an in-depth interest and you can't honestly say that everyone wants to add new schemas to VS or write custom exchange mailbox clients. At least one other geek is reading your stuff, believe me.
That said, blogging isn't about the immediate return. It could be, I guess, if you got sponsored and paid to blog. It appears that some people do have this sort of associated stigma where each article tends to have direct pay-off for them in some way or another. These are the same people that have similar success in other mediums. I've always been curious how many of the MSDN writers continuously get called back to write more sub-par articles (I said some man, others are actually pretty good), but it's their networking, and they use the same charisma and techniques with their blogging. You should make sure to draw a balance between blogging for yourself (things you find interesting and get an immediate return from), blogging for your readers (targeted and categorized to the audience that will ACTUALLY be interested), and simply keeping your blog as reference material where you can send people that ask questions. This last one is where the blog really pays off. I find that I've already answered 20-30% of the questions that people might ask me somewhere on my blog, and sending them to a location with a coherent article beats giving an impromptu explanation that may be untested, incomplete garbage.
First I'd like for the systems to support the <category> element. I've blogged about this a number of times and a taxonomy for organizing entries on a grander scale than by feed is definitely needed. I know I post up to 3 items a day at times and with the blogs I read I have to parse through about 60 different feeds reading 1 or 2 posts per feed. This can be a real pain in the butt. Further, I don't know what the feeds are going to be about, so as I read them, if I like them, I have to categorize and move them around myself. That just isn't right.
The <category> element is made to be used at both the <channel> and <item> level. This is nice. It means I can use a general taxonomy (such as a global set of categories defined by the site hosting my blog) for my channel that will enable users of the aggregation site to properly position my feed. Each <channel> has an optional domain attribute that allows easy separation of who or what entity is defining my categories. For my own blog the categories might be defined using a domain for the root url to my blog. The weblogs categories would of course be defined with a domain set to weblogs.asp.net...
Hell, I make sure to properly categorize my blog entries so I can find them myself. In fact I would say I am meticulous about it. I think many users would be interested in making use of my own categorization, and if there was a global categorization I would take advantage of that as well to make sure that people could properly index my content within their aggregators. Blogs are quickly becoming resources, and while it is nice to go find the entries through something like Google, it is even nicer when I'm running unwired and can quickly and easily search a database of technical content.
I'm only beating this dead horse because there is an EXISTING feature of RSS 2.0 that makes it possible. You could say none of the aggregators support the feature, but mine does and that is good enough for me. Make the feature widely available in some popular feeds and the aggregation software will follow.
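For the curious, reading <category> out of a feed takes almost no code. Here's a sketch using Python's standard library (the feed fragment and taxonomy URL are made up for illustration):

```python
import xml.etree.ElementTree as ET

# A hypothetical feed fragment with <category> at both levels.
FEED = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <category domain="http://example.com/taxonomy">Graphics/Math</category>
  <item>
    <title>Circular Text</title>
    <category domain="http://example.com/taxonomy">GDI+</category>
  </item>
</channel></rss>"""

def categories(xml_text):
    """Collect (level, domain, value) for every <category> in the feed."""
    channel = ET.fromstring(xml_text).find("channel")
    found = []
    for cat in channel.findall("category"):
        found.append(("channel", cat.get("domain"), cat.text))
    for item in channel.findall("item"):
        for cat in item.findall("category"):
            found.append(("item", cat.get("domain"), cat.text))
    return found
```

The channel-level entries are what an aggregation site would use to position the whole feed; the item-level entries drive per-post filing.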
Okay, the next feature is the <enclosure> element. Anyone think it is extremely backwards to have a linkage system, and a use case scenario related to that system for delayed and background downloads, and still not take advantage of it? The <enclosure> element simply links to some other content. It wants you to give a content length, and that makes it not so useful for more dynamic content or content you plan on changing, but the length is simply a guideline to be used for scheduling download of the content. You can specify a mime-type as well. You could easily link HTML documents, audio/video content (the original design target of the feature), PowerPoint decks, zip files, project files for a buildable solution, and source files for sample programs.
I consider linked content really important. As it is, I have to embed all of my links within the content of my entry. An aggregator could go out and cache those items if it were so inclined (actually I should add that to mine), but it can't tell the difference between linked items and links. They really are different. A linked item might be a sample file or a zip of a project system, while a link might be a URL that does a search on Google. You wouldn't want the search results to be cached by the aggregator. The <enclosure> item also implies features for scheduling the caching of an item. In fact, enclosures could be added to a list and I could select and then schedule each item based on the information provided (like content-length).
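And a matching sketch for <enclosure>: pull out the url/length/type triples and you have enough information to schedule background downloads. Again, the feed fragment here is hypothetical:

```python
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel><item>
  <title>Sample project</title>
  <enclosure url="http://example.com/sample.zip"
             length="102400" type="application/zip"/>
</item></channel></rss>"""

def enclosures(xml_text):
    """Pull (url, length, mime type) for each <enclosure> so an
    aggregator could queue them for download, smallest first."""
    root = ET.fromstring(xml_text)
    found = [(enc.get("url"), int(enc.get("length")), enc.get("type"))
             for enc in root.iter("enclosure")]
    # content-length is only a guideline, but enough to schedule by size
    return sorted(found, key=lambda e: e[1])
```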
That's my RSS gripe for the day/week/month... You can always say, do something about it. I actually have a number of extensions and quite a few RSS sample applications, but the general practice with something as widely used and available as RSS is to leave it alone for fear of breaking it or making it complex beyond necessity. Many great extensions exist that haven't gotten the time of day. I've recently added photo-album support to my own aggregator, and I have to say it is one of the most feature rich extensions I've seen, but it is so infrequently used that few popular aggregators support it.
Getting gold in the Men's 400m is one thing, but getting the silver and bronze as well, now that was just too cool. I don't think most people can appreciate how fast these guys are running, though. First, they are running 400 meters. The normal runner takes about 180 steps per minute, and if you ran an Olympic gold medal time that would mean your stride would be a little over 3 meters... That isn't a joke my friend.
steps per second = 180 / 60 = 3
total steps = 3 * 44 seconds = 132
stride length = 400 / 132 = ~3 meters
At this level these guys aren't improving by seconds any more, they improve by fractions of a second. My fastest 400m time ever was a 48.6. I'm nowhere near that now, and I think my last 400m time was around 65. People that don't run very often look at the numbers and tend to think that a number like 48.6 is close to another number like 50, or that 55 is close to 60. The meters per second speak for themselves:
400 / 44 = ~9 m/s
400 / 48.6 = ~8.2 m/s
400 / 50 = ~8 m/s
400 / 65 = ~6.15 m/s
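If you want to play with the numbers yourself, the arithmetic above is just this (assuming a constant cadence over the whole race, which real races don't have):

```python
def stride_length(steps_per_minute, race_seconds, distance_m):
    """Average stride, given a constant cadence over the race."""
    steps = (steps_per_minute / 60.0) * race_seconds
    return distance_m / steps

def avg_speed(distance_m, race_seconds):
    """Average speed in meters per second."""
    return distance_m / race_seconds
```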
Haha, my latest times are a far cry from Olympic times now, I'm afraid. For every second I'd have to push another 3 meters, or take an additional step and a half in my case. Of course on my way to an Olympic time I'd probably extend my stride a bit, so it would probably mean one extra step per second. That just isn't easy my friend.
Now, at 48.6 I was not a consistent runner. I ranged anywhere from a 56 to a 48.6. In all I think I had three runs below 50 and hundreds in the range from 50 to 56. Improving my average time by fractions of a second was a huge improvement, and it often took an entire season to do just that. It would have been nearly impossible to improve my times down into the high 49's consistently. Knowing the difficulty, my hat is off to our 400m squad, and I'm sure they'll be quite successful in the 4x400m relay. I'm also pretty certain that at 20, our gold medalist will be pushing the world record a little lower in the years to come. Currently less than a second off of Michael Johnson's record, he'll have to improve his movement by an extra foot per second.
To put things into perspective, the world record in the 100m is currently 9.79 seconds. The current 400m runners are putting together 11-second 100s, so they are still about a second off the maximum achievable speeds of a sprinter. The current 200m record is more impressive because, at 19.32, its average pace is actually faster than the 100m record's. You see, Michael Johnson put together a slow start (acceleration) with a maintained higher speed. It all comes down to maintenance. The human body isn't very efficient at maintaining hefty outputs of energy and is much more capable of short bursts. If the 200m is any indication that a faster average speed can be obtained in a longer race than in a shorter one, then the theoretical 400m times have the capability of breaking below 40s in the next 20 years.
If you ask me, preparing for a distance longer than what you run seems to be the key. Michael Johnson's ability to run the 400m at such a level enabled him to maintain great speeds in the 200m and inversely the faster speeds he achieved in the 200 trained him to turn over a great kick around the curve to the finish. You normally consider this type of training cross-training. An intense cycling routine might increase the turnover of a runner, while an intense swimming routine might increase their capacity to run in lower oxygen situations. I'd hate to see the training line-up that turns a 44 into a 43.
As some might have guessed, I am an avid reader and developer of many different types of artificial intelligence. While Darren was busy putting two and two together with the math refresher course he is putting himself through, he found the time to point out an interesting article and make some possibly interesting comments (Linking morsels of informational data... sometimes learning is like a tantalizing treasure hunt). What he noticed is that the more he learns about math in theory, the more tangible links seem to develop in terms of application. In fact many things that he has probably taken for granted have turned out to be enabled through the math he is only now learning.
The reason he is making many of these realizations is the human capacity for categorization. We'll call this human categorization because it is conceptually and functionally different from other categorization algorithms. For instance, human categorization has to be trained, and trained hard, to be truly effective, while other categorization routines that run in silicon tend to have rules that ensure some level of categorization the first time around. For those, the training isn't as important because it is based on fixed rules rather than rules that change over time.
What Darren was realizing as he is re-learning trigonometry and algebra is that there is already a familiarity there. It is easier for him to learn the same concepts a second time through because the memory from the first time was never completely lost. Instead human categorization and learning is based around activation energies or weights. As he relearns the old stuff the weights increase over time and they now have more of an influence over the decision making process. This can be a good thing and a bad thing as we'll discover shortly.
Now, as the activation for these old memories begins to rise and new memories are created, the brain is re-indexing many of Darren's past experiences. Most links in the brain are created through focus. As you focus you can direct your mind that certain paths of thinking are either correct or incorrect. This is the training, as the brain is able to throw out the bad connections and only keep the good ones. Now what happens as you begin to forget your old math, but you begin to implement things that would possibly be much easier if you could remember your old math? Well, the brain still makes links. The proximity is always there, only the activations are smaller now and the focus is missing. Your brain makes all sorts of low activation links. This is the power of the human mind to connect things early on and make use of the massively parallel processes of neurons, but not bother our focused thought process with the information.
That means Darren is at a crucial cross-roads. As he relearns the math, all of the past links start to come to the front of the mind (using very non professional terms). Some of these links are detrimental and some are helpful. Remember the brain made these links without training and so the links are very low activation and they can either be true links or phantom links. The links now have to be examined because they come rushing into the mind the more math (or other subjects) that are refreshed. Hell, sometimes hundreds of different past experiences can come forward over a single evening of study.
But if some of the links are bad and some are good, what is happening in the mind? Well, you have to be skeptical of the thought process. Remember, these are age old mental links, and normally you trust what comes out of your own mind. As each link gains in weight because of the association with the math processes you are learning, you'll have to either reinforce the association or break it. Breaking the association doesn't mean getting rid of the link, it means linking some really bad weight with it (at least that is what turn of the century research tends to show). Wrong things always have the ability to come back and bite you in the arse, and one of the things you'll realize, especially as you grow older, is that it takes a much longer time to unlearn something than it does to learn something new.
Good links continue to grow in the mind. A year in the future those tenuous links that weren't important in the past will be just as vivid as the actual memories and links of the process you actually used. Looking back I can point out many places where things I learned within the last three months would have helped me. This type of cross-indexing where I can associate events from many years ago with things that I'm learning now is ultra-powerful.
This all speaks volumes about the huge parallel process that is human categorization. As we continue to learn new things or use old things, those memories continue to be reinforced and the activation energies increase. The ability to focus on and retrieve those memories increases, and more importantly the activation speed or recall speed improves, which allows us to think quickly. While human categorization is a huge database of valid and invalid links, it is through learning that our familiarity with valid links improves and the activation speeds increase. Only so much can be available at once, and focusing on things that are important at a particular point in time is crucial to being efficient at our current task. The process of switching the active mind to a new palette of information is slow, and unlearning old techniques in favor of new techniques is very slow and difficult. The neurons used for those old techniques may only be re-tasked after hundreds of hours of focused unlearning.
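To be clear, this is my armchair model, not neuroscience, but the weighted-link idea can be sketched in a few lines of Python: associations carry a weight, reinforcement raises it, "unlearning" piles on negative weight rather than deleting, and recall only surfaces links above a focus threshold (all names and numbers here are invented):

```python
class LinkMemory:
    """Toy model of weighted associations between concepts."""

    def __init__(self):
        self.links = {}  # (concept_a, concept_b) -> weight

    def associate(self, a, b, amount=0.1):
        # reinforcement: each exposure nudges the weight upward
        key = (a, b)
        self.links[key] = self.links.get(key, 0.0) + amount

    def correct(self, a, b, amount=0.3):
        # "unlearning": pile negative weight on, rather than deleting
        key = (a, b)
        self.links[key] = self.links.get(key, 0.0) - amount

    def recall(self, a, threshold=0.2):
        # only links above the focus threshold surface in conscious thought
        return [b for (x, b), w in self.links.items()
                if x == a and w >= threshold]
```

A link touched three times surfaces on recall; a phantom link touched once stays below the threshold until it is either reinforced or corrected away.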
While human categorization works well and is powerful, the true potential of the system becomes apparent when more than one person works together and shares ideas. Often times the choice is between five or six different options, and by comparing these options in parallel with more people the more agreed upon option will emerge. This would be similar to merging the results of a search from several different algorithms and weighting entries based on occurrence. Some of the text-search engines make use of parallel systems like this to make things work. Human intelligence really is a team sport, and it is the result of contributions by billions of people. While computer intelligence can endeavor to approach and simulate human intelligence, I have a nagging feeling that we'll find more break-throughs in focused problem solving using methods that are more geared to the way a computer thinks rather than the way humans think.
There may already be a few entries on bounding regions that I posted many months ago. At one point I think I described some different types of bounding region scenarios, including strict and relaxed fit scenarios. To add some math to this, a strict region fit is going to inscribe the shape that it is bounding. We'll take a look at inscribing a number of shapes inside of a bounding circle. A relaxed fit reverses the equation, with the region being inscribed inside of the shape.
I actually gave this as a challenge to Darren, but didn't really tell him how much it would aid him later. Inscribed shapes can be reduced to interpolated points along the curve of the circle. For instance, an equilateral triangle is simply three points equally spaced around the inside of the circle. With three points, we can divide the 360 degrees by 3 to get 120, so each point has to be 120 degrees apart. You can rotate the triangle anywhere in the circle by picking a different starting angle and then adding 120 and 240 degrees respectively when computing the remaining points.
startAngle = ???
offsetAngle = 360 / n
point0angle = startAngle + (offsetAngle * 0)
point(n-1)angle = startAngle + (offsetAngle * (n-1))
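The pseudocode above can be sketched as a small Python helper. The function name and signature are mine, not from the article; it just applies the start-angle-plus-offsets rule and the trig plotting mentioned below.

```python
import math

def inscribed_points(cx, cy, radius, n, start_angle_deg=0.0):
    """Return the n vertices of a regular polygon inscribed in a circle.

    Each vertex lies on the circle, spaced 360 / n degrees apart,
    starting from start_angle_deg.
    """
    offset = 360.0 / n
    points = []
    for i in range(n):
        a = math.radians(start_angle_deg + offset * i)
        points.append((cx + math.cos(a) * radius, cy + math.sin(a) * radius))
    return points

# An equilateral triangle inscribed in a unit circle at the origin.
triangle = inscribed_points(0, 0, 1, 3)
```

Picking a different `start_angle_deg` rotates the whole shape inside the circle, exactly as described above.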
Computing the locations of each point using the previously defined trig methods is easy. Plot the points, draw lines between them, and what you get is a shape of some number of points where every point touches the circle and the shape is entirely contained within the circle. What about computing the bounding circle for a given shape? Well this is a bit different, especially if the shape isn't regular. But let's see what we can do.
First, you start by taking the distance between all of the points or extremes in the shape. For a triangle, you'll be computing three edges, or the distance between all three pairs of points. The longest distance becomes your diameter; take half of it and you get your radius. The center point of the bounding circle is the midpoint of that longest line. (Strictly speaking this is an approximation: for some shapes, such as acute triangles, the true minimal bounding circle is a bit larger than the longest-edge circle, but it is fast and usually good enough.) As you start to get larger numbers of points in the shape or polygon, the number of distance checks that must occur grows quadratically. For four points you'll have to do 6 distance checks, and for five points you are up to 10.
; Bounding circle for a triangle
radius = max(d(p1, p2), d(p2, p3), d(p3, p1)) / 2
center = mid(p?, p?)
; Bounding helpers for other shapes
distance checks = n(n-1)/2
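As a sketch in Python (helper names are mine), the longest-pair rule above might look like this. Note it implements the fast approximation from the text, not a true minimal enclosing circle, and performs the n(n-1)/2 distance checks just counted.

```python
import math
from itertools import combinations

def bounding_circle(points):
    """Approximate bounding circle for a list of (x, y) points.

    The longest pairwise distance becomes the diameter, and the
    midpoint of that pair becomes the center. This does n(n-1)/2
    distance checks.
    """
    # Find the farthest-apart pair of points.
    best = max(combinations(points, 2),
               key=lambda pair: math.dist(pair[0], pair[1]))
    (x1, y1), (x2, y2) = best
    radius = math.dist(best[0], best[1]) / 2
    center = ((x1 + x2) / 2, (y1 + y2) / 2)
    return center, radius

center, radius = bounding_circle([(0, 0), (4, 0), (0, 3)])
```

For the 3-4-5 right triangle above, the longest edge (length 5) becomes the diameter, giving a radius of 2.5 centered at (2, 1.5).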
Now you should be able to inscribe any regular shape given a circle and then turn around and create a bounding circle given any regular or irregular polygon.
When you circumscribe a shape, you generally draw it such that it contains a circle using the least bounding area. Every edge of the circumscribed shape is drawn tangent to the circle so that it brushes it at a single point. You might think this process is hard, but it is actually quite easy if you recall that any regular shape can be quickly and easily drawn within a circle. That means we simply need the radius of the inscribing circle for the circumscribing shape. Confused?
Don't be. Making this realization only makes our life easier. It means that if we find a single point for the circumscribing shape, we can use that point to find the rest of them with ease. Finding a single point involves making use of the trigonometric sine rules that allow us to take any two angles and a side and get the lengths of the remaining sides. Take a look at the figure for getting the first point of a circumscribing square. We are using a square for now because it is pretty easy.
We start by making the following assertions. The angle from the center of the circle to two points on the circumscribing square is going to be 360 / n. This is called the internal angle and is the same angle we used earlier for inscribing shapes. Now, we extend the legs of a triangle beyond the radius of the circle until the third leg, drawn opposite our internal angle, touches the circle at only one point. The distance from the center to where it touches is the radius, or Cr. This will be an important given in a second.
Because the far leg is drawn tangent to the circle, the radius is perfectly perpendicular to it. That means the radius forms a right angle with the tangent line, breaking our larger triangle T(C, P1, P2) into two triangles T(C, P1, T1) and T(C, P2, T1), where C is the center point, P1 and P2 are the points on the circumscribing square we are creating, and T1 is the tangent point. By definition the radius also becomes a perpendicular bisector, splitting that internal angle of ours from 360 / n in half to 360 / n / 2. The third angle, or cross side angle, is whatever is left of the 180 degrees allowed in a triangle. We now have three angles and a side. The following sine rules now apply:
sideA / sin(angleA) = sideB / sin(angleB) = sideC / sin(angleC)
In the example, sideA will be Cr, angleA will be the cross side angle, and angleB will be 90 degrees. This reduces the sideB equation to:
Cr / sin(cross side angle) * sin(90) = sideB
sin(90) = 1
Cr / sin(cross side angle) = sideB
Frigin sweet. We have a basic equation that gives us the distance between the center of our circle and the first point in the circumscribing square. Use this distance as the radius of a new circle with the same center as the original. Now, inscribe the rest of the square within this larger circle and you are done. You've just circumscribed a square about the circle. You could just as easily have used another shape though.
Cr / sin(90 - (Internal Angle / 2)) = distance to point
Cr = 100
100 / sin(90 - (120 / 2)) = 200    ; n = 3
100 / sin(90 - (90 / 2)) = 141.4   ; n = 4
100 / sin(90 - (72 / 2)) = 123.6   ; n = 5
100 / sin(90 - (60 / 2)) = 115.5   ; n = 6
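The worked numbers above can be reproduced with a short Python sketch (the function name is my own invention):

```python
import math

def circumscribed_vertex_distance(radius, n):
    """Distance from the circle's center to a vertex of the regular
    n-gon circumscribing it: Cr / sin(90 - (internal angle / 2))."""
    internal = 360.0 / n
    return radius / math.sin(math.radians(90 - internal / 2))

# Reproduces the worked examples for Cr = 100:
for n in (3, 4, 5, 6):
    print(n, round(circumscribed_vertex_distance(100, n), 1))
```

Feed this distance back into the inscribing routine as the radius of the larger circle and you get the vertices of the circumscribing shape.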
This is some of the basic math behind creating anywhere from simple to complex bounding regions. You can see where this might come in handy if you have a complex shape that you might need to create a clipping region for. You can start by creating a bounding circle and then using it to circumscribe a bounding square that can be used for the clipping region.
Where this really comes into play is clipping systems for sprite functions. A simple or complex sprite can easily be examined to create a simple bounding region such as a circle. Depending on the application, this region may be too strict, so you can make it less strict by playing with some math and reworking the regions programmatically, rather than having to eyeball the final bounding regions. It's up to you, but hopefully this helps out a bit.
The first concept is displacement. This is the change in location between two particular points in time. If something happens between the two times we choose, we don't consider that for the equations. That means you could drive your car to work and back home, and as long as we only check the location of your car late at night, we might consider the displacement to be 0. The displacement of the arc considered at its two extremes would come out to be:
startX = 0 + radius
endX = 0 - radius
startY = 0
endY = 0
displacement = sqrt((startX - endX)^2 + (startY - endY)^2)
This is a pretty lame look at the distance travelled along the arc. We know, for instance, that the arc also changes along the y-axis, but because we only looked at the displacement at the extremes we lost all that. Let's go to the other extreme and look at the box distance. The box distance is the linear distance travelled along a single axis. You can get a good idea of the box distance through the diagram. This is also a good place to draw the difference between distance and displacement: displacement takes into account the sign or direction of movement, while distance removes the sign or direction to provide a different measurement. Under displacement, the movement along the y-axis cancels out and contributes 0, and we are right back to the first equation.
firstY = Radius, secondY = -Radius, firstX = Radius * 2
displacement = firstY + secondY + firstX
distance = abs(firstY) + abs(secondY) + abs(firstX)
The box distance is another lame approximation of how far the arc has travelled. Why? Because we look at the x and y values independently. Everyone knows that the shortest distance between two points is a straight line, and by travelling first along the x-axis and then along the y-axis separately we are going to travel further than we need to. We need to discover the actual distance to see just how bad the measure is. The actual arc distance can be found from the circumference formula, which relates the radius (Cr) to the mathematical constant pi. The arc distance for half the circle is then half the circumference.
Cr = 2
Cc = 2 * pi * Cr
arc distance = Cc/2
box distance = Cr * 4
arc distance = 6.28
box distance = 8
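A quick Python sanity check of the three half-circle measures discussed above, using the same Cr = 2 (the variable names are mine):

```python
import math

# Compare the three half-circle measures for Cr = 2.
Cr = 2.0

# Straight-line displacement between the two extremes (r, 0) and (-r, 0).
displacement = math.hypot((0 + Cr) - (0 - Cr), 0 - 0)

# Box distance: |y up| + |y down| + |x across|.
box_distance = Cr * 4

# Arc distance: half the circumference.
arc_distance = 2 * math.pi * Cr / 2

print(displacement, box_distance, arc_distance)
```

The displacement (4.0) underestimates the travel, the box distance (8.0) overestimates it, and the true arc distance (about 6.28) sits in between.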
Why is there such a large error between the box distance and the arc distance? Because rather than moving along each axis independently, we often move along them simultaneously. While we may move fractions of some distance along both the x-axis and the y-axis, the actual distance moved is much smaller than both of them taken together. If we complete the triangle using the Pythagorean theorem we can find the distance moved. We'll make use of this by splitting the arc into several separate and distinct points and then computing the distance between those points. This is called interpolation, and it will always return a final result that is smaller than the arc distance, giving us a good bound on the actual arc distance without actually computing it. In the figure we use 5 total points to compute 4 distances, or chords, around the circle.
d(L1) = sqrt((P2.X - P1.X)^2 + (P2.Y- P1.Y)^2)
d(L2) = sqrt((P3.X - P2.X)^2 + (P3.Y- P2.Y)^2)
d(L3) = sqrt((P4.X - P3.X)^2 + (P4.Y- P3.Y)^2)
d(L4) = sqrt((P5.X - P4.X)^2 + (P5.Y- P4.Y)^2)
distance = d(L1) + d(L2) + d(L3) + d(L4)
box distance > arc distance > interpolated distance
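The chord-summing idea can be sketched in Python. Unlike the figure, this version spaces the sample points evenly in angle rather than along the x-axis, and the names are mine:

```python
import math

def chord_arc_length(radius, samples):
    """Approximate the length of a half-circle arc by summing the chords
    between `samples` points spaced evenly in angle from 0 to 180 degrees.
    Requires samples >= 2."""
    points = []
    for i in range(samples):
        a = math.radians(180.0 * i / (samples - 1))
        points.append((math.cos(a) * radius, math.sin(a) * radius))
    # Sum the straight-line (Pythagorean) distances between neighbors.
    total = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        total += math.hypot(x2 - x1, y2 - y1)
    return total
```

For a radius of 2, five points give roughly 6.12 against a true arc distance of about 6.28; adding samples closes the gap while the chord sum always stays below the arc distance.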
The more samples you take when interpolating the more accurate the measure will be. There is definite proof for this using some basic trigonometric rules, but I'll leave that up to the reader. Maybe I'll toss a comment with the solution. For now, we need to finish up the arc length solutions and prove that working from an axis causes problems. The interpolation of chords proves this for us by taking the distances at the extremes and at the center of the arc. At the extremes the distance travelled becomes larger because we are moving along the x-axis at a fixed rate and along the y-axis at a variable rate.
d(L1) > d(L2), d(L4) > d(L3)
Rather than base the computation on an axis of movement, we need to instead base the computation on the distance travelled. The distance travelled is actually the arc distance. Some of my previous articles on wavy text make use of this measurement for laying out text in circles or along a wavy path. This identifies two coordinate spaces for us to work in. The first coordinate space is the circular space, where the movement of the pendulum can be described using only a single coordinate, or angle. We update the angle by understanding how the angle will affect the distance travelled; this gives us a fixed-distance movement routine. When displaying the results, we unfortunately need to convert this into a viewable space, the Cartesian coordinate plane, where the location is identified by two coordinates in place of one.
angle = 0
distanceToMove = 1
angleToMove = distanceToMove / (Cc / 360)
new angle = angle + angleToMove
x = cos(angle) * Cr
y = sin(angle) * Cr
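The angle-update pseudocode above might be wrapped up as follows (a sketch; the names are mine). It converts the linear distance into an angle change by dividing by Cc / 360, the arc distance covered per degree, and then maps the new angle back to Cartesian coordinates:

```python
import math

def step_along_circle(angle_deg, distance, radius):
    """Advance a point around a circle by a fixed arc distance.

    Cc / 360 is the arc distance per degree, so dividing the linear
    distance by it gives the change in angle. The new angle is then
    converted back into (x, y) coordinates for display.
    """
    circumference = 2 * math.pi * radius  # Cc
    angle_deg += distance / (circumference / 360.0)
    x = math.cos(math.radians(angle_deg)) * radius
    y = math.sin(math.radians(angle_deg)) * radius
    return angle_deg, (x, y)
```

Because the step is taken in arc distance rather than along an axis, repeated calls move the point at a constant speed around the circle with no slow-down or speed-up at the extremes.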
A quick wrap-up brings the following.
Updating a single axis variable to retrieve the second axis variable is interpolation. The distance travelled will always be less than the arc distance, but it is a good approximation.
By not using arc distance in the interpolation calculations, the distance moved per step will vary with time, producing a time dilation at the extremes.
The best option is to convert the values using the following rules:
Convert the distance to travel into a change in the angle using the relationship between arc distance and angle.
Convert the angular movement into (x, y) coordinates.