Note: this entry has moved.
I was trying to determine the best setup for a secondary disk to work with a virtual machine (VM), which is a must if the development experience inside the VM is to be almost on a par with developing on the host machine (without the risk of screwing up your machine with betas, etc.).
I have the usual Hitachi Travelstar 7K60 (7200rpm, 60GB), pretty much the only 7200rpm option about a year ago. Recently I bought a Seagate Momentus 5400.2 (5400rpm, 120GB) with the idea that even if it’s slower, I could use it to keep backups of VMs, music, etc.
I had some doubts about the performance of my ThinkPad T43p’s UltraBay for a secondary disk. At some point, I felt that putting the disk in the UltraBay was slower than using it in the external USB enclosure. So I ran some tests with the two disks in both the USB and UltraBay configurations, using PassMark’s PerformanceTest 6.0 software, which is supposedly pretty good at measuring various performance indicators.
Results were surprising:
In a nutshell
- A secondary disk in the UltraBay outperforms USB2 consistently.
- The Hitachi Travelstar 7K60 is showing its age, and even the “slower” Seagate Momentus 5400rpm outperforms it consistently by a substantial margin.
So the disks have switched roles now: the 5400rpm one for VMs, the older 7200rpm one for music/backups :o)
Note: this entry has moved.
I want DivX playback. I want a decent screen size. I want easy photo slideshows. I want a single device.
Turns out there's no easy answer for that combination. The PSP entered the scene thanks to Scott, and now the choice is even harder.
On the one hand, among traditional portable DVD players, there’s the excellent Philips PET1000, which is DivX certified and has a very cool design and a “huge” 10’’ screen. It’s pretty much the only one with DivX playback. But photo slideshows require you to burn a CD :(. No way to just take the ultra-fast 2GB CompactFlash card out of my Canon Rebel and show the family.
Then comes the PSP, which is also impressive, even cooler, and has the games bonus, but it only has a 4.3’’ widescreen display. I could put movies and music on a couple of 1GB Memory Sticks, but I’d have to do a conversion from my CF card :(.
Finally, there’s a digital frame which is basically an LCD with a multi-format card reader that can also display video. Sounds like what I need, but its screen resolution is far from optimal and it’s only 8’’ (although bigger than the PSP for sure).
And it all started with the idea of having a big picture (i.e. the size of the really cool 24’’ wide Dell monitor) hanging on a wall in our living room, doing a slideshow of pictures coming from a CF or SD card or over Wi-Fi. Then, not finding a big enough “picture frame”, I started dreaming of the single device.
In the end, I think I’d need to get a big LCD display, an embedded Linux board, a compact Wi-Fi chip or something like that, and a multi-card reader, and somehow put it all together and make it look good (or not visible at all, by hiding the duct tape behind the display ;)). I wonder how come nobody has come up with such a device yet… (or I missed it, and I’d certainly like to know about it!!!)
Note: this entry has moved.
What follows are some thoughts on authoring guidance that we have learned through practical experience. They apply to both the DSL and GAX toolkits.
Developing guidance should be an iterative process, and we're still exploring it and how it fits into the overall development process. Intuitively, and based on previous experience building several guidance packages, I'd say the process goes more or less as follows:
Phase 1: Define End Product
1. Use intensive TDD and short iteration cycles to develop the end "product" you'd like to guidance-enable (i.e. code, application structure and architecture, etc.)
2. During step 1, you will end up with a process that takes you from a scenario/use case to an implementation using the architecture/structure you designed (ideally via TDD)
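For instance, step 1 might produce tests like the following minimal NUnit sketch. The `Order` and `OrderRepository` types are hypothetical stand-ins for whatever end "product" you are actually designing; the point is that the design is proven through tests before any guidance automation exists:

```csharp
using NUnit.Framework;

// Hypothetical end "product" under design; in practice this would be the real
// code/architecture you want the guidance to eventually generate.
public class Order
{
    public int Id;
    public string Customer;
    public Order(string customer) { Customer = customer; }
}

public class OrderRepository
{
    private int nextId = 1;
    public void Save(Order order) { order.Id = nextId++; }
}

[TestFixture]
public class OrderRepositoryFixture
{
    // Step 1: drive the design with tests, so the scenario-to-implementation
    // process is exercised and proven before writing any recipes or templates.
    [Test]
    public void SaveAssignsIdToNewOrder()
    {
        OrderRepository repository = new OrderRepository();
        Order order = new Order("ACME");

        repository.Save(order);

        Assert.IsTrue(order.Id > 0, "Saving should assign an id to the new order");
    }
}
```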
Phase 2: Define Guidance Process and Flow
3. From that process, figure out what the variability points are: where the user should be involved in decisions that affect what the code/solution looks like, as well as the dependencies between them (the user does A and only then can do B)
4. Understand and clarify the roles, personas, concerns and use cases the guidance should express
5. Based on the previous two findings, define the launch points for those user interactions (recipe/template/DSL launching points)
6. Mock up the recipes and UI required for the entire package, and document the steps and input required for each (a small sketch of such a catalogue follows this list)
7. Analyze the mock-ups, walk through them, and evaluate whether the input information, the process flow, and the launch points follow a natural progression that is likely to be intuitive enough for users. Also, think about input that may be missing but is needed to get from there to the end result
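To make steps 3 through 6 concrete, I find it useful to jot down each variability point together with its launch point, required input, and dependencies. A minimal sketch of such a catalogue follows; these types are just a planning aid I made up for the mock-up stage, not part of GAX or the DSL tools:

```csharp
// Hypothetical planning aid for steps 3-6: one entry per variability point,
// recording where it is launched from, what the user must supply, and ordering.
public class VariabilityPoint
{
    public string Name;            // e.g. "Choose persistence strategy"
    public string LaunchPoint;     // e.g. "Context menu on the data access project"
    public string[] RequiredInput; // what the recipe/wizard must gather from the user
    public string[] DependsOn;     // points that must be completed first (A before B)
}

public class GuidanceFlow
{
    // The ordered catalogue reviewed in step 7 to check the flow feels natural.
    public VariabilityPoint[] Points;
}
```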
Phase 3: Implement Guidance
8. Finally, add the actions to the recipes so that they generate the code
a) Optionally, the recipes may generate tests that exercise the features in the end result (code/application). I say this is optional because this process should have been previously exercised and sufficiently proven during steps 1 and 2. Generating both the code and its tests from the same recipe is not TDD at all, and if steps 1 and 2 were done well, it may be a waste of time, as you already know that the code you emit will adhere to the architecture and design principles outlined there.
9. Test the generated code and the whole process. Once this is OK (or if you have spare time in the meantime), do the next step.
10. Improve the UI by adding type converters and UI editors
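As an example of step 10, a recipe argument can present a drop-down of valid values in the gathering UI by attaching a standard .NET type converter to it. A minimal sketch follows; the converter name and the list of values are hypothetical, and how it is wired to a particular argument depends on your package:

```csharp
using System.ComponentModel;

// Hypothetical converter for a recipe argument: offering a fixed list of valid
// values turns a free-form textbox into a drop-down in the wizard/property UI.
public class TargetDatabaseConverter : StringConverter
{
    public override bool GetStandardValuesSupported(ITypeDescriptorContext context)
    {
        return true;
    }

    public override bool GetStandardValuesExclusive(ITypeDescriptorContext context)
    {
        return true; // only the listed values are allowed
    }

    public override StandardValuesCollection GetStandardValues(ITypeDescriptorContext context)
    {
        // In a real package these values might be discovered from the solution instead.
        return new StandardValuesCollection(
            new string[] { "SqlServer2000", "SqlServer2005", "Oracle" });
    }
}
```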
If you skip Phase 1, the optional step 8.a becomes more important, but I think it is a far worse testing approach and will never be as comprehensive as a well-done Phase 1.
Phases 1 and 3 are likely to consume the most time. I believe that in the majority of cases (unless you already know very well what you want the guidance to do up front, or the scope is well defined and not too big), the former will take longer than the latter. However, depending on the nature of the tooling part, and on the complexity, lack of documentation, unforeseen bugs and issues with VS, integration with beta-quality products, etc., Phase 3 may become a big part of the work too (we faced this a number of times, when something that was supposed to take a few hours ended up taking at least a couple of days chasing a bug or erratic behavior in VS). Any kind of integration with VS is bound to remain somewhat unpredictable on that front for the foreseeable future, I'm afraid :(
What raises the bar considerably for Phase 3 is introducing meta-guidance into the picture (i.e. you want a guidance package that helps people build guidance packages in a certain area).