Apple releases its MacPaint source code to the Computer History Museum

This was a fun article I just ran across at Business Week:

http://www.businessweek.com/technology/ByteOfTheApple/blog/archives/2010/07/apple_donates_macpaint_source_code_to_computer_history_museum.html

I used MacPaint on the Mac Plus, and I would definitely call it revolutionary.  Yes, the Amiga and several others came out with competing or better products, but for me, new to the Mac world and coming from the IBM PC clone world, this was amazing. 

And, in contrast to one commenter who said it was a neat demo and nothing more, we used this in our desktop publishing all the time.  We could finally create and manipulate graphics and logos for newsletters, business cards, etc.  For a small business it really made us stand out.  Hardly anyone in the mid-’80s had this capability so cheaply; it usually required a large print house with expensive machines.  Now we could put a real professional touch on customer documents.  B/W art made up probably 90% of print output at the time, so having a color-capable utility didn’t offer much when it came to hard copy.  MacPaint really allowed us to push the boundaries.  With a Mac SE/30 and an Apple LaserWriter II we were producing high-quality professional documents for clients for less than $7,000 in the late ’80s, which at the time was simply amazing.

I’m sure others out there could have done something similar with Amigas, PCs, etc, but for us this was a game changer. 🙂

I love seeing recaps on older history like this, for any company, not just Apple.

A Great Resource for Different Strategies on Concatenating SQL Results

In every DBA’s career, I think, having to concatenate results happens at least a few times.  Probably more than we like to admit, because we tend to live in table-land.  :)  However, there are those occasions, usually driven by some downstream requirement to format output.  Now, I know that formatting should be handled by whatever data-viewing method you are using, but sometimes that just isn’t possible or practical.  Other times we may just need to transform data from one system to another, and that other system is not as normalized as the tables you are working with.

Like I said, I do it fairly infrequently, so I never remember the best way off the top of my head.  I usually end up looking at how I’ve done it in the past.  I started thinking that there may be better ways than some of the convoluted strategies I’ve found in previous solutions.

Trusty Google sent me here:

http://www.projectdmx.com/tsql/rowconcatenate.aspx

It’s an incredible (though certainly not exhaustive) list of ways to deal with this depending on your need.  I like XML and chose to go with simplicity, so for this particular task I went with Eugene Kogan’s “blackbox XML” method.  It’s only a few lines, and if you are familiar with XML and SQL it’s not that hard to understand.
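For future reference, the core of that approach (as I understand it) is the FOR XML PATH('') trick: with an empty element name, the correlated subquery’s rows collapse into a single string, and STUFF strips the leading comma.  A minimal sketch, with hypothetical table and column names of my own:

```sql
-- Hypothetical tables: Districts(districtId), Schools(districtId, schoolName).
-- For each district, concatenate its school names into one comma-separated string.
SELECT d.districtId,
       STUFF((SELECT ',' + s.schoolName
              FROM Schools AS s
              WHERE s.districtId = d.districtId
              ORDER BY s.schoolName
              FOR XML PATH('')),   -- empty element name = bare concatenation
             1, 1, '') AS schoolList  -- STUFF removes the leading comma
FROM Districts AS d;
```

One caveat I noticed in the article’s discussion: in this simple form, characters like & and < come back XML-escaped, so watch out if your data contains them.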

I’ve definitely bookmarked this for later reference!

We’re all still kids, just with bigger toys

I just saw this article:

http://www.tomshardware.com/news/SFF-RAID-HDD,10319.html

It’s an amazing feat by Will Urbina.  He has an amazing tool shop and the knowledge to use it.  He custom-built his own small form factor case that holds eight 2 TB drives for a total of 14 TB RAIDed.  Pretty amazing.

I would count this as a great version 1.0 product.  The two changes I would make for v2.0 would be:

1) Accessibility: I would mount each drive into a removable tray. Changing the drives out is going to be a bear. Having removable trays would not only make this a snap but also allow for hot swapping, a real help when dealing with RAID failures. If you can get a tray with built-in heat sinks for the drive, that leads nicely into my next recommendation.

2) Heat: The heat issue should easily be solved by putting heat sinks on the drives (or heat-sink trays as mentioned above). Then simply seal the box, put vents on the front left side and channel the air from the front left, across the drives to the right, and out the back. The drives will need more spacing to accommodate the trays and airflow, but maybe you could switch to 2.5" drives. This will definitely take some engineering, especially to work around removable drive trays, but proper sealing and a good fan in the back would give good airflow. You may need to increase fan speed or add additional fans to the front intake.

 

Awesome idea.  I have a few tools in my tool shop, some of which were gifts, others bought new for projects, and others picked up on Craigslist or at garage sales.  With only a two-car garage and three young kids I have no workshop. :)  But I have the dream of slowly adding to my tools each year and someday building a sizable work shed in the back.  I would love to be one of those dads that has tools and builds things with his kids.  I always envied friends who had this, with their dad’s shop at their disposal for inspired ideas or school projects.

Ideas like this keep me hoping that this will be a reality someday.

Great job Will!

Spell-check, suggest as you type, etc – Are we efficient or just lazy?

I’ve been using spell-check since the day it became available to me years ago on my first computer and word processor, a Victor 9000 with MultiMate.  That was back in 6th grade for me, and it continued throughout my academic career.  I firmly believe that my inability to spell complex words off the top of my head is directly related to the fact that I didn’t have to know how to spell all the time.  I used a computer for that.

Technology progressed from there and soon Microsoft had Word correcting misspelled words as we typed.  Soon they were offering grammar suggestions too (although not very well in the beginning).  Then they started correcting my tenses for me automatically, along with capitalization, fixing my case and turning off my Caps Lock key if I suddenly started typing a sentence like “yESTERDAY I WENT TO THE STORE.” 

Now, sites like Google and such actually “guess” at what I’m trying to say, making it so I don’t even have to type my entire thought.

It’s all about saving time and making us more efficient, but are we just being lazy?

Honestly, I love how far we have come.  Non-audio communication (in my experience, anyway) has always been drastically slower than audio-based communication.  I can put forth a concept talking with someone a hundred times faster than if I were to write it in an email or in a memo by hand.

In school, before computers were mainstream, I handed in all my written homework done on a computer.  In fact, if I was given a homework paper to fill out, I would either duplicate it on a computer or type it out on a typewriter.  I wasn’t trying to be neat; I just hated handwriting.  It was too slow! 

So, rather than lazy I’d call myself impatient.  Do I take this for granted?  I think I actually take advantage of it.  It’s not a crutch but a feature.  I know Word will correct my capitalization.  I know my iPhone will add a period and turn my Shift key on when I enter a double space, signifying the end of one sentence and the start of another.  So, I simply don’t do this anymore.  It’s actually funny watching me type in a plain-vanilla word processor like Notepad or in an online web form, because I see how much I have come to rely on the built-in optimizations.  I actually anticipate and take advantage of the fact that my typing is being corrected for me.  Why do I need to bother holding down the Shift key at the beginning of every single sentence?  I know the software will do it for me, so that’s one less key I have to hit every time.  Why do I have to type the apostrophes in words like won’t, you’re, or I’ll?  There is no other word these could possibly be, so my iPhone puts them in for me.  This is a speed-boost feature, not a tool for inept typists.

In other areas of life we don’t even think about this.  For instance, in the development world we have development environments that allow us to write lines of code with only a few keystrokes.  The software makes suggestions as we type and even makes recommendations on how to clean up our code.  This isn’t considered lazy; it’s considered a feature, because it makes us not only faster programmers but helps us write more consistent and higher-quality software.  This is an investment by our employers.  They don’t want to pay us to type mundane lines of code when we don’t have to.  They’d rather spend a few hundred extra dollars on our tools and pay us to think, get their product to market and start selling it faster, with a higher rate of quality.  They don’t think of it as giving us tools to make us lazy but as a way to get a better return on their investment.

Enter the iPhone and its text input system.  Since the keyboard is entirely touch screen and can be a little small, it is quite common that you’ll actually hit a letter adjacent to the one you meant to type.  So, what does Apple do to help?  Every word you type is checked against a dictionary.  If it doesn’t recognize the word, it attempts to find a match using all the letters adjacent to the ones you typed.  So, for instance, if I accidentally type “hekko” it might suggest “hello” as an alternative.  Other neat features are like the ones I mentioned above.  Since the screen has limited space, unlike a full-sized keyboard on a computer, they attempt to maximize the space they have and minimize the amount of context switching.  By context switching I mean changing from letters to numbers or symbols, typing punctuation, using international characters, etc.  For instance, if I am filling out my email address they put the @ sign as one of the keys on the main screen.  I have to use an @ sign every single time I type an email address, so why not put it on the main keyboard with the rest of my letters when entering email addresses?  When I am typing in a web address they have a “.com” button.  What if I want to go to a .net or .org address instead?  If I hold down the .com button, after a brief pause it opens up and lets me drag to any number of common suffixes, such as .org, .net, .gov, etc.  There are tons more, but you get the point.

I never realized how awesome these little changes might be until I got my own iPhone.  I type on it all the time and they are life savers.  What the iPhone doesn’t do (as highly criticized by users and iPhone opponents alike) is spell check.  I completely agree and expect Apple to remedy this shortly.  Why a device that can predict what I meant to say, take video, allow me to find where I last parked, count my food intake, suggest movies near me, etc. can’t even offer to spell-check my words in this day and age is beyond me.  But that’s another story.

Now, if you’ve seen the recent Samsung commercials, there is a new texting technology on their Omnia phones called Swype. It allows you to type simply by dragging your finger rather than physically pushing down and then lifting up your finger on each button.  Is this faster?  Well, that all depends on your typing style and comfort but I could see this being a game changer for those that like it.

In the end, what’s the best?  Well, that’s all relative, but for my money it seems like we have a lot of good ideas, all going in opposite directions.

Why can’t I have a Swype input that suggests as I type, corrects my spelling if I hit an adjacent letter, and corrects my spelling if I misspell a word in a common way or use a grammatically incorrect tense or plural?  That would be the best.  Combine Swype with the iPhone and Microsoft and I’d be set.  If I am on a standard computer with a full-sized keyboard, some of the options like Swype no longer make sense, but I still like the double space that converts to a period and turns my Shift key on, among many others. 

That’s where my money is.  Hopefully it won’t take too long.  I’m sure there will be patent wars, but in the end hopefully we users will get the benefit of all these typing optimizations working together.

Building a .Net 3.5 Web App on Windows 2000 Server with only .Net 2.0

I am upgrading an older web app of ours, as I referenced in my last blog post.  This was originally a straight HTML app with no dynamic content at all.  I created a .Net ASPX web app out of it and used LINQ to quickly and easily create a survey form that our users could fill out.  It worked great on my machine.

Unfortunately the happy ending got derailed when I deployed it to our web server.  Our web server is an ancient Windows 2000 Server box with IIS 5.  This is because it’s where all our main apps are housed; everything works and there is great fear in changing it.  <sigh>

So, I either had to figure out how to get my .Net 3.5 app running on IIS5 with .Net 2 or I had to abandon LINQ and go back to data readers (yuck!).  I first tried to install .Net 3.5 on the web server but quickly found out that it requires Windows XP or Server 2003 as a minimum.  OK, so that’s ruled out.

I knew that the ASP.NET runtime has always been 2.0 (until the new release of 4.0, that is); .Net 3.0 and 3.5 just added extra features on top of 2.0 but never changed the underlying base classes.  So you can run .Net 3.5 apps on a .Net 2.0 web server.  In fact, this has caused a lot of confusion, because there simply is no 3.0 or 3.5 selection in IIS for the .Net framework.

I knew that if I could just reference the required .Net 3.5 DLLs then this should work.  A quick search on Google led me to this great article.  I was wondering if something like this was possible and, sure enough, it pointed me in the right direction.

 

Here is what I did and it worked like a charm.

I first set my build target for the web app in Visual Studio 2008 to .Net 2.0.  This caused VS 2008 to instantly remove any references not compatible with .Net 2.0, such as LINQ.  I did a build and received numerous errors, most pertaining to my code that made use of LINQ.

I copied the System.Core and System.Data.Linq DLLs into my web app’s bin folder and referenced them.  After another attempt to build the solution, the LINQ errors went away, but it still didn’t understand my lambda expressions or my auto-properties.  This makes perfect sense: these are compiler features, not referenced code.  Since, by default, ASP.NET compiles on the server, the server’s compiler has to understand them.  I could change the auto-properties back to normal properties, but there is no lambda equivalent in .Net 2.0 that would rescue my LINQ queries.
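To illustrate the distinction (this is a sketch, not code from the actual app): a lambda is pure syntax that the C# 3.0 compiler desugars, so the C# 2.0 compiler that ASP.NET uses on a .Net 2.0 server simply can’t parse it, even though the compiled result would run fine on the 2.0 CLR.  The closest thing C# 2.0 offers is the older anonymous-method syntax, which works for plain delegates but doesn’t bring back the LINQ query operators themselves:

```csharp
using System;

class LambdaSketch
{
    static void Main()
    {
        // C# 3.0 compiler feature: a lambda expression.
        // The C# 2.0 compiler used for on-server compilation can't parse this.
        Predicate<string> isLong = s => s.Length > 10;

        // Rough C# 2.0 equivalent: an anonymous method. The old compiler
        // understands this, but it doesn't help with LINQ query operators.
        Predicate<string> isLongOld = delegate(string s) { return s.Length > 10; };

        Console.WriteLine(isLong("hello world!"));  // True
        Console.WriteLine(isLongOld("hi"));         // False
    }
}
```

That’s why moving the LINQ code into a project that keeps the 3.5 build target (below) was the path of least resistance.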

So, I created a new project targeting .Net 3.5 and moved all the LINQ code into it.  Having my data access classes in a separate project felt much cleaner and probably would have been an eventual refactoring anyway.  I removed this code from the web app and added a reference to the new project.

Ran a build and received the welcome success message.

I then deployed the web app to the web server.  Upon opening one of the new pages, which runs a LINQ query to obtain some data to populate a drop down list, I received the following error:
Could not load type ‘System.ComponentModel.INotifyPropertyChanging’ from assembly ‘System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089’

After some googling it turns out that INotifyPropertyChanging wasn’t introduced until .Net 2.0 SP1.  Sure enough, our web server had 2.0 but no service packs.

I installed .Net 2.0 SP2 and everything worked great!

 

I am in the process of redesigning our entire department’s website, and that is all built on MVC and several other current technologies.  I have another web server running 2003 for that.  I might miss out on some of the newer IIS 7 features, but .Net 4 runs on it just fine, so at least this is a major step forward.

Take care!

ReSharper destabilizing my runtime? Huh?

I’m adding some new features to an older ASP.Net Web Forms app we have.  Fortunately I can take advantage of .Net 3.5, though not MVC because it’s on a Windows 2000 server with IIS 5.  Anyway, this allows me to use LINQ, which has been incredible.  Creating the mapping and using LINQ queries is leaps and bounds more convenient and less prone to error than our older data readers and such we used in the past.

I am more in the TDD camp and like to manage my own data bindings so I don’t like to use the LINQ model designer.  Instead of dragging and dropping tables onto the designer and letting LINQ create the classes for me in the background I create my own concrete classes and then put attributes on the classes themselves.  I know, some may not like putting attributes in their code because they argue that it makes their objects dependent on the database framework rather than being completely database agnostic.  However, 100% of our apps make use of Microsoft SQL and I don’t see that ever changing in the near future.  So, rather than go through the overhead and complexity of a truly database agnostic ORM setup I’ll happily add a few attributes to my code that plainly show exactly what’s going on.

In this particular case we have parents and students that will be filling out a survey based on when they graduated from our programs.  Because we serve multiple districts, each with multiple schools, I ask the user to select which district and school they graduated from.  Thus, I have a SchoolDistrict class which contains a List of School classes.  In LINQ you can map a relationship using the Association attribute.  I got the details and examples from Microsoft’s MSDN article here: http://msdn.microsoft.com/en-us/library/bb386950.aspx

Here was my initial code for SchoolDistrict:

using System.Collections.Generic;
using System.Data.Linq;
using System.Data.Linq.Mapping;

namespace App_Code.Entities
{
    [Table(Name = "vw_Table_D_Current_SchoolDistricts")]
    public class SchoolDistrict
    {
        private EntitySet<School> Schools;

        [Column(IsPrimaryKey = true)]
        public string districtCdsCode { get; set; }

        [Column]
        public string name { get; set; }

        [Association(Storage = "Schools", OtherKey = "districtCdsCode")]
        public List<School> schools
        {
            get { return new List<School>(Schools); }
            set { Schools.Assign(value); }
        }
    }
}

 

Here was my initial code for School:

using System.Data.Linq.Mapping;

namespace App_Code.Entities
{
    [Table(Name = "vw_Table_D_Current_Schools")]
    public class School
    {
        [Column(IsPrimaryKey = true)]
        public string schoolCdsCode { get; set; }

        [Column]
        public string name { get; set; }

        [Column]
        public string districtCdsCode { get; set; }
    }
}

 

Everything worked great and I was rolling.  Then I had ReSharper clean up a few things, and I took some of its suggestions on optimizing my code.  I made a few other changes and then, all of a sudden, I got this when I ran the code:

System.Security.VerificationException: Operation could destabilize the runtime.

Ugh!  What’s that???!!

After a lot of Googling I had come up with nothing.  Most of the fixes I saw related to mapping IEnumerables to IQueryables, covariance vs. contravariance, etc.  Nothing seemed to fit my particular scenario, and none of the fixes worked.  I went over my code with a fine-tooth comb, making sure my mappings were still correct, making sure the capitalization in my attributes was not fouling things up, etc.  I came up with nothing.

 

So, I went back to square one.  I looked at the Microsoft example again, and I found one modifier that was different.  ReSharper had noted that I could mark the Schools EntitySet field in my SchoolDistrict class as readonly.  I removed this modifier and, voila, everything worked.  Sure enough, ReSharper again started suggesting that I mark it readonly.  I did, and my code broke again.  I had found the culprit.

I had not seen this mentioned anywhere on Google, and I guess, in hindsight, most people wouldn’t mark this field as readonly.

I don’t always take ReSharper’s suggestions but this is the first time that ReSharper actually broke my code.  :(  Truth be told, ReSharper is a tool and it’s only as good as the one wielding it.  If I let ReSharper perform invalid operations that’s not ReSharper’s fault but my own.  However, I still didn’t like that I may not remember this key piece of information in the future.

So, I told ReSharper to ignore this in the future and wrote a comment as to why.  That cluttered up my code and it now looks like this:

// ReSharper disable FieldCanBeMadeReadOnly.Local
// If made readonly, this causes an "Operation could destabilize the runtime"
// exception when the query is evaluated.
private EntitySet<School> Schools;
// ReSharper restore FieldCanBeMadeReadOnly.Local

Wow, that is ugly!  I hate having to put comments in my code because of dependencies on frameworks.  Now I have a comment that’s dependent on an IDE tool, not even something required for the app itself!  That’s just plain ugly and a major code smell if you ask me.  However, that’s just the way it is for now.  The app works and I’ll go on with my life.  It’s not perfect, but I don’t want to spend a day figuring out a better way.  If someone posts a better suggestion or I find something in the future then I’ll revamp it, but for now, I’ll get the app out the door and life will go on.

I hope that this helps someone else out there that may run into this issue.

In the end ReSharper is one of the best tools I have in my toolbox, number 2 right behind Visual Studio itself.  It’s incredible and I can’t imagine coding without it.  It truly has helped make me a better developer than any other tool I’ve used to date (again, after VS).

Take care all and happy developing!

Microsoft Intune – The Beginning of Small Business IT Management in the Cloud

Microsoft just released information regarding their new cloud management service for small organizations, Microsoft Intune.  You can read about it on their blog post here.

It’s geared towards smaller companies with between 25 and 2,500 PCs that may not be able to afford a standard IT infrastructure and server deployment.  Honestly, with some of my clients using SBS 2003 with a decent IT consultant (me :)), companies with as few as 15 machines can easily make use of the standard Microsoft infrastructure.  If you’re beyond 100 PCs, I don’t know how you would ever manage effectively without Windows Server, Active Directory and many of the management tools such as WSUS and a managed virus/malware setup.  But that’s beside the point.

What is Microsoft Intune and what does it do for you?  Here are the basics:

  • Manage PCs through web-based console: Windows Intune provides a web-based console for IT to administrate their PCs. Administrators can manage PCs from anywhere.
  • Manage updates: Administrators can centrally manage the deployment of Microsoft updates and service packs to all PCs.
  • Protection from malware: Windows Intune helps protect PCs from the latest threats with malware protection built on the Microsoft Malware Protection Engine that you can manage through the Web-based console.
  • Proactively monitor PCs: Receive alerts on updates and threats so that you can proactively identify and resolve problems with your PCs—before it impacts end users and your business.
  • Provide remote assistance: Resolve PC issues, regardless of where you or your users are located, with remote assistance.
  • Track hardware and software inventory: Track hardware and software assets used in your business to efficiently manage your assets, licenses, and compliance.
  • Set security policies: Centrally manage update, firewall, and malware protection policies, even on remote machines outside the corporate network.
  • Licensing to upgrade all your PCs to Windows 7 Enterprise.  Includes all applicable upgrades to the latest Windows as well as downgrades while you are under the subscription.

Intune is only in beta at the moment.  You can sign up here until May 16th.  It isn’t scheduled for production release until next year.  At that time it will be a subscription-based service, most likely on a per-PC basis. 

A few things of note:

  • The tracking of hardware and software would be nice.  I don’t know if this only tracks PCs or if it also tracks hardware like printers and network appliances, and I’m not sure if it tracks non-Microsoft software.  We’ll have to wait and see how thorough their system is.
  • Setting of security policies seems to be limited to templates that affect settings like Windows Firewall, updates, etc.  It doesn’t seem to be a full-fledged Active Directory Group Policy infrastructure. 
  • Allowing the upgrading of all of your PCs to Windows 7 Enterprise is a pretty great deal.

Not a replacement for Small Business Server

I don’t see this as a replacement for SBS.  Honestly, I don’t really see anything that can’t already be accomplished by a decent network setup from an IT consultant, one you don’t have to pay a monthly fee for.  You still have to have someone knowledgeable (or your IT consultant) to handle the setup and monitoring of Intune, so you aren’t getting rid of your IT guy, just adding a management layer on top of your current network.

What does SBS do that Intune doesn’t?  Pretty much everything else.  It gives you a full-fledged AD infrastructure, user/group/hardware authentication and authorization, shared resources such as folders and printers, Exchange, SQL Server, IAS, etc.

Microsoft already makes Exchange available as a subscription-based service, though I don’t know if this is technically in the MS Azure cloud yet.  Azure is also starting to handle the SQL space. 

I think Intune will really be able to fill the small business space when I can have an SBS server locally to handle shared resources and local caching of my AD/DNS, but then offload everything else to the cloud: licensing management of all my MS products (Windows, Office, etc.), AD management, GPO management, intranet, and so on.  Then this might really be a full-on solution that I could see businesses shelling out $50 per computer annually for.

So, am I signing up for the beta?  Yeah, why not.  I’d really like to see how this works out and where it’s headed.  One of my clients is due to renew their annual license for their virus vendor and we haven’t been that happy lately with the product.  So, this will give us a chance to try out the Microsoft offering for little cost (if anything) and see if this really lets me manage the network better.  Having the remote access through Silverlight will be nice.  That way I don’t have to remote into the server and then remote from there.  Until I see actual estimates on licensing though I will be hesitant to upgrade the PCs to Windows 7.

Google Cloud Print – Neat idea

I just read this article on Mashable:

Google Cloud Print Reveals the Future of Printing

 

This could eventually be combined with Google maps and location-aware apps.  Then, if I am on my mobile device and I need to print a document quickly I can pull up a map and find all the nearest public printers that I could use, such as from Kinko’s, Staples or other office type stores near my current location or my destination. 

 

That would be sweet.  I’m on my iPhone, get a document from a client that needs to be signed, I review it on my iPhone and then print the signature page near our meeting.  I pick it up, sign it and deliver it on the spot.  Cool!

Fixing corrupted Word 2007 docs using DiskInternals’ Zip Repair

A colleague recently came to me because a Word 2007 document she had been working on for a long time would no longer open.  When we attempted to open it, Word stated that the document was corrupted.  It prompted us to use the built-in repair tool, but that was unable to fix the problem.

Knowing that a Word 2007 document (actually, any Office 2007 document) is just a zip file containing XML files, I attempted to open the file using 7-Zip.  At the very least I was hoping we could extract the raw text and my co-worker could just reformat it.  While 7-Zip could view the archive, it reported that it was unable to extract most of the files.  This included the actual XML file holding the text, so we were still out of luck.
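If you’ve never poked inside one of these files, here’s a quick sketch that proves the point.  It builds a toy “.docx” (just a ZIP with the main text part inside, using a hypothetical file name) and then lists its contents the same way you could list a real document:

```python
import zipfile

# Any Office 2007 document (.docx, .xlsx, .pptx) is just a ZIP archive
# of XML parts. Build a toy one, then list it as you would a real .docx.
with zipfile.ZipFile("toy.docx", "w") as z:
    z.writestr("word/document.xml", "<w:document/>")  # the main text part

with zipfile.ZipFile("toy.docx") as z:
    names = z.namelist()

print(names)  # ['word/document.xml']
```

In a real document you’d also see styles, settings, relationship files, etc. alongside word/document.xml, and that document.xml is the part we were hoping to salvage.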

So, where do you turn when you are out of ideas?  Google of course.  I searched for “zip repair” and DiskInternals’ Zip Repair utility was the first on the list.  Fortunately this is a free program so I thought I’d give it a try.

It only allows you to select .zip files so I had to change the extension from .docx to .zip.  That’s my only complaint, however, and it’s arguably a small one.  Once I did that and ran the utility it reported that it successfully repaired the zip file.  Wow! 

OK, but I’m one of those guys who has to see it to believe it.

I changed the extension back to .docx and attempted to open it in Word.  It again reported that the document was corrupt and prompted to run the repair utility.  However, this time upon running the repair it was able to open the document with full text and formatting.  Wonderful!  I saved it into a new document and emailed it back to her.  She was ecstatic. 

So, definitely a +1 and recommendation for DiskInternals’ Zip Repair utility.  Give it a try.  It’s great and worth far more than the price.

Metroid Prime Trilogy on the Wii – What a fun game!

I don’t know if you ever played any of the Metroid games on the classic Nintendo.  I did and loved them.

I just picked up the new Metroid Prime Trilogy for the Wii.  It’s actually the first two games of the trilogy, which came out on the Nintendo GameCube, plus the final chapter that came out on the Wii.  The GameCube versions were revamped to use the Wii controllers.

I have to say, this is one of the most fun games I’ve played in a long time.  I’ve played a few first-person shooters on the Xbox or PS2, but the controls just haven’t been nearly as comfortable as the keyboard and mouse I’m used to.  I still haven’t tried The Force Unleashed on the Wii, but with Metroid I feel like this is the first time they have gotten it right.  You use the Nunchuk for movement and the Wii Remote for targeting and viewing.  Buttons on both controllers facilitate the myriad of actions you can perform.  Honestly, this setup is more natural and fun than any I have ever used on a console or computer.

The storyline and gameplay are really a lot of fun.  If you’re a fan of the older series, then seeing familiar characters and hearing the music will bring back a lot of fond memories.  Sometimes when 2D games get brought into 3D it’s done in a very kiddie or cheesy way, or it just doesn’t translate well at all.  However, seeing my old 2D side-scrolling nemeses in full-blown 3D, moving around in live space, is just awesome.  They aren’t bloated cartoony characters (like many Nintendo games turn out to be) but realistic representations of what you would expect to see.  The scenery is great and the worlds are really well designed.  Between the gameplay, world design, storyline, attention to detail and music, this is really a top-notch game you can get totally engrossed in.  I’ll start playing at 9pm and Eva finally calls me to bed at 11pm without me even realizing what time it is. 

I’m still only at the beginning of the first game.  At $50 it’s a pretty standard price for a Wii game, but since there are three in the pack I really feel like I got my money’s worth.  Like I said, I’m still on the first chapter, which is really a GameCube game, but it’s still pretty awesome.  I can’t wait to make it to the final third chapter that was developed for the Wii from the ground up.