ESRI 2008 UC: Plenary Session

The main presentation by Jack Dangermond was pretty good, as was last year’s.  It was a good mix of what’s new in ArcGIS 9.3, what users are doing in the field, what’s coming on the horizon, and the overall impact GIS is having on the world.

From a technical point of view, here are just a few of the new features in 9.3.  There are way too many to cover here, so take a look at the ESRI site if you want a comprehensive list.

  • Reverse Geocoding
    This has been a long time coming and a hot item on the request list.  It wouldn’t warrant much discussion, except that the implementation was really well done.  Typically (from my point of view) ESRI software is functional, but wouldn’t win any UI or productivity awards for most of its features.  The buttons or dialogs for typical tasks aren’t always where you expect them, and sometimes you have to drill down through 10 screens just to get to your data.
    The reverse geocoding tool, however, is very easy to use: your cursor becomes a crosshair with a small dot that snaps to the nearest point on your street network.  Clicking very quickly gives you a geocoded address at that point, and if you click on an intersection you get the intersection of those streets.
    These reverse-geocoded addresses can be saved as pushpins, which I believe can later be saved as features.  Very nice!  (There’s a rough sketch of the equivalent ArcGIS Server REST call after this list.)
    • Oh, I just stepped into the Geocoding Intro technical workshop.  Now when you run a geocoding process, if there are any addresses that could not be geocoded, in addition to the standard options for fixing them you can use Pick Location.  This lets you use the Reverse Geocoding tool to pinpoint exactly where on the map the address should be.  This is great, as it might be difficult, if not impossible, to change the address in the source data.
  • KML Export
    As the rest of the world jumps online, ESRI, arguably the largest provider in the GIS arena, has been a little behind the game.  In the past I have had to resort to 3rd-party tools to export our map data to KML.  Invariably this also requires a lot of massaging of the data afterwards before it is ready to be published on an online mapping service such as Google Maps.
    You can see an example of our school district layers pushed to Google through KML here.
    Now ArcGIS will have native KML export built in.  Used in conjunction with ArcGIS Server and other tools, this will make offering your GIS data to online mapping systems a very easy process, one that cuts down on maintenance and always hits live data.  (There’s a rough sketch of the Google Maps side of this after the list.)
  • PDF Support
    For a while now you’ve been able to export GIS maps as PDF.  This is a great feature, as ArcGIS Desktop also exports the text, which is completely searchable and selectable in Acrobat Reader.  I use this all the time when exporting maps of our district.  It’s amazing when I have several hundred streets on a map, go to the Acrobat Reader search box, type in a street name, and find it on the map in an instant.  This is really useful when other users download our maps and want to find where they live.  We have an online School Locator tool; however, having a map on your local machine is great for offline scenarios.
    Other than that, though, the PDF version of the map has still been fairly static.  ESRI has been working with Adobe to really exploit the abilities of Reader, and now you can export a wealth of data to PDF, including data frames, layers, and feature attributes.  In the PDF hierarchy you can see the individual data frames and layers, and when you click on a feature you get all the underlying data for that feature, just like using the Info tool in ArcMap.  The data can also be georeferenced, which allows a user to get X,Y coordinates from any area of the map.  There is no geocoding yet, but this is all pretty neat.
    This is pretty amazing because now you can get an incredible amount of information from a PDF without any Internet connection at all.  As more and more users move to mobile devices that may not have a direct connection to an online GIS service, having a PDF with all this information will be a great step forward, short of building an offline app.
  • Virtual Earth Integration
    They went through this area pretty fast so I didn’t get all the details.  It seems that you can now pull VE services and resources directly into ArcGIS Desktop and use them in your own maps, which means you have full access to the imagery and data.  This is all on demand, so you cannot store the resources for your own editing or offline use; on the other hand, it means you will always have the latest data.  When you open a map it retrieves the latest images, including any new ones Microsoft may have published, directly into your map.  This can offer a wealth of data if you have out-of-date imagery for your map content, or none at all.
    I assume that Google and other map services will be accessible as well, but ESRI kept touting its partnership with Microsoft, so I’m a little hesitant to say that.
  • JavaScript API
    This has been a sore point with ArcGIS in the past few years.  As I said above, ESRI has really been playing catch-up.  Most of ESRI’s online mapping products have been pretty bad: the UI design wasn’t great and they were terribly slow.
    I don’t know what the current tools are like (ESRI demos always run in a perfect world), but ESRI is starting to allow more options for connecting with your data.  One of these is the JavaScript API.
    On the surface this API seems pretty similar to Google’s or Microsoft’s: you reference a JavaScript file, point it at your resource data, and give it a div to place the contents into.
    When you publish a map to ArcGIS Server there are now several default options for consuming the data.  When you go to the service’s URL, ArcGIS Server lets you open the map in ArcMap, view it in the built-in viewer, or view it using the JavaScript API, among others (possibly KML export too, but I’m not sure).  If you choose the JavaScript API option, a new page opens with a standard web 2.0 map built on the ESRI API.  If you view the source you can see that there are only about 10 lines of code that actually retrieve and display the content; copy that text, paste it into your own app, and you’ve very easily added an interactive map resource to your pages.  Pretty nice indeed!  (A rough approximation of what those lines look like is after this list.)
    I have to laugh here because the ESRI rep demoing this turned a static (and very bad-looking) JPEG of a campus map into a fully GIS-capable interactive map in about 1 minute.  The crowd cheered.  :)  As any HTML/JavaScript developer knows, there are a lot of underlying assumptions being made, the first gotcha being that the div referenced by the ESRI map code has to actually exist in your DOM with a matching id.  This is of little worry for developers who understand what’s going on, but I know there will be a few business users going back to their organizations saying "Do this, it only takes 1 minute!" while their non-web-savvy GIS engineer spends a day on it.
    Eh, maybe I’m just pessimistic, but you can see the marketing "woohoo!" all over these demos.  ESRI always operates its demos in a perfect world.  But so does everyone else (e.g. Microsoft). 🙂
  • Mashups
    OK, if you are a web developer and haven’t been in a coma for the past few years, you should know what a mashup is.  In a nutshell, a mashup is simply a web page that takes data from one source (e.g. Flickr photos), combines it with another source (e.g. Google Maps), and displays the results (e.g. showing where the photos were taken on a map).
    John Grayson from ESRI’s Applications Prototype Laboratory created a great tutorial with 7 different examples of creating mashups using ESRI data and the JavaScript APIs.  Each one increases in its level of capability and complexity.  Unfortunately all the examples were based on retrieving and analyzing data, not on editing actual data and pushing updates back to the server.  (There’s a bare-bones sketch of the pattern after this list.)
    I can’t seem to find the slides or any information on John’s presentation anywhere, so hopefully he will publish them soon.  Otherwise, maybe I can throw a few together in my spare time.  (Yeah, like I have spare time!  I stayed up until almost 4am last night!)
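
Since a few of the items above are easier to show than describe, here are some rough sketches of my own (not from the demos, so treat them as approximations).  First, reverse geocoding: this is a hypothetical example of calling the reverseGeocode operation that a 9.3 GeocodeServer exposes over the REST API.  The server name, locator name, response field names, and coordinates are all placeholders for whatever you have published.

    // Hypothetical sketch: ask an ArcGIS Server 9.3 geocode service for the
    // address nearest a clicked map point via its REST reverseGeocode operation.
    function reverseGeocode(x, y) {
      var url = "http://myserver/ArcGIS/rest/services/MyLocator/GeocodeServer/reverseGeocode"
              + "?location=" + x + "," + y   // coordinates of the clicked point
              + "&distance=100"              // search tolerance, in map units
              + "&f=json";                   // ask for a JSON response

      var xhr = new XMLHttpRequest();        // assumes same-origin; use a proxy otherwise
      xhr.open("GET", url, true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          // Quick-and-dirty parse; use a real JSON parser in production code.
          var result = eval("(" + xhr.responseText + ")");
          // The address fields depend on your locator's style (Street, City, etc.).
          alert(result.address.Street);
        }
      };
      xhr.send(null);
    }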
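
Second, the consuming side of the KML story.  Once your layers are exported (or served) as KML/KMZ, the current Google Maps API (v2) can overlay them with a couple of lines.  This is a generic sketch, not our actual School Locator code; the container id, center point, and KML URL are placeholders, and the page still needs the usual Google Maps script include with your API key.

    // Minimal sketch: overlay an exported KMZ file on a Google map.
    var map = new GMap2(document.getElementById("map"));
    map.setCenter(new GLatLng(40.0, -105.0), 12);   // center/zoom for your area
    map.addControl(new GLargeMapControl());

    // GGeoXml fetches the KML/KMZ and draws its placemarks and polygons.
    var schoolLayers = new GGeoXml("http://www.example.com/kml/school_district.kmz");
    map.addOverlay(schoolLayers);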
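
Third, the JavaScript API boilerplate.  I didn’t copy the demo page’s source verbatim, but the ~10 lines you get from ArcGIS Server look roughly like this.  The server name, service name, and API version number are placeholders.

    <!-- Rough reconstruction: load the ESRI JavaScript API, point it at a map
         service, and hand it a div. -->
    <script type="text/javascript"
            src="http://serverapi.arcgisonline.com/jsapi/arcgis/?v=1.0"></script>
    <script type="text/javascript">
      dojo.require("esri.map");

      function init() {
        // The id here MUST match a div in the page -- the "1 minute" gotcha.
        var map = new esri.Map("mapDiv");
        map.addLayer(new esri.layers.ArcGISDynamicMapServiceLayer(
            "http://myserver/ArcGIS/rest/services/Campus/MapServer"));
      }
      dojo.addOnLoad(init);
    </script>

    <div id="mapDiv" style="width:600px; height:400px;"></div>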
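
Finally, a bare-bones mashup sketch in the same spirit as (but not taken from) John’s examples: grab point data from some other source and draw it on the ESRI map as graphics.  The photo list is hard-coded where a real page would call something like the Flickr API, the service URL is a placeholder, and it assumes the base map uses geographic (WGS84) coordinates.

    // Hypothetical mashup sketch: plot points from another source on an ESRI map.
    dojo.require("esri.map");

    function initMashup() {
      var map = new esri.Map("mapDiv");
      map.addLayer(new esri.layers.ArcGISDynamicMapServiceLayer(
          "http://myserver/ArcGIS/rest/services/Basemap/MapServer"));

      dojo.connect(map, "onLoad", function () {
        // Pretend these came back from a photo-sharing web service.
        var photos = [
          { title: "Photo A", lon: -117.19, lat: 34.05 },
          { title: "Photo B", lon: -117.16, lat: 34.07 }
        ];
        var symbol = new esri.symbol.SimpleMarkerSymbol();
        for (var i = 0; i < photos.length; i++) {
          var pt = new esri.geometry.Point(
              photos[i].lon, photos[i].lat,
              new esri.SpatialReference({ wkid: 4326 }));
          map.graphics.add(new esri.Graphic(pt, symbol, { title: photos[i].title }));
        }
      });
    }
    dojo.addOnLoad(initMashup);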

Overall it was a great session.

I’ll be adding more posts throughout the conference on anything I see that’s noteworthy.  Those will hopefully be much shorter reads!