Extending Your jQuery Application with Amplify.js

This was a pretty amazing article about how to extend your jQuery code even further using Amplify.js.  Elijah has a great writing style with very organized and clear examples. 

He first starts out with what I think is a pretty nice and clean example app.  Then he proceeds to discuss various improvements he makes using Amplify.js and refactoring to various patterns that really start to bring out the flexibility of the app.

Definitely take a look!

http://msdn.microsoft.com/en-us/scriptjunkie/hh147623.aspx

Alternating Ordered List Item Styles

We have a Policies and Procedures document that we are making available on our website.  This document in its entirety easily spans several hundred pages and has a pretty lengthy table of contents.

To make this easier to navigate and consume, we are posting several of the larger topics as individual documents, available from a linked table of contents created as a web page.

The requirement is that the table of contents uses different numbering styles at each indentation.  So, for instance, the outline below:

Topic A

   Sub-topic A.1

   Sub-topic A.2

      Sub-topic A.2.1

      Sub-topic A.2.2

Topic B

Topic C

would become:

1. Topic A

   a. Sub-topic A.1

   b. Sub-topic A.2

      i. Sub-topic A.2.1

      ii. Sub-topic A.2.2

2. Topic B

3. Topic C

This document isn’t maintained in a database yet, so for now the table of contents is created in good old-fashioned HTML.  Using a standard <ol> (ordered list) element, it initially looks like this:

  1. Topic A
    1. Sub-topic A.1
    2. Sub-topic A.2
      1. Sub-topic A.2.1
      2. Sub-topic A.2.2
  2. Topic B
  3. Topic C

The way to change the numbering style is with the CSS list-style-type property on the <ol> element, which accepts values such as decimal, lower-alpha and lower-roman.  You can find a great example of this at w3schools.  So, I could manually change the list style of each <ol> tag based on how much it was indented, but this would be a pain to maintain.  Any time we changed the layout I would have to change the list-style-type values and be sure to keep everything consistent.  I thought this would be a perfect place to use jQuery.

Using jQuery I was able to define an array of the styles I wanted to use.  jQuery then finds the top-level <ol> and each nested <ol> and changes the style for each level of indentation.  Once done, I can simply maintain the table of contents and jQuery handles the styling.  I couldn’t find any jQuery plug-ins that did this, so here is the code I came up with.  If I end up using it more than a couple of times I’ll probably create a plug-in for it.

function styleOutline() {
    // List-item styles to cycle through at each level of indentation
    var olStyles = ['decimal', 'lower-alpha', 'lower-roman'];

    // Process the top-level list.
    styleOl($('ol:first'), 0);

    function styleOl(element, styleIndex) {
        // Apply the style for this depth, wrapping around if the
        // outline nests deeper than the number of styles
        element.css('list-style-type', olStyles[styleIndex % olStyles.length]);

        // Call recursively for each <ol> nested inside this list's items
        element.children('li').each(function () {
            var ol = $(this).children('ol');
            if (ol.length) {
                styleOl(ol, styleIndex + 1);
            }
        });
    }
}

This works great and took me only a couple of minutes to do. Definitely an improvement over the manually written way.  I hope this helps someone else out there!
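
One detail worth calling out: the modulo in olStyles[styleIndex % olStyles.length] means the three styles simply repeat if the outline ever nests deeper than three levels.  A quick sketch of that behavior (the helper name here is mine, for illustration only):

```javascript
// The same style array used in styleOutline() above.
var olStyles = ['decimal', 'lower-alpha', 'lower-roman'];

// Pure helper showing which list style a given nesting depth receives.
// Depth 0 is the outermost <ol>, depth 1 the first nested <ol>, and so on.
function styleForDepth(depth) {
  return olStyles[depth % olStyles.length];
}

// Depths 0, 1 and 2 get decimal, lower-alpha and lower-roman;
// depth 3 wraps back around to decimal.
```

Wiring the real function up is just a matter of calling styleOutline() from a document-ready handler, e.g. $(document).ready(styleOutline);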

When Bad UI Sticks Around – JavaScript Alert Boxes

We recently migrated from one online IEP vendor to another.  This is a program for collecting and managing Special Ed information, but it is an enterprise app and worthy of critique and comment.  They do a great job, but there are some small UI issues I wish could be fixed.

For instance, when you enter data onto a form and try to leave the form without saving they do a nice job of alerting you that you are about to lose the valuable data you just entered.  However, there is a catch:

[Screenshot: the vendor’s JavaScript alert box]

Ouch, did you read that correctly?  “Hit CANCEL to Leave this page.”  The problem with that is I would say 99% of users are used to hitting Cancel when they realize they made a mistake and want to undo it.  Several times I’ve had to stop my knee-jerk reaction to hit Cancel when I realized I had updated something and hadn’t hit Save.

Not only are the button actions inconsistent with what users are used to, I would go further and say they are actually opposite, which to me is one of the worst UI anti-patterns.  In addition, the buttons are not very informative: OK and Cancel just don’t tell you what they are doing with your data.  Unfortunately, I think it might be stuck as it is.  The software now boasts adoption across over half the state of California.  That’s a good-sized client base, and the current users are already conditioned to hitting OK to go back to the page and hit the Save button.  If the vendor were to reverse the effect of the buttons now, I can just imagine the outcry there would be over users losing data simply because they are used to hitting OK.  You see, no matter how many emails, home-page news alerts or training documents you give out, half the users won’t even bother to read them.  They won’t know there is a change until they have already lost their data.  Not a good thing.

If I had to implement this change I would re-label the buttons altogether.  This would not only alert the user that something functions differently than what they are used to, it would also make the dialog more usable, which is itself a good UI pattern.  Unfortunately you cannot relabel the buttons of a plain JavaScript confirm dialog.  Since the vendor already makes use of jQuery, the jQuery UI Dialog plug-in is a great tool to use.  It also standardizes the dialog boxes, since JavaScript alerts are rendered inconsistently between browsers.

Here’s a sample of what I would do using the jQuery UI Dialog plug-in.  I think it is easy to read and clear in informing the user on what they are about to do.  While this would be a change to what users are used to I think it would be one that is easily adapted to.

[Screenshot: mockup of the relabeled confirmation dialog built with jQuery UI]
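
For the curious, here is a minimal sketch of how such a dialog could be built, assuming jQuery UI is loaded on the page.  The element id, button labels, and the pendingUrl variable are all placeholders of mine, not the vendor’s actual markup:

```javascript
// Options for the unsaved-changes dialog. Unlike a bare OK/Cancel,
// each label states exactly what will happen to the user's data.
var unsavedChangesDialog = {
  modal: true,
  title: 'You have unsaved changes',
  buttons: {
    'Go back and save': function () {
      // Return to the form so the user can hit Save.
      $(this).dialog('close');
    },
    'Leave without saving': function () {
      // pendingUrl is a placeholder for wherever the user was headed.
      window.location.href = pendingUrl;
    }
  }
};

// "#confirm-leave" is a hypothetical div holding the dialog text.
if (typeof jQuery !== 'undefined' && jQuery.ui) {
  $('#confirm-leave').dialog(unsavedChangesDialog);
}
```

The buttons option of jQuery UI’s Dialog takes a map of label-to-handler, which is exactly what makes the relabeling trivial.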

Hacking your vendor’s product; sometimes good, sometimes not

We have a particular web product that we use for all of our clients.  It is a pretty incredible system and has several hundred clients.  We are one of their largest, so between our implementation and their normal customer needs, it’s understandable that they cannot accommodate every one of our requests right away.

That being said, there was one slight “feature” that went against one of our policies.  So, what to do?  Well, I put on my curiously tipped white hat and go to work.  One of the cool features of the system is that they allow us, the admins for our group for all intents and purposes, to update portions of the page for our users.  They don’t escape HTML, so it turns out I’m able to inject my own JavaScript code.

Ooohhh, that’s bad.  😉 I let the company know but I proceed on.

Turns out they, being the creative bunch they are, make use of jQuery.  This is definitely turning into a possibility with my toolbox already stocked and ready to go.

The particular portion of the page I choose to launch my JavaScript from is ideal because it is on the navigation bar, ensuring that it will be displayed on almost every screen.  Because we can only change content for our users it ensures that my changes will only affect our users and not those of their other clients.  However, with 5,000 users I had better make sure my code is well tested and clean.  I still have the ability to disrupt my 5,000 users if I make a bad mistake.  Point noted.

The vendor puts a character limit on the particular area of the page I’m changing.  That means I have to inject JavaScript that tells the browser to load a larger script from elsewhere.  No big deal; I put the script on our department website.  Hmmm, not so good.  The system now throws a warning that I’m loading JavaScript from a non-secure source.  Sure would be nice if I could load the script directly from my vendor’s server.

Well, I can. 🙂  The vendor also allows us to upload documents to a library that our users can download from.  So, I upload the script.  Uh oh, it didn’t work.  Turns out they block all but a few extensions.  So, instead of calling it myscript.js I change it to a text file called myscript.txt.  That uploads great and, guess what, the browser is happy to execute it.  Great!  On my way.
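
The bootstrap that has to fit inside the character-limited field is only a few lines.  This is a sketch under my own assumptions; the library path is a placeholder, not the vendor’s real URL:

```javascript
// Path to the uploaded script. It keeps its .txt extension because the
// vendor blocks .js uploads; the browser executes whatever bytes the
// script element points at, regardless of file extension. A relative,
// same-server path also avoids the non-secure-source warning.
var scriptUrl = '/DocumentLibrary/myscript.txt'; // hypothetical library path

if (typeof document !== 'undefined') {
  var loader = document.createElement('script');
  loader.type = 'text/javascript';
  loader.src = scriptUrl;
  document.getElementsByTagName('head')[0].appendChild(loader);
}
```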

After a few tests, it turns out I’m able to quite nicely make our little “issue” a thing of the past.  First the script checks to make sure the page in question is the one being viewed, so the bulk of the code (all 6 lines of it) runs only when necessary.  Then it cleans up after itself by removing my injected code from the page DOM.  Nice!
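
A sketch of what that payload might look like.  The page path, the selector being fixed, and the id of the injected container are all assumptions of mine for illustration, not the vendor’s real markup:

```javascript
// Pure predicate: is this the one page with the "issue"?
// The path here is a made-up example.
function isTargetPage(pathname) {
  return pathname.indexOf('/Reports/Summary') !== -1;
}

if (typeof jQuery !== 'undefined') {
  $(function () {
    // Only do the real work when necessary.
    if (isTargetPage(window.location.pathname)) {
      // The fix itself: hide the element our policy disallows (placeholder selector).
      $('#offending-report-link').hide();
    }
    // Clean up after ourselves: remove the injected bootstrap from the DOM.
    $('#injected-bootstrap').remove();
  });
}
```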

It’s not foolproof, of course.  If the user isn’t running JavaScript then my trick won’t work; however, half the system won’t work for them anyway since JavaScript is required.  Also, if someone is sneaky enough they can load the original source and see where I injected my code.  But if they are going to do that, we have bigger problems than them simply disabling my script.

OK, that covers the techy issues; what about the people ones?  Well, as you may have guessed, no matter how many times we alerted our users to the changes we made, some users started calling the vendor’s Help Desk asking what happened to the report they were used to.  The vendor then called me after spending two days trying to solve an issue they couldn’t see.  They didn’t really care for the changes.  They understood the necessity and gave me props for how I handled it, but now their Help Desk had no idea whether a problem call was because of their system or my code.  Granted, that totally makes sense.

So, they gave us a choice.  We can continue to develop our own “customizations”, but we have to take over all Help Desk calls for our users for the entire system.  I understand that, but directly supporting 5,000 users is not something we are capable of doing.  I have to note also that the Help Desk is absolutely outstanding, one of the best I have ever worked with.

So, after the vendor agreed to look into implementing our change we are allowed to keep the code and support the Help Desk for any questions regarding this one issue.  Once the vendor has solved the problem we will remove it and go on our way.

So, here is the moral of the story: hacking your vendor’s site opens doors to lots and lots of capabilities; however, it may aggravate your vendor a little or, quite possibly, cause an early termination of your contract.  I suggest you keep this little tool in your developer’s pocket for extreme cases.

It was fun and our vendor is great.  Once I put the change in place we started to come up with lists of additional “nice to have” features I could implement, but, in the end, we’ll just hand those over to our vendor and see if they make it in someday.

Microsoft to include jQuery in Visual Studio

This is absolutely amazing.  If you’ve never used jQuery, definitely check it out.  Ever since James Johnson (president of the Inland Empire .Net User’s Group) did a presentation on it last year I’ve been hooked.  It is an outstanding JavaScript framework that actually makes JavaScript a pleasure to use.

As a classically trained developer I’ve always approached JavaScript as a tool to use only when absolutely necessary and as a last resort.  Dealing with cross browser compatibility and just plain frustration over the language has made JavaScript a tool of evil in my development toolbelt.

With jQuery I not only now consider JavaScript a valuable asset I actually love to develop in it.

Hearing that Microsoft is now including it in their IDE is pretty exciting.  This means that IntelliSense and debugging (already possible with some great workarounds from the jQuery community) will most likely eventually be fully supported for jQuery.  I’ve worked with lots of development environments and Visual Studio is by far one of the best IDEs around.

Probably even more exciting is that this reinforces the sense that MS is really interested in working with developers.  Some of my friends are probably tired of me bashing the old-school “Microsoft Way”.  Seeing the real encouragement from MS through employees like Scott Gu, Phil Haack and others on projects like MVC really makes it apparent that MS is offering alternatives for developers who want the ability to code using modern standards.

Actually integrating jQuery into Visual Studio shows that MS is willing to offer alternatives to their own products, such as the ASP.Net AJAX JavaScript framework.  MS is no longer in the “We’re Microsoft.  Our way or the highway” mentality.

ESRI 2008 UC: Plenary Session

The main presentation by Jack Dangermond was pretty good, as was last year’s.  It’s a good mix of what is new in ArcGIS 9.3, what users are doing in the field, what’s coming on the horizon, and an overall impact of GIS in the world. 

From a technical point of view here are just a few of the new features in 9.3.  There are way too many to list so take a look at the ESRI site if you want a comprehensive list.

  • Reverse Geocoding
    This has been a long time coming and a hot item on the request list.  This wouldn’t warrant too much discussion, except that the implementation was really well done.  Typically (from my point of view) ESRI is a functional program but wouldn’t win any UI or productivity awards for most of its features.  The buttons or dialogs for typical tasks aren’t always where you expect them, and sometimes you have to drill down through 10 screens just to get to your data.
    However, the reverse geocoding is a very easy-to-use tool, with a crosshair cursor and a small dot that snaps to the point on your street network nearest the cursor.  Clicking very quickly gives you a geocoded address at that point.  If you click on an intersection, the geocode is the intersection of those streets.
    These reverse-geocoded addresses can be saved as pushpins, which I believe can later be saved as features.  Very nice!
    • Oh, I just stepped into the Geocoding Intro technical workshop.  Now when you run a geocoding process, if there are any addresses that could not be geocoded, in addition to the standard options to fix this you can now use Pick Location.  This allows you to use the Reverse Geocoding tool to pinpoint exactly where on the map the address should be.  This is great, as it might be difficult, if not impossible, to change the address in the source data.
  • KML Export
    As the rest of the world jumps online, ESRI, arguably the largest provider in the GIS arena, has been a little behind the game.  In the past I have had to resort to 3rd-party tools to export our map data to KML.  Invariably this also requires lots of massaging of the data afterwards before it is ready to be published to an online mapping service such as Google.
    You can see an example of our school district layers pushed to Google through KML here.
    Now ArcGIS will have native KML export built in.  When used in conjunction with ArcGIS Server and other tools, this will make offering your GIS data to online mapping systems a very easy process, one that frees up maintenance and always hits live data.
  • PDF Support
    For a while now you’ve been able to export GIS maps as PDF.  This is a great feature as ArcGIS Desktop will also export the text as well which is completely searchable and selectable using Acrobat Reader.  I use this all the time when exporting maps of our district.  It’s amazing when I have several hundred streets on a map, go to the Acrobat Reader search box, type in a street name and find it in an instant on a map.  This is really useful when other users download our maps and want to find where they live.  We have an online School Locator tool, however, having a map on your local machine is a great tool for use in offline scenarios. 
    However, other than this ability the PDF version of the map has still been fairly static.  ESRI has been working with Adobe to really exploit the abilities of Reader.  Now you can export a wealth of data to PDF.  This includes data frames, layers and feature attributes.  In the PDF hierarchy you can now see the individual data frames and layers.  When clicking on a feature you can get all the underlying data for that feature.  This is just like using the Info tool in ArcMap.  Also, the data can be georeferenced.  This allows a user to get X,Y coordinates from any area of the map.  There is no geocoding yet, but this is all pretty neat.
    This is pretty amazing because now you can get an incredible amount of information just from an offline PDF.  This is not only useful for Internet connected machines.  As more and more users are using mobile devices that may not have direct connection to an online GIS service, having a PDF they can use with this info will be a great step forward short of building an offline app.
  • Virtual Earth Integration
    They went through this area pretty fast so I didn’t get all the details.  It seems that you can pull VE services and resources directly into ArcGIS Desktop now and use them in your own maps.  This means that you have full access to the imagery and data.  This is all on demand, which means that you cannot store the resources for your own editing or offline use.  However, it also means that you will always have the latest data.  When you open a map it will retrieve the latest images, including any new ones Microsoft may have published, directly into your maps.  This can offer a wealth of data if you have out-of-date imagery, or none at all, for your map content.
    I assume that Google and other map services will be accessible as well; it’s just that ESRI kept touting its partnership with Microsoft, so I’m a little hesitant to say this.
  • JavaScript API
    This has been a sore point with ArcGIS in the past few years.  As I said above, ESRI has really been playing catch-up.  Most of ESRI’s online mapping products have been pretty bad: the UI design wasn’t great and they were terribly slow.
    I don’t know what the current tools are like (and ESRI demos usually run in a perfect world), but ESRI is starting to allow more options for connecting with data.  One of these is the JavaScript API.
    This API, on the surface, seems pretty similar to Google’s or Microsoft’s, where you specify a JavaScript file, the resource data and a div to place the contents into.
    When you publish a map to ArcGIS Server there are now several default options for consuming the data.  When you go to the URL, ArcGIS Server now lets you open the map in ArcMap, view it in the internal viewer, or view it using the JavaScript API, among others (possibly KML export, but I’m not sure).  If you choose the JavaScript API option, a new page opens with a standard web 2.0 map using the ESRI API.  If you view the source you can see there are only about 10 lines of code that actually retrieve and display the content.  If you simply copy that text you can paste it into your own apps and very easily add your interactive map resource to your pages.  Pretty nice indeed!
    I have to laugh here because the ESRI rep demoing this function turned a static (and very bad looking) JPEG of a campus map into a fully GIS-capable interactive map in about 1 minute.  The crowd cheered.  :)  As any HTML/JavaScript developer might know, there are a lot of underlying things being assumed, the first gotcha being to make sure your div is properly named both in your DOM and in the JavaScript code referencing the ESRI map resource.  This is of little worry for developers who understand what’s going on, but I know there will be a few business users going back to their organizations saying "Do this, it only takes 1 minute!" and their non-web-savvy GIS engineer will be spending a day on it.
    Eh, maybe I’m just pessimistic, but you can see the marketing "woohoo!" all over these demos.  ESRI always operates their demos in a perfect world.  But so does everyone else (e.g. Microsoft). 🙂
  • Mashups
    OK, if you are a web developer and haven’t been in a coma for the past few years, you should know what a mashup is.  In a nutshell, a mashup is simply a web page that takes data from one source (e.g. Flickr photos), combines it with another source (e.g. Google Maps) and displays the results (e.g. showing where the photos were taken on a map).
    John Grayson from ESRI’s Applications Prototype Laboratory created a great tutorial with 7 different examples of creating mashups using ESRI data and JavaScript APIs.  Each one increases in its level of capability and complexity.  Unfortunately all the examples were based on retrieving and analyzing data, not on editing actual data for updating on the server.
    I can’t seem to find these slides or any information on John’s presentation anywhere so hopefully he will publish these soon.  Otherwise in my spare time maybe I can throw a few together.  (Yeah, when do I have spare time!  I stayed up to almost 4am last night!)
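
From memory, the handful of generated lines look roughly like the sketch below.  The API details, service URL, and div id are my placeholders rather than anything copied from ESRI’s generated page, and the div id is exactly where the gotcha above bites:

```javascript
// Assumes the ArcGIS JavaScript API (which rides on top of Dojo) is already
// loaded via a script tag, and that the page contains <div id="mapDiv"></div>.
var serviceUrl =
  'http://myserver/ArcGIS/rest/services/Campus/MapServer'; // hypothetical service

if (typeof dojo !== 'undefined') {
  dojo.require('esri.map');
  dojo.addOnLoad(function () {
    // 'mapDiv' must match an element id in your DOM -- the gotcha above.
    var map = new esri.Map('mapDiv');
    map.addLayer(new esri.layers.ArcGISDynamicMapServiceLayer(serviceUrl));
  });
}
```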

Overall it was a great session.

I’ll be adding more posts throughout the conference on anything I see that’s noteworthy.  Those will hopefully be a much shorter read!