
Developing Google Glass Applications

Here at RDF we have recently had the opportunity to develop a prototype Google Glass application for one of our clients. Our client wished to showcase to internal stakeholders how Glass can be used to deliver innovative new applications for their business.

It was an interesting but, at just two weeks, very short project. However, it did serve to give us some idea of the challenges of developing Glass applications before the hardware has even been made available in the UK.

Technical Overview

When developing for Glass there are two API options that can be used, either separately or combined together.

  • The Mirror API is a RESTful, web-based API that allows developers to build platform-independent applications which can post to a user’s Google Glass timeline (which functions something like their Facebook feed). A photograph taken by Glass can be sent to a third-party contact, for example, or added to Facebook or Twitter.
  • The Glass Development Kit (GDK) is based on the existing Android Software Development Kit (SDK) and allows the development of native apps that run directly on the Glass and leverage the hardware (GPS, accelerometer, gyroscope, and magnetometer) to provide more interactive applications. The Glassware that we developed required access to GPS and the magnetometer built into the Glass to be able to determine which way the user is facing so this feature necessitated the use of the GDK.
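To make the Mirror API option above concrete: a timeline item is just a small JSON document POSTed to an authenticated REST endpoint. The sketch below is illustrative only; the helper name is our own, and a real application would use Google's client library and OAuth 2.0 credentials rather than hand-built JSON.

```java
// Illustrative sketch: building a minimal Mirror API timeline-item payload.
// The class and method names here are hypothetical; a production app would
// use the official Google API client library for the Mirror API.
public final class TimelinePost {

    /** Builds a minimal timeline-item payload, e.g. {"text":"Hello from Glass"}. */
    static String buildTimelineItemJson(String text) {
        // Escape embedded quotes so the payload stays valid JSON.
        return "{\"text\":\"" + text.replace("\"", "\\\"") + "\"}";
    }

    // The payload would then be POSTed to the timeline endpoint
    // (https://www.googleapis.com/mirror/v1/timeline) with an OAuth 2.0
    // bearer token in the Authorization header; the item subsequently
    // appears as a card in the user's timeline.
}
```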

Our application was created using some of the existing GDK sample code as a starting point. The application runs as a ‘Live Card’. Live cards persist in the present section of the user’s timeline while they remain relevant, and the user removes the live card when they have finished with the application. We overlaid “live” data onto our Glass application, retrieved via Spring MVC web services which returned JSON data to the Glass. The application needed to draw directly onto the canvas, for which we used some of the Android graphics libraries. We also had to develop a simple hit detection algorithm to determine which of the data items being plotted were currently of interest to the user (based on their current line of sight).
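The hit detection mentioned above can be sketched in plain Java: compute the compass bearing from the user's GPS position to each data item, then test whether that bearing falls within the user's current field of view around the heading reported by the magnetometer. The class name and the 30-degree field of view below are assumptions for illustration, not our client's actual values.

```java
// Hypothetical sketch of line-of-sight hit detection. An item "hits" when the
// bearing from the user to the item lies within half the assumed field of
// view either side of the user's current compass heading.
public final class HitDetector {

    // Assumed horizontal field of view in degrees (not the real device value).
    private static final double FIELD_OF_VIEW_DEG = 30.0;

    /** Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in [0, 360). */
    static double bearingTo(double lat1, double lon1, double lat2, double lon2) {
        double phi1 = Math.toRadians(lat1);
        double phi2 = Math.toRadians(lat2);
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(phi2);
        double x = Math.cos(phi1) * Math.sin(phi2)
                 - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLon);
        // Normalise atan2's (-180, 180] result into [0, 360).
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    /** True if the bearing to an item is within half the field of view of the heading. */
    static boolean isInView(double headingDeg, double bearingDeg) {
        // Smallest signed angular difference, handling the 0/360 wraparound.
        double diff = Math.abs(((bearingDeg - headingDeg + 540.0) % 360.0) - 180.0);
        return diff <= FIELD_OF_VIEW_DEG / 2.0;
    }
}
```

In the real application the heading would come from the Android sensor APIs (the magnetometer and accelerometer combined) and the user's position from the location services, with each plotted item tested against `isInView` on every redraw.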

Challenges

The following is a brief list of some of the main challenges we faced while developing our Glassware prototype.

  1. Availability of hardware. We only had one Google Glass device to use. Although for this project we were only a small team (two developers and an analyst/tester), this still posed the problem of how to share the hardware so that we could develop and test features simultaneously. Dividing the Agile project into short 3–4 day sprints and multiple small user stories in order to build the functionality incrementally helped somewhat. But the ability for developers to use emulators to continue development whilst the application was being tested would have been an advantage, which brings us to…
  2. Lack of emulation. Emulators for specific mobile devices can be easily integrated with the Eclipse IDE when building Android mobile applications. This allows developers to build applications without the need to continually deploy them to the hardware. Unfortunately, at the time of writing there is no emulator available for testing Google Glass applications. As a workaround we investigated the possibility of running the Google Glass software on an Android tablet and then deploying our Glassware to the tablet. As developers have already ported the Glass code to the Android platform this is theoretically possible – see the Xenologer Github link at the end of this blog for more details – but it seems very error-prone currently. Most Glass features, such as using the camera or sending an email, failed when we tested them on a Nexus 7 tablet. This may well change, though, as developers keep working on porting the Glass code, so the Xenologer project could be a useful resource in the future.
  3. Hardware issues. It should be reiterated that it is very early days for Google Glass, but the Glass we borrowed for the project was prone to overheating, especially when plugged into a computer for debugging purposes. We found it necessary to turn the Glass off periodically so that it could cool down. Needless to say, this made developing to tight timescales more difficult. It is possible that the Glass we used had a hardware fault, or maybe all Glass hardware is currently prone to this problem. It is to be hoped that any hardware issues will be resolved before Glass is officially released.
  4. UI considerations. Building apps for Glass involves careful consideration of the way that users interact with Google Glass as opposed to normal mobile devices. The Glass documentation on the Google website gives some good common-sense pointers for this; it stresses that the application should avoid distracting the user or doing anything unexpected, for example. Additionally, we found that fitting content into the display so that it was readable and usable involved a careful trade-off between losing information and ending up with a Glass display which was too busy and difficult to follow. We found the ‘Glass Playground’ site useful for creating mock-up images of how the application might look. These were reviewed by the product owner, and also used by the developers as a reference when building features.

Summary

The Glass project has given RDF practical experience in how to tackle the challenge of building Glassware and provided some valuable lessons which we can apply to new work arising in this field.

Related Links
