Custom Vuforia VuMarks to identify and monitor IoT Devices with HoloLens

Some of the most popular Augmented and Mixed Reality experiences are currently games or immersive 3D applications. But what makes headsets like the HoloLens so different and special is the ability to still see the physical world. It is this link that opens up so many new opportunities and future possibilities for spatial computing. An interesting scenario is using HoloLens to interact with IoT devices in the real world. Imagine that just by gazing at devices around a room you could identify a specific IoT device, review its real-time telemetry and control it over the air!

Setting up some sample IoT devices to interact with…

To test out this scenario the first thing we need is access to some IoT devices to identify. In our case, and for the sake of simplicity, we use simulated IoT devices created with the Azure IoT Solution Accelerators Remote Monitoring sample, which you can try out.

Once the Azure IoT Remote Monitoring solution is provisioned and ready you can select it to review a list of devices. This provides us with the list of device Ids we will use to generate the Vuforia VuMarks in the next steps.

Identifying IoT devices in the real-world using VuForia VuMarks

The second thing we need is a way to identify each IoT device in HoloLens. One approach we tried during a HoloLens hack was to use Vuforia VuMarks to identify each device. A VuMark template contains a particular type of encoded data: numeric, string or raw bytes. Initially I tried out the default numeric type VuMarks from the Vuforia samples to check everything was working before trying anything more complex. Bear in mind there will also be a number of physical and environmental factors to test and consider, including VuMark placement, size and the lighting conditions in the area.
Tip: I found it useful to test the VuMarks by saving all the generated images on my iPhone and testing them in the Unity Editor using the built-in web cam.

Creating a custom VuMark

I used the Vuforia VuMark Illustrator template to create a custom VuMark. In my case I wanted to support a 32-character string to contain a GUID, so I created a string type VuMark with 280 data elements. To save time designing your own VuMark you can download my finished custom GUID VuMark SVG. If you want to create your own VuMark I’ve included a list of VuMark element requirements below so you can get an idea of how complex the design would need to be and compare how many elements are required for each data type:

Id length | String elements required | Byte elements required
1 | 35 | 40
4 | 56 | 64
8 | 84 | 96
10 | 98 | 112
11 | 112 | 120
12 | 119 | 128
14 | 133 | 144
16 | 147 | 160
18 | 161 | 176
20 | 182 | 208
22 | 196 | 224
24 | 210 | 240
32 | 280 | 320
48 | 406 | 464
64 | 546 | 624
100 | 840 | 928
Maximum numeric Id | Numeric elements required
9 | 28
99 | 31
999 | 34
9999 | 38
9 x5 | 41
9 x6 | 50
9 x7 | 54
9 x8 | 57
9 x9 | 60
9 x10 | 64
9 x11 | 67
9 x12 | 70
9 x13 | 74
9 x14 | 77
9 x15 | 80
9 x16 | 84
9 x17 | 87
9 x18 | 90
9 x19 | 94

(Here 9 xN denotes a maximum Id of N nines, e.g. 9 x5 = 99999.)

For more info on designing VuMarks you can download the VuMark design guide or view the design guide docs. I also found the video explaining the VuMark design process most helpful. NB: To design your own custom VuMarks you will need Adobe Illustrator to run the VuMark template scripts.

Illustrator / VuMark Scripts troubleshooting notes:

  • You may have to restart Illustrator after copying the scripts into the C:\Program Files\Adobe\Adobe Illustrator CC 2018\Presets\en_US\Scripts directory.
  • If you hit an error when setting up a new VuMark using the Illustrator scripts v6.0.112 then check you have Adobe’s Myriad Pro fonts installed.
  • If you can’t see the Illustrator canvas or the document area is blank or black then you might have to disable GPU acceleration under Preferences > Performance.

Creating custom VuMark database for Unity

Once you’ve designed your custom VuMark in Illustrator and it passes all the tests you will be ready to export your VuMark Template artwork. If you don’t have your own design ready you can download my GUID VuMark SVG artwork.

Note: If you’re starting a new design it’s preferable to avoid rotational symmetry in your VuMark’s border or contour, otherwise you will have some additional work to do; the validation scripts also don’t seem to provide a clear indication of whether this is handled correctly. You might also notice the Border and Clear Space width checks only show a “VERIFY” status – it is left to the designer to manually check that the magenta overlay around the VuMark contour falls within the border and clear space boundary.

  1. If you haven’t used VuForia before you will have to create a developer account and get a free license key for development in Unity.
  2. Create a new VuMarks database.
  3. Upload the custom VuMark SVG artwork file into your VuMark database. Note: You should set the width of the VuMark in relation to Unity’s unit of measurement, which is meters. In my case I want to recognize the VuMark on my iPhone, which is 6 cm wide, therefore I use a value of “0.06” m.
  4. Select your VuMark template target to download as your VuMark database.
  5. Download database for Unity Editor.
  6. Import your VuMarks database package into Unity project. If you don’t have your own Unity project you can setup the Mixed Reality IoT Monitoring sample to get started.
  7. In the Unity scene check that the VuMark Behaviour is set up correctly with your custom VuMark Database and Template and has Extended Tracking enabled for Mixed Reality.
  8. Open the VuForia AR Camera configuration settings to enter your VuForia developer license key and to load and activate the VuMark database.
  9. Generate the VuMark images for each Device Id you want to recognize.

    For my sample IoT devices I generated VuMarks for the following device Ids: “chiller-01.0”, “chiller-02.0”, “elevator-01.0”, “elevator-02.0”, “furnace-01.0”.


    Tip: Save the generated VuMark images to iPhone / Android to test with. (I just saved the generated VuMark PNG images to my OneDrive images folder to sync onto my iPhone.)
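Once a VuMark is recognized in the scene, the encoded device Id can be read from the VuMark instance. A minimal sketch, assuming the Vuforia Unity API of that era (the exact callback registration may vary between Vuforia versions):

```csharp
using UnityEngine;
using Vuforia;

// Sketch: listens for VuMark detection and reads the encoded
// string instance Id (the IoT device Id in this scenario).
public class VuMarkDeviceIdentifier : MonoBehaviour
{
    void Start()
    {
        // Register for VuMark detected events via the state manager
        var vuMarkManager = TrackerManager.Instance.GetStateManager().GetVuMarkManager();
        vuMarkManager.RegisterVuMarkDetectedCallback(OnVuMarkDetected);
    }

    void OnVuMarkDetected(VuMarkTarget target)
    {
        // For a string type VuMark template the instance Id is exposed as a string
        if (target.InstanceId.DataType == InstanceIdType.STRING)
        {
            string deviceId = target.InstanceId.StringValue;
            Debug.Log("Detected IoT device: " + deviceId);
            // e.g. pass deviceId to the Azure Functions endpoint to fetch telemetry
        }
    }
}
```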

Running the sample Mixed Reality IoT Monitoring Unity project

To run the Mixed Reality IoT Monitoring Unity project you will also need to setup the Azure Functions endpoints to get the device data from the Azure IoT sample.

  1. Fork or clone the Azure Functions project on github.
  2. Open the project in Visual Studio 2017 and Publish the solution.
  3. Select “New Profile” in the Publish dialog.
  4. Select “Create New” Function in Publish target dialog.
  5. Create “GetIoTHubDataFunction” App in the App Service dialog.
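The published function is essentially an HTTP trigger that takes a device Id and returns that device’s telemetry. A hypothetical skeleton in the Visual Studio 2017 / Azure Functions style of the time (the IoT data lookup itself is elided, and the parameter name is an assumption):

```csharp
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class GetIoTHubDataFunction
{
    [FunctionName("GetIoTHubDataFunction")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req,
        TraceWriter log)
    {
        // Read the device Id recognized by the VuMark from the query string
        string deviceId = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => q.Key == "deviceId").Value;

        if (string.IsNullOrEmpty(deviceId))
            return req.CreateResponse(HttpStatusCode.BadRequest, "Pass a deviceId");

        log.Info($"Telemetry requested for device: {deviceId}");

        // Query the IoT solution's storage for the latest telemetry here...
        return req.CreateResponse(HttpStatusCode.OK, new { deviceId });
    }
}
```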

Next steps…

Using Vuforia VuMarks we are able to identify an IoT device using a HoloLens. Then, using the recognized device Id as a parameter, we can poll an Azure Functions endpoint to return the device’s telemetry. The next step in this scenario would be to add buttons to control each device listed in the device’s payload.

Analytics for Mixed Reality

Behind every good user experience is great analytics

If you have ever designed or developed client-side applications or websites you’ve probably integrated with an analytics service that provides telemetry data to help you make informed design choices and development decisions to improve user experience and business outcomes.
One of the key techniques is to track steps or funnel operations as conversions, so you can calculate a conversion rate for each session. To improve the conversion rate, a good idea is to watch out for the steps with a high bounce rate where users are dropping off. If the bounce rate is very high then there might even be a blocker or flaw affecting users. Either way, the collection and study of analytics is essential for providing the insights that help designers and developers craft better user experiences and advance product development.

Application Insights for Unity

If you’re a Unity developer, or you develop in VR, AR, MR or XR, you might have stuck the issue of gathering analytics onto the backlog. But it should be one of the first items done, so you can use it to help plan and prioritize the other features. To help you get started I’ve made an Application Insights for Unity sample so you can start logging telemetry in just a few minutes! All you have to do to add this to your existing Unity app or game is drop the Unity Application Insights script onto a Game Object and add your Application Insights Instrumentation key, and you will be all set to record valuable user session telemetry automatically. After that, all you have to do is wait around 5 minutes for the telemetry to display in the Application Insights Usage section in Azure. You can also extend this in your own app or game to record any custom events or metrics you want to know about. But right out of the box (without any additional effort on your part) you will be able to visualize telemetry for users, sessions and user flow across the scenes of your Unity app or game. Here are just some of the visualizations already built into Application Insights in the Azure portal:
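As an illustration of extending it with custom telemetry, one of your own scripts might call the tracker like this. This is only a sketch: the ApplicationInsights singleton name and method signatures here are hypothetical, so substitute whatever the sample’s tracker script actually exposes.

```csharp
using UnityEngine;

// Sketch: record a custom event and metric when the player interacts
// with an object. "ApplicationInsights.Instance" is a placeholder for
// the tracker script provided by the Application Insights Unity sample.
public class GemPickup : MonoBehaviour
{
    int gemsRemaining = 7;

    void OnMouseDown()
    {
        // Hypothetical custom event + metric calls
        ApplicationInsights.Instance.TrackEvent("GemCollected");
        ApplicationInsights.Instance.TrackMetric("GemsRemaining", --gemsRemaining);
    }
}
```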

User Flows

Chart user flow across Unity scene changes and split by custom or interaction events.

Unity App Insights User Flow


View users and events during sessions.

Unity App Insights User sessions


Create funnels by creating step by step conditions to get conversion rates.

Unity App Insights Conversions


Review returning users over a period of time.

Unity App Insights Retention

Analytics for Mixed Reality interactions

In the Unity project there is also an MR sample showing how to set up Application Insights for recording custom interaction events and metrics in a scene. To use the sample, fork or clone the Unity Application Insights sample project, import the plugins and set up Application Insights in the Azure portal if you haven’t already. Once you’ve got that you can get up and running on HoloLens straight from the Unity Editor:

  1. To view the Mixed Reality sample in HoloLens open the scene named “Scene-MR”. (Make sure you’ve pasted your Instrumentation key into the Application Insights game object script.)
  2. Connect to remote HoloLens device using Window > XR > Holographic Emulation window. Note: Requires the Holographic Remoting Player installed and open on HoloLens to get the Remote Machine IP address.
  3. Hit Play and you will start recording interaction telemetry with the holograms.

The Application Insights MR scripts will record taps, gaze time and object proximity – when users physically “visit” a hologram by moving closer to it.
You can also create your own custom dashboard templates using Ibex Dashboard (another project I helped with), which is designed for visualizing data from Application Insights using Kusto queries.

You can add the dashboard template for MR shown above to visualize the telemetry for MR custom events and metrics. Check out the readme on github for more info about installing custom Ibex Dashboard templates.

Unity Web Sockets for Mixed Reality

Certain cloud services may offer a Web Socket streaming connection as an alternative to firing repeated REST requests or polling. To make working with REST APIs in Unity more convenient I built a REST client for Unity based on UnityWebRequest which supports abstract types for JSON / XML serialisation. But given a real-time scenario like “speech to text”, using a Web Socket client instead of REST gives the option to stream the audio data and get intermediate results back, which provides responsive feedback to users for an improved user experience.

Using Bing Speech API as an example we can see some limitations of using REST API versus the Web Socket protocol:
Bing Speech | REST | Web Socket
Audio stream duration | 15 secs | 180 secs – 10 mins
Stream audio with intermediate results | No | Yes

Also when it comes to client app development in Unity there are a couple of very useful message events you receive from the Web Socket server:

  • End-of-speech detection, so you can stop recording on the client device.
  • Phrase detection, so you can pass phrases to a natural language understanding (LUIS) model.

To use Web Sockets in Unity you can use the WebSocket-Sharp library, but this only supports the Unity Editor and the Mono target platforms. In order to use Web Sockets when targeting Windows Mixed Reality headsets you have to use Universal Windows Platform (UWP) APIs like MessageWebSocket. To make things easier I have created a common Unity Web Socket interface that uses WebSocket-Sharp inside the Editor and on Mono platforms, and MessageWebSocket when targeting the Windows Store App platform for MR headsets.

Unity Web Socket interface

API | Description
ConfigureWebSocket(url) | Configures web socket with url and optional headers
ConnectAsync() | Connect to web socket
CloseAsync() | Close web socket connection
SendAsync(data) | Send binary byte[] or UTF-8 text string with optional callback
IsOpen() | Check if web socket status is open
Url() | Return the URL being used by the web socket

Interface events

OnError(object sender, WebSocketErrorEventArgs e);
OnOpen(object sender, EventArgs e);
OnMessage(object sender, WebSocketMessageEventArgs e);
OnClose(object sender, WebSocketCloseEventArgs e);
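Put together as C#, the interface might look roughly like the following sketch. The signatures are approximations of the table and events above; the exact shapes in the sample project may differ slightly.

```csharp
using System;

// Event argument sketches matching the events listed above
public class WebSocketMessageEventArgs : EventArgs
{
    public byte[] RawData; // binary frame payload
    public string Text;    // UTF-8 text frame payload
}

public class WebSocketErrorEventArgs : EventArgs { public string Message; }
public class WebSocketCloseEventArgs : EventArgs { public ushort Code; public string Reason; }

// Common Web Socket interface implemented by both the
// WebSocket-Sharp (Editor / Mono) and MessageWebSocket (UWP) clients
public interface IWebSocket
{
    void ConfigureWebSocket(string url); // an overload could accept optional headers
    void ConnectAsync();
    void CloseAsync();
    void SendAsync(byte[] data, Action<bool> completed = null); // binary frame
    void SendAsync(string text, Action<bool> completed = null); // UTF-8 text frame
    bool IsOpen();
    string Url();

    event EventHandler<EventArgs> OnOpen;
    event EventHandler<WebSocketMessageEventArgs> OnMessage;
    event EventHandler<WebSocketErrorEventArgs> OnError;
    event EventHandler<WebSocketCloseEventArgs> OnClose;
}
```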

If you are interested in the example above, I have also prepared a Unity Web Socket demo project showing how to connect the Bing Speech service to LUIS for controlling scene game objects using natural speech commands in Mixed Reality scenarios.

Querying Application Insights for data visualisation

Ibex dashboard is an open source web app for displaying telemetry data from Application Insights. It comes with a number of sample templates including analytics dashboards for Bots. If you’re developing a bot and you want to see how your bot is performing over time then you can select the Bot Instrumentation template which requires you to enter your Application Insights App Id and App Key. Also depending on your bot you will need to add Node.js instrumentation or C# instrumentation in order to enable logging to Application Insights. Then after a couple of minutes you will start to see the data come through! The dashboard can be completely customised using generic components including charts, tables, score cards and drill-down dialogs. These elements can be used to review how your bot performs over time, monitor usage stats, message sentiment, user retention and inspect user intents.

If you are new to Application Insights one of the useful features of the Ibex dashboard is the ability to inspect an element’s Application Insights query and the formatted JSON data side by side. 

This query can be copied and played back inside your Application Insights live code editor. This is a good way to learn how the Application Insights queries work as you can step through the query by commenting out various lines with double slashes ‘//’.

Writing Azure Log Analytics queries for Ibex dashboard

The Ibex dashboard schema is composed of metadata, data sources, filters, elements and dialogs. Each data source allows you to define a query and a ‘calculated’ JavaScript function to process the query’s results for display purposes. Before learning to write Application Insights queries I was used to writing JavaScript map / reduce functions to aggregate data, so it’s all too easy to rely on previous JavaScript knowledge to process the data from a basic query. But often this JavaScript ‘reduce’ aggregation logic can be done in an Application Insights query with a lot less effort. So invest some time up front to learn the key Application Insights query concepts and it will pay off in the long run!

To help illustrate this we can look at the Application Insights query for tracking a bot-to-human hand-off during a user’s conversation session. For this scenario we built a QnA bot with the hand-off module installed. If a customer asks the QnA bot a question and no answer is found in the knowledge base, we trigger an automatic hand-off to a human. We want the dashboard to show the fastest, longest and average times a customer waits for a human agent to respond.

We can start by writing a basic query in Application Insights to get all the transcripts from the ‘customEvents’ table and ‘project’ only the information we need.
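A sketch of such a basic query (the event name and the custom dimension fields are assumptions about how the transcripts were logged in this project):

```kusto
customEvents
| where name == "Transcript"
| project timestamp,
          userId = tostring(customDimensions.userId),
          state  = toint(customDimensions.state),
          text   = tostring(customDimensions.text)
| order by timestamp asc
```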

But in this example we are not using Application Insights to aggregate the results, so we end up with a lot of results to process. Given the query above, the following code snippet shows the amount of JavaScript required.

The first ‘reduce’ block is required to group the transcripts per user Id. Then for every user we track the state change from waiting to talking to a human agent and calculate the time difference in seconds, where ‘state’ is an integer value that marks the current status of the conversation.

0 = Bot
1 = Waiting
2 = Human agent
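As a sketch (using hypothetical rows of the shape returned by the basic query), the grouping and wait-time calculation looks something like this:

```javascript
// Hypothetical transcript rows returned by the basic query
const transcripts = [
  { userId: 'user-1', state: 1, timestamp: '2018-01-01T10:00:00Z' }, // waiting
  { userId: 'user-1', state: 2, timestamp: '2018-01-01T10:00:45Z' }  // human agent
];

// First 'reduce' block: group the transcripts per user Id
const byUser = transcripts.reduce((groups, row) => {
  (groups[row.userId] = groups[row.userId] || []).push(row);
  return groups;
}, {});

// For every user, calculate the seconds between entering the waiting
// state (1) and talking to a human agent (state 2)
const waitSeconds = Object.keys(byUser).map((id) => {
  const rows = byUser[id];
  const waiting = rows.find((r) => r.state === 1);
  const human = rows.find((r) => r.state === 2);
  return (new Date(human.timestamp) - new Date(waiting.timestamp)) / 1000;
});
// waitSeconds → [45]
```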

But we can optimise the code by doing the aggregation within the Application Insights query, using the ‘summarize’ operator and ‘count’ function.
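A sketch of the aggregated query (again, the event and dimension names are assumptions; the important part is pushing the grouping into ‘summarize’):

```kusto
customEvents
| where name == "Transcript"
| extend userId = tostring(customDimensions.userId),
         state  = toint(customDimensions.state)
| where state == 1 or state == 2
| summarize stateTime = min(timestamp) by userId, state
| summarize waitTime = max(stateTime) - min(stateTime), states = count() by userId
| where states == 2    // keep only users who reached a human agent
| project userId, waitTime
```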

Notice how you can apply aggregations in multiple passes; in this case the ‘summarize’ operator and ‘count’ function are used to aggregate results twice, in conjunction with multiple ‘where’ statements that filter the results. Now the JavaScript ‘calculated’ function code can be greatly simplified:

The only thing we do is run a ‘reduce’ function to convert the time format ‘hh:mm:ss’ returned from the Application Insights query into a number of seconds for the various calculations displayed in a score card element.
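The conversion itself can be sketched as a small reduce over the time parts (assuming the query returns durations under 24 hours in ‘hh:mm:ss’ format):

```javascript
// Convert a 'hh:mm:ss' duration string into a number of seconds
function toSeconds(duration) {
  return duration.split(':')
    .reduce((total, part) => total * 60 + parseInt(part, 10), 0);
}

// e.g. toSeconds('00:01:30') → 90
```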

The final Application Insights query is available in the hand-off to human dashboard template and is included with Ibex dashboard.


Unity3d and Azure Blob Storage

Previously I’ve looked at using Azure App Services for Unity, which provided a backend for Unity applications or games using Easy Tables and Easy APIs. But what if I wanted to lift and shift heavier data such as audio files, image files, or Unity Asset Bundle binaries? For storing these types of files I would be better off using Azure Blob storage. Recently I created an Azure Blob storage demo project in Unity to show how to save and load these various asset types in Unity. One of the exciting new applications for Unity is developing VR, AR or MR experiences for HoloLens, where a backend could serve media content dynamically, whether it’s images, audio, or prefabs with models, materials and referenced scripts. When thinking of cloud gaming the tendency is to consider it in terms of end user scenarios like massive multiplayer online games. While Azure is designed to scale, it is also helpful to use during early stage development and testing. There is an opportunity to create productive cloud tools for artists, designers and developers, especially when extensive hardware testing is required in Virtual Reality, Augmented Reality or Mixed Reality development. For example, imagine being able to see and test updates on the hardware without having to rebuild the binaries in Unity or Visual Studio each time. There are many more use cases than I’ve mentioned here, like offering user generated downloadable content for extending your game or app.

I’ll be covering the load and save code snippets from the Unity and Azure Blob storage demo commentary, which you can watch to see how to save and load image textures, audio clips as .wav files, and Asset Bundles. The Unity Asset Bundle demo also includes loading Prefabs and dynamically adding them into a Unity Scene using XML or JSON data, which should give you some ideas of how you might use Blob storage in your own Unity development or end user scenario.

Setup Azure Blob Storage

Setting up Blob Storage for the Unity demo can be done quickly in just a couple of steps:

  1. Sign in to your Azure portal and create a new Storage Account.
  2. Once the Storage account is provisioned, select the add new container button which will be used for storing the blobs.
  3. Create the ‘Blob‘ type container which permits public read access for the purposes of this demo.

Audio files

Saving Unity Audio Clips into Blob Storage

For the Unity audio blob demo I created a helper script to convert a Unity Audio Clip recording to a .wav file for the purpose of saving to Azure Blob Storage.
Once the audio has been recorded in Unity I can upload the file using the PutAudioAudio method, which takes a callback function, the wav bytes, the container resource path, the filename and the file’s mime type. By the way, this method must be wrapped using StartCoroutine, which is the way Unity 5 handles asynchronous requests. Once the request is completed it will trigger the PutAudioCompleted callback function I provided, passing a response object. If the response is successful you will see the wav file blob added in your Blob Container.
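A sketch of that upload call is below. The StorageServiceClient and RestResponse types stand in for the client types from the demo project, and the container and file names are placeholders:

```csharp
using UnityEngine;

// Sketch: upload recorded wav bytes to Blob Storage via the demo's
// storage client. Types and names here mirror the description above
// but are placeholders for the actual demo project classes.
public class AudioUploader : MonoBehaviour
{
    public StorageServiceClient client; // hypothetical blob storage client from the demo

    public void Upload(byte[] wavBytes)
    {
        // Coroutine wrapper is required for Unity 5 async requests
        StartCoroutine(client.PutAudioAudio(PutAudioCompleted, wavBytes,
            "audio-container", "recording.wav", "audio/wav"));
    }

    void PutAudioCompleted(RestResponse response)
    {
        Debug.Log(response.IsError ? "Upload failed" : "Saved wav blob");
    }
}
```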

☞ Tip: Grab the Storage Explorer app for viewing all the blobs!

Loading .wav files from Blob Storage

As we used the Blob type container with public read access you can use the UnityWebRequest.GetAudioClip method to directly load the .wav file from Azure Blob Storage and handle it as a native Unity AudioClip type for playback.
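A minimal loading sketch using the Unity 5.x era networking API (the blob URL is a placeholder; later Unity versions rename Send() to SendWebRequest()):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Load a public .wav blob straight into an AudioClip and play it
public class AudioLoader : MonoBehaviour
{
    public AudioSource audioSource;

    public IEnumerator LoadWav(string url)
    {
        using (var www = UnityWebRequest.GetAudioClip(url, AudioType.WAV))
        {
            yield return www.Send();

            if (!www.isError)
            {
                audioSource.clip = DownloadHandlerAudioClip.GetContent(www);
                audioSource.Play();
            }
        }
    }
}
```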

Image files

For the Unity image blob demo I used Unity’s Application.CaptureScreenshot method to generate a png image representation of the current state of the game screen.

Saving Images into Blob Storage

The image is saved using the PutImageBlob method which is similar to the audio blob except we pass the image bytes and mime type.

Loading Image Textures from Blob Storage

As we used the Blob type container with public read access you can use the UnityWebRequest.GetTexture method to directly load the .png file from Azure Blob Storage and handle it as a native Unity Texture type for use. As I want to use the Texture in Unity UI to display as an Image I need to convert it to a sprite using my ChangeImage function.
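A sketch of loading the texture and converting it to a sprite, along the lines described above (the blob URL is a placeholder):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.UI;

// Load a public .png blob as a Texture and convert it to a Sprite
// for display in a Unity UI Image (mirrors the ChangeImage function)
public class ImageLoader : MonoBehaviour
{
    public Image targetImage;

    public IEnumerator LoadPng(string url)
    {
        using (var www = UnityWebRequest.GetTexture(url))
        {
            yield return www.Send();

            if (!www.isError)
            {
                Texture2D texture = DownloadHandlerTexture.GetContent(www);
                ChangeImage(texture);
            }
        }
    }

    void ChangeImage(Texture2D texture)
    {
        // Wrap the whole texture in a sprite with a centered pivot
        targetImage.sprite = Sprite.Create(texture,
            new Rect(0, 0, texture.width, texture.height),
            new Vector2(0.5f, 0.5f));
    }
}
```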

Unity Asset Bundles

Unity Asset Bundles provide a way to dynamically load assets into your project. This Asset Bundle demo for Blob Storage is a little more complicated than the other examples. An important thing to remember is that Asset Bundle binaries need to be built for each target platform; refer to the Unity documentation on building Asset Bundles for more info. Also make sure to review the code stripping section if you want to be able to use referenced scripts in your Prefabs when you do a build.

Building and uploading the Asset Bundles for each platform to Blob Storage

I have included the Editor scripts with the demo to build the Asset Bundle for each platform. NB: Windows 10 Store App (or HoloLens) bundles can only be built with the Windows Unity Editor at the time of writing. Building the Asset Bundles and uploading them is performed inside the Unity Editor:

  1. Select Assets > Build Asset Bundles
  2. Select Window > Upload Asset Bundles…

Loading Asset Bundles from Blob Storage
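A minimal loading sketch using the Unity 5.x networking API (the bundle URL and prefab name are placeholders):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Download an Asset Bundle blob and instantiate a prefab from it
public class BundleLoader : MonoBehaviour
{
    public IEnumerator LoadBundle(string url)
    {
        using (var www = UnityWebRequest.GetAssetBundle(url))
        {
            yield return www.Send();

            if (!www.isError)
            {
                AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(www);
                var prefab = bundle.LoadAsset<GameObject>("MyPrefab");
                Instantiate(prefab);
                bundle.Unload(false); // keep instantiated objects alive
            }
        }
    }
}
```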

If you like the Azure Storage Services library for Unity, let me know on Twitter. For any issues, feature requests or blob storage demo requests, please create an issue on github for others to learn from and collaborate.

Merging Unity scenes, prefabs and assets with git

When it comes to working as a team on the same project we are all thankful for source control. But even if you’re cool with git there are some things to be aware of when starting new source controlled Unity projects that should help to reduce the chance of nasty merge conflicts.

Solo Scenes

Something to generally avoid in Unity is working on the same scene. That’s why the question of how to merge a scene when a team of developers is working on it is a fairly hot topic. One basic strategy is for each person to clone the main scene and work on their own version, then nominate a scene master to combine the various elements into the main scene to avoid conflicts. But because this is quite a restricted way of working, Unity 5 introduced Smart Merge and the UnityYAMLMerge tool that can merge scenes and prefabs semantically.

Asset Serialization using “Force Text”

By default Unity will save scenes and prefabs as binary files. But there is an option to force Unity to save scenes as YAML text based files instead. This setting can be found under the Edit > Project Settings > Editor menu and then under Asset Serialization Mode choose Force Text.


But as this is not the default setting, make sure when applying this mode that everyone else on the team is happy to switch.
If you select “Force Text” to save files in YAML format you should add a .gitattributes file that tells git to treat *.unity, *.prefab and *.asset files as binary, to ensure git doesn’t try to merge scenes automatically. Paste the following into the .gitattributes file inside your Unity project:
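One possible rule set along these lines (the -merge attribute stops git auto-merging these files while still allowing text diffs):

```gitattributes
*.unity -merge
*.prefab -merge
*.asset -merge
```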

Another benefit of saving in text file mode is that you can see the changes in source control commits.

Setting up UnityYAMLMerge with Git

You can access the UnityYAMLMerge tool from command line and also hook it up with version control software. Paste the following into the .gitconfig file inside your Unity project:

UnityYAMLMerge (Windows):
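A .gitconfig entry along the lines of the Unity Smart Merge documentation; adjust the path to match your Unity install location:

```ini
[merge]
	tool = unityyamlmerge

[mergetool "unityyamlmerge"]
	trustExitCode = false
	cmd = 'C:\\Program Files\\Unity\\Editor\\Data\\Tools\\UnityYAMLMerge.exe' merge -p "$BASE" "$REMOTE" "$LOCAL" "$MERGED"
```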

UnityYAMLMerge (Mac):
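The Mac equivalent, assuming the default Unity install location:

```ini
[merge]
	tool = unityyamlmerge

[mergetool "unityyamlmerge"]
	trustExitCode = false
	cmd = '/Applications/Unity/Unity.app/Contents/Tools/UnityYAMLMerge' merge -p "$BASE" "$REMOTE" "$LOCAL" "$MERGED"
```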

GitMerge for Unity

Worth a mention is the free GitMerge tool for Unity for merging scenes and prefabs inside the Unity Editor, but unfortunately this editor plugin is currently broken in Unity 5. Once you start merging and are in a git merge state you can resolve the conflicts inside the Unity app using the GitMerge Window for Unity, which is opened via menu Window > GitMerge.

Merging Unity C# script conflicts with P4Merge app

For merging conflicts I prefer to use the free P4Merge visual merge tool which is available for Mac and Windows. Here’s how to hook up the P4Merge app as the global git merge tool when issuing the git mergetool command:

P4Merge (Windows):
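Since git has built-in support for p4merge as a mergetool, pointing it at the executable is enough (assuming the default Perforce install location):

```ini
[merge]
	tool = p4merge

[mergetool "p4merge"]
	path = C:/Program Files/Perforce/p4merge.exe
```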

P4Merge (Mac):
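The Mac equivalent, assuming p4merge was installed to the Applications folder:

```ini
[merge]
	tool = p4merge

[mergetool "p4merge"]
	path = /Applications/p4merge.app/Contents/Resources/launchp4merge
```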

Setup a .gitignore file for Unity projects

First up, there are certain Unity folders and files you don’t want to include in the repo. Only ‘Assets’ and ‘ProjectSettings’ need to be included; other Unity generated folders like ‘Library’, ‘obj’ and ‘Temp’ should be added to the .gitignore file. Or you can just copy a boilerplate Unity .gitignore file. I also suggest ignoring generated files like OS and source control temp files:
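A starting point along these lines:

```gitignore
# Unity generated folders
[Ll]ibrary/
[Tt]emp/
[Oo]bj/
[Bb]uild/

# OS generated files
.DS_Store
Thumbs.db

# Source control temp files
*.orig
```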

Unfortunately I made the over-zealous mistake of adding all *.meta files to the .gitignore file. At first this seemed like a good idea, until the repo gets cloned and you end up with broken script and resource links in the Unity Editor scene. The Unity source control documentation mentions that these .meta files should be added to source control. However, I found that it’s only the meta files associated with resource files and scripts linked to a GameObject in the Unity Editor that are required. By using an exclusion rule in .gitignore I can limit it so the only .meta files saved are those within the Unity special folders like ‘Prefabs’, ‘Resources’ and ‘Scenes’, as well as a ‘Scripts’ folder. So if you wish to limit the meta files, just add the following rules to the .gitignore:
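One way to express those rules (ignore all .meta files, then re-include the ones under the linked folders):

```gitignore
# Ignore all .meta files...
*.meta

# ...except those under folders containing linked resources and scripts
!Assets/Prefabs/**/*.meta
!Assets/Resources/**/*.meta
!Assets/Scenes/**/*.meta
!Assets/Scripts/**/*.meta
```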

For example, if I import the Azure AppServices library for Unity by copying it into the Assets/AppServices directory, no meta files will be pushed in commits for this folder as it’s outside the Assets/Scripts folder. But what if I use a library that will be linked with GameObjects, like TSTableView for example, which attaches to a Unity UI Scroll View? Either I can drop the TSTableView folder inside the Assets/Scripts directory, or, if you prefer to keep third party scripts outside as I do, you also need to add the Assets/TSTableView directory to the list of exceptions in the .gitignore file:
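Following the same pattern as the rules above, that is one extra exception:

```gitignore
!Assets/TSTableView/**/*.meta
```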

If you adopt this convention, just be aware that every time you add third party MonoBehaviour script libraries outside the Assets/Scripts folder, these directories will need to be added as .gitignore exceptions to save the associated .meta files.

Swift JSON parsing for iOS development

Recently I started a new iOS Swift project and spent way more time than I would like trying to find a JSON parser that could handle the various JSON data models I was working with. In this post I will document some real code samples that should prove useful for other iOS developers looking to get off to a head start with data modelling in Swift.

The search for a Swift JSON parser…

Handling JSON is a very common task in modern app development, whether it’s consuming a REST service API, loading a JSON file or reading document objects from a database. For Windows C# apps Newtonsoft JSON is the popular choice, and similarly for Java on Android there is GSON. But what library to use for iOS apps? Previously I had used libraries like JSONModel to parse JSON data into native objects and it worked pretty well. But the iOS developer landscape has changed with the shift from Objective-C to Swift, so I wanted to find a Swift based framework. There are a number of open source Swift JSON parsers, but the ones I tried resulted in code mountains just to parse some formats of JSON. This felt like a fail compared to the elegant object models of Newtonsoft or GSON. I was surprised how hard it was to pinpoint one Swift library that could satisfy all my parsing needs. But with Argo I feel I’ve discovered the golden JSON parsing library for iOS Swift development.

Getting on board with Argo

I’m a long time user of CocoaPods for Xcode source control projects as it makes it easier to avoid jamming up a repo with binaries. However, the precompiled versions on CocoaPods don’t always provide the latest version available on GitHub. This is where Carthage comes in, as you can specifically request a tag version or branch on GitHub. Carthage can be quickly installed using Homebrew with brew install carthage, as mentioned in the installing Carthage docs. To set up, create a new text file and save it as ‘Cartfile’ inside your Xcode project folder. (In this case I’m requesting a specific version of Argo and Curry for use with Swift 2)
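The Cartfile would look something like this; the version tags shown are illustrative, so pin whichever tags support your Swift version:

```
github "thoughtbot/Argo" ~> 2.2
github "thoughtbot/Curry" ~> 2.3
```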

Once you have installed Carthage and saved a ‘Cartfile’ then you need to build the frameworks.

  1. In Terminal navigate to the project folder and run carthage update to build frameworks for all platforms. NB: For packages that can only be built for a single platform use carthage build --platform iOS
  2. Drop built ‘*.framework’ folder into Xcode project
  3. Add Build Phases > Run Script carthage copy-frameworks and add Input Files path to ‘*.framework’

JSON data modelling with Argo

Argo decodes standard property types (String, Int, UInt, Int64, UInt64, Double, Float, Bool) as well as arrays and optional properties. You can decode a nested object or an array of nested objects that conform to the ‘Decodable’ protocol. In fact you can even do inception – using the same struct within itself as shown below. One thing that might require explanation is Argo’s sugar syntax. The summary of the sugar syntax is this:

  • <^> syntax pulls the first property, and <*> pulls subsequent properties.
  • <| syntax relates to a property.
  • <|? syntax relates to an optional property.
  • <|| syntax relates to an array of 'decodable' objects.
  • <||? syntax relates to an array of optional 'decodable' objects.

What about decoding JSON values into native types like NSURL and NSDate?

It can be advantageous to parse URL and date values as native types instead of String types. To get this to work with Argo you need to make a parser which wraps NSURL and NSDate in the 'Decoded' type. But first I made a Uri helper to encode url strings as NSURL and a Date helper to convert a date string (of a known format) to NSDate.

The Parser helper returns objects wrapped in Decoded type:

Example model with NSURL and NSDate using the Parser helper (note the extra brackets):

Three things to avoid in your JSON models for smoother sailing with Argo

  1. Two-dimensional arrays (arrays within an array) aren't handled out of the box. There are multi-dimensional array workarounds, but they can cause compiler meltdown if your model is particularly complex. Better to avoid this complexity by flattening arrays into a single array or using nested property arrays.
  2. Best to limit an object model to no more than 10 properties. There are limits to how many things can be curried with Argo before the compiler gives up. Try to use nested objects to group things together, but if that is not possible there are techniques to deal with complex expressions.
  3. Arrays of mixed objects (dynamic types). Argo can be made to decode an array of different types, but it will increase complexity as you will have to use subclasses instead of structs.

How to load JSON file within iOS app bundle in Swift

Often the first thing I like to do is to load a JSON file to configure my app. For example you might have various JSON config files for localhost, staging and production settings.

The data model using Argo & Curry would look like this in Swift:
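
A minimal sketch of such a model (the property names are illustrative, assuming the Argo and Curry frameworks):

```swift
import Argo
import Curry

// Hypothetical config model; keys mirror the JSON config file.
struct ConfigModel {
    let host: String
    let port: Int
}

extension ConfigModel: Decodable {
    static func decode(json: JSON) -> Decoded<ConfigModel> {
        return curry(ConfigModel.init)
            <^> json <| "host"
            <*> json <| "port"
    }
}
```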

To load the JSON file within the app bundle I use a file helper:
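
A file helper along those lines, using the Swift 2 era Foundation API (the helper name is illustrative):

```swift
import Foundation

// Hypothetical file helper: loads and deserializes a JSON file from the app bundle.
struct FileHelper {
    static func loadJSON(filename: String) -> AnyObject? {
        guard let path = NSBundle.mainBundle().pathForResource(filename, ofType: "json"),
              data = NSData(contentsOfFile: path) else {
            return nil
        }
        return try? NSJSONSerialization.JSONObjectWithData(data, options: [])
    }
}
```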

The loaded JSON can be parsed into the 'ConfigModel' using Argo's decode method.

While this is fine for converting one type of object, what if you have multiple data models? You could quickly end up with a lot of repetitive code. One of the powerful things about Swift 2 is its support for abstract types. Argo needs a little help to ensure the abstract type conforms to the ‘Decodable’ protocol, so there is slightly more boilerplate in this case, but it should help keep things DRY.

The JSON config file can be loaded in AppDelegate in the 'didFinishLaunchingWithOptions' method:

Parsing JSON response from REST service

I also needed to parse various JSON results provided via a REST service API. To handle the REST requests here I'll be using the Alamofire library for Swift. Alamofire can also be added to the Cartfile:
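
The Cartfile entries might look like this (pin versions as appropriate):

```
github "Alamofire/Alamofire"
github "thoughtbot/Argo"
github "thoughtbot/Curry"
```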

Below is an example snippet taken from a login POST request. When using Alamofire the JSON data is available as response.result.value which can be parsed with the Argo decode method.
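
A sketch of such a request, using the Alamofire 3 era API (the endpoint, parameters and ‘UserModel’ type are hypothetical):

```swift
import Alamofire
import Argo

// Hypothetical login POST; the JSON response is decoded with Argo.
Alamofire.request(.POST, "https://example.com/api/login",
                  parameters: ["username": username, "password": password])
    .responseJSON { response in
        if let value = response.result.value,
               user: UserModel = decode(value) {
            print("Logged in: \(user)")
        } else {
            print("Parse error")  // simple detection: it either decodes or it doesn't
        }
    }
```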

One thing to point out: I have used very simple parse error detection here – it either decodes or it doesn't, and there is no indication of what went wrong during the decode process. With smaller data models this form of indication is perfectly adequate. But when you are working with complex data models this type of error reporting is not granular enough to pinpoint the exact problem if you get a parse error. Fortunately Argo provides a way to parse with failure reporting by using a ‘Decoded’ type.
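
Switching on a ‘Decoded’ result surfaces the failure reason; a sketch with the hypothetical ‘ConfigModel’:

```swift
import Argo

// Decode into Decoded<T> instead of an optional to keep the error information.
let decoded: Decoded<ConfigModel> = decode(json)
switch decoded {
case .Success(let config):
    print("Parsed: \(config)")
case .Failure(let error):
    // e.g. a MissingKey or TypeMismatch describing the offending value
    print("Parse error: \(error)")
}
```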

I found this an absolutely invaluable technique for debugging issues with my complex models, especially as models are pretty verbose and it's always hard to spot that one string mistake.

What's next…

What about storing loaded data for offline use? JSON documents can be stored with revisions using a Couchbase Lite database. The problem here is Argo only handles decoding, but the native objects will need to be encoded back into JSON for use with Couchbase. This is where Ogra (Argo in reverse) comes in. The only thing is you will need to extend the data object with an encode method. If you found this post useful, or would be interested to see some Ogra to Couchbase examples, just fire me a tweet @deadlyfingers.

Creating content with Web Components

Many web projects rely on a CMS of some description. The system itself is not important, but rather the content it helps to create. The primary function of a CMS is to enable the creation of content – it should empower content creation. If a new project requires a CMS the question that would tend to spring into a developer’s mind is – can I use an existing CMS already out there, or do I need to build a CMS from scratch for this project? But before that can be answered, perhaps some simple questions need to be asked first.

Asking the simple questions…

Content Management Systems are designed to make it easier to create and publish content. With so many open source systems available there’s a good chance you can find something to do the job you need. Often, where additional functionality is required, most systems can be extended with some sort of plugin to add that ‘must have’ feature. So why would you ever need to build your own CMS from scratch? This decision should not hang solely upon the application’s technical requirements; rather it depends on who will be using it – we need to ask ourselves who will be the one creating the content? It sounds like a simple question, perhaps even an obvious one, but it merits deep thought and careful design decisions. If it is a non-technical audience then displaying a bunch of features the user doesn’t need is distracting, in the worst case intimidating, and ultimately leads to a poor user experience. What if you could design something from scratch so it could be tailored exactly to fit the user’s requirements? Imagine if the UI contained only the functions needed, without extraneous menu options or clutter, and was designed to maximise ease of use and content creation.

Starting from scratch

Recently I was working on the ‘Badge Builder’ project which required a CMS to author quiz content. But rather than manipulate some existing CMS or plugin that might roughly fit the use case we wondered if we could design and build our own bespoke CMS components during a one week hack. At the very outset of the project we wanted to build a system that would be easy to use and quick to create content regardless of the technical abilities of the user.

Badge Builder

The main problem with building all the CMS components from scratch would be the time required – we had only three weeks. However, there are a number of things that I feel made the most of the development time we had.

  • Web Components

    By leveraging Web Components we could make our own custom HTML elements for each quiz and content element. Common behaviours could also be shared across elements.

  • Polymer

    During our one week hack the Polymer Starter Kit was a good kick-start and saved time by setting up a stack of things like Node and Bower dependencies. Polymer provides a nice UI kit for web apps whose elements can be imported separately for use. The PSK boilerplate is now available through the Polymer CLI.

  • Sass and Foundation grid

    Because nobody likes working with thousands of lines of CSS, Sass can reduce the physical line count and can easily be split into separate files, which makes it easier to manage in source-controlled projects. Sass also makes it easy to import the Foundation grid for responsive design.

  • Live reload of server and client

    A combination of Nodemon and BrowserSync allowed us to see live updates of all changes made on server and client side. This combo is essential to fine tune the interface and user experience and is my personal ‘must have’ for designing and developing a web app project.

  • Document database

    Saving content as a JSON object allowed greater freedom developing components on client side.

Polymer Web Components

Developing Web Components for each quiz element and content element felt very intuitive. A quiz could be built from a combination of individual quiz and content components.

Quiz components:

  • Single choice

    Select the correct answer from a number of options

  • Multiple choice

    Select one or more answers that apply from a number of options

  • Ordered list

    Move options into their correct order using drag and drop

  • Groups

    Move options into their correct groups using drag and drop

  • Keywords

    Type keywords to answer requirements

  • Comments

    Type a number of words to answer

Content components:

  • HTML

    HTML formatted content

  • Embedded media

    Embedded video player using iframe

  • Link

    External url

  • Section

    Split quiz into sections

Reusable elements

To create reusable Web Components you can use the Polymer Seed Element which sets up a test, demo and documentation page. But rather than have the overhead of managing and publishing multiple custom elements during development, it was faster to have the custom elements bundled with the project – the idea being that once we had finished the project we could extract and publish them as separate elements. (One ‘gotcha’ to be aware of is that custom element names need to be hyphenated.)

All the Web Components for the Badge Builder needed to operate on two different views – the editor (CMS) screen and the interactive viewer (quiz) screen.

Badge Builder Editor (CMS)


Badge Builder Viewer (quiz)


For the editor we wanted the quiz elements to be pretty WYSIWYG so for the most part the same element was used for the editor and viewer. The Polymer dom-if template was a good way to render the parts unique to each view in this case.

Displaying dynamic content using Web Components

To render the dynamic components to the page an empty placeholder was used.

The quiz content was loaded with Polymer’s iron-ajax element and the array of content was parsed in the response handler using a switch statement to check against specific element types.

Most elements are unique and are handled separately, apart from the default case, which handles elements that share exactly the same object properties. In this case the element type is passed to the function, which creates the element and sets the properties using the document.createElement method. (The other option is to define a custom constructor, but it’s not necessary.)
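
The switch-based dispatch can be sketched framework-free; the element names here are hypothetical stand-ins for the Badge Builder components (remember custom element names must be hyphenated):

```javascript
// Map a saved content item to an element descriptor (tag name + properties).
function createDescriptor(item) {
  switch (item.type) {
    case 'quiz-single-choice':
      // Unique element handled separately
      return { tag: item.type, props: { question: item.question, options: item.options, answer: item.answer } };
    case 'content-embed':
      return { tag: item.type, props: { src: item.src } };
    default: {
      // Elements sharing exactly the same object properties fall through to the
      // default case: pass the type through (as with document.createElement)
      // and copy the remaining properties across.
      const { type, ...props } = item;
      return { tag: type, props };
    }
  }
}
```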

Once the element has been created and its properties set, it still needs to be added to the DOM. This is handled with the appendChild(element) JavaScript method. Notice that we can use Polymer’s ‘$’ selector to append children to our div tag with id="components". Because the elements are added dynamically in JavaScript, and are therefore manipulating the DOM, it is necessary to wrap the selector using the Polymer DOM API.

The add element method was used when loading saved content, but also when adding new elements to the page. One usability tweak is to have the page scroll down to show a newly added component. The problem with scrolling down here is that the height of the new element will not be known until the DOM has updated, so we need to add a listener for the dom-change event. Then we can scroll down to see the element we have added.

Saving dynamic content using Web Components

To save the dynamic content for each element I needed to be able to get the content as JSON. A nice way to handle this for all components is to use a shared behaviour. This holds the _id property assigned by the database and also assigns the element’s type using the built-in this.localName property.

Finally, when changes need to be saved it’s just a case of returning a list of all our custom elements and grabbing the data as JSON using each element’s getData behaviour. This data array can then be posted using Polymer’s iron-ajax element and saved to the database.
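
The gathering step can be sketched as a plain map over elements exposing a getData method (in the Polymer app the element list came from querying the component container; here it is passed in directly):

```javascript
// Gather JSON data from every element that exposes a getData() behaviour.
function collectData(elements) {
  return elements
    .filter(function (el) { return typeof el.getData === 'function'; })
    .map(function (el) { return el.getData(); });
}
```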

Azure App Services for Unity3D

Azure Mobile Services will be migrated to App Services on Sept 1st 2016. To prepare for this migration I’ve renamed and updated the open source Mobile Service Unity3d projects to support Azure App Service going forward.

Using Azure App Services to create highscores leaderboard for Unity

To demonstrate the Azure App Service I have created a sample Highscores demo for Unity to insert, update and query a user’s highscores. But to run the project in the Unity Editor you will need to hook it up to an Azure App Service. With an Azure account, simply create a new App Service in the Azure portal (for this demo I am using an App Service with a JavaScript backend). In a couple of minutes the Azure App Service should be up and running and ready to configure.

  1. Open Settings, search for Easy Tables and add a ‘Highscores’ table.

  2. Set all table permissions to allow anonymous access to start with.

  3. Manage the schema to add a Number column for ‘score’ and a String column for ‘userId’.

  4. Additionally, if you want to store user data or game scores you can enable authentication using Facebook, Twitter, Microsoft account or Google account. If you want to use the Facebook login in this demo you will need to create a Facebook app. Once you’ve created the Facebook app add the Facebook App ID and Secret to your Azure App Service Facebook Authentication settings.


    Then configure the Facebook App Basic and Advanced settings with your Azure App Service URL:


    If in doubt how to configure these settings check out the Azure App Service documentation.

  5. Once authentication is setup the ‘Highscores’ table script can be edited to save ‘userId’ information.


  6. In addition to table scripts you can also create custom APIs. In Settings, search for Easy APIs and add an example ‘hello’ API.


Once you have setup Azure App Service you can update the Unity scene with your App Service ‘https’ url and hit run!

Responsive Design from problem to production

Responsive Design is often seen in terms of technical execution or production. In this article I will describe what it means to design responsively as a design process from problem to production.



The need for responsive design

The idea of designing multiple versions of a website optimized for mobile and desktop might sound like a good idea, but a separate design approach will not scale easily as “the number of unique screen resolutions being used to access web sites is increasingly varied and growing at a rapid pace” [1]. I only have to look back at the last three phones I’ve purchased – each one has a larger physical display than the one before. (Admittedly this was not always by choice, as the new models I wanted were not made available in the smaller form factor, due to the “bigger is better” [2] style trend of the phone industry.) As a result, my phone displays more pixels than my old 20” desktop screen – something even easier to comprehend with the release of phones with 4K displays. So if I end up on some mobile ‘optimized’ site with reduced functionality or content I will always request the full-fat desktop experience. I feel the very fact that there is a button to request the ‘Desktop version’ of a website on a mobile device is like an admission of design failure.

Responsive design is the ability of a website to display the same content across all screen sizes and resolutions, often by using a resizable layout or grid (thereby removing the need for the user to choose which version of the site they want to see). Ethan Marcotte, who first described ‘Responsive Design’ as the way forward, proposed that “rather than tailoring disconnected designs to each of an ever-increasing number of web devices, we can treat them as facets of the same experience” [3]. Since then there have been plenty of articles describing the technical characteristics of responsive web design and why it is recommended; ultimately our goal is to create the best experience for users, but responsive design will benefit SEO for mobile searches as well.


What makes good design?

There are many design apps and developer tools available, but some tools and techniques are better suited to responsive web design. Before I launch into responsive design, though, I’d like to consider the design aspect. If I were to share one truth from my time learning graphic design and all my years of experience as a designer, it would be this: good design needs a good problem. As a designer I always have the desire to produce an award-winning or world-class design for every project. Reproducing success is really hard, and that’s why designers develop some form of working habit or pattern to try to repeat successful outcomes. This is often explained as the ‘Design Process’. I don’t wish to cover every variation of the design process but I feel it’s good practice to review the general principles:

  1. Research / investigation
  2. Design brief
  3. Generation of ideas
  4. Synthesis
  5. Final design and production

The word ‘design’ implies the need to solve a particular problem. Therefore, it is important to start the design process with knowledge and thought. Sometimes it’s all too easy to think we know enough about what the end product should look like, and so we fail to investigate or question the motivation for the design. When the problem isn’t immediately obvious it will take a certain amount of research into the subject to be able to ask the right questions and uncover the problem which the design will aim to solve. When the problem is known, we can describe the solution which will solve it – this forms the design brief. When it comes to generating ideas it may be helpful to have a brainstorming session first. The best ideas (traditionally three) are identified as concepts for further development and design synthesis. Finally, the strongest concept is selected as the solution for final design and production.

I encourage designers to define their own design process (or pattern for success). When Steve Jobs asked designer Paul Rand to generate some logo ideas for them to look at, Rand declined, suggesting that he would only present them with the solution to their problem. I admire Rand’s thinking – I feel that when I have to ask a client which options they prefer, it’s usually because I haven’t found the right solution yet.

Responsive design is the recognised technical solution to the diverse screen size problem, but we must always consider the design aspect of a project. I must constantly challenge myself to find a good problem to solve. Without a good problem to solve I will just be pushing pixels and not fulfilling my purpose as a designer.

Responsive Design for designers

If you are a designer for print it helps to have an understanding of the print production process. Similarly, with responsive web design it is important to know how responsive developer tools operate. When it comes to design for print designers use grids and guides for page layout. This grid layout mechanism is similar for web developers except the grid will dynamically resize depending on window or screen size. The most popular grids for responsive design are Bootstrap and Foundation so even if you don’t like to get your hands dirty with code, it is something that anyone can play with and see how design elements (or columns) will react as the dynamic grid changes with different widths. By default, both grid systems use a 12 column grid but you can also customize the number of columns with Bootstrap and with Foundation using Sass. Designers who have a grasp of how the dynamic grid operates on the production or development side will be in a better position to create ‘responsive-ready’ designs.

Design tools

When I started designing for the web there was only the desktop browser to think about, so the basic approach of designing for the lowest common resolution worked well. Initially I used Photoshop for web designs with pixel-perfect layouts. But as consumer monitors became capable of displaying greater resolutions it became possible to reproduce richer layouts influenced by print design. Illustrator became a superior tool for web design as it offered advanced control of the grids and guides originally used for print design. Illustrator was also vector based, and that made it easier to stretch out graphics as screens got bigger. Because of this I feel vector-based tools are vastly better equipped for responsive design work than pixel-based design tools.

While Illustrator is a great tool for seasoned print design professionals, some digital designers might prefer something a little lighter and easier to use like Sketch or the new Experience Design app. However, the problem with all these design tools is that none can produce a design with responsive information. Even the new digital design apps still feel like design-for-print tools, stuck with static canvas layouts and limited bitmap resizing that fails to scale in a way that mimics the production process (i.e. CSS background properties). This lack of professional tools capable of responsive design means the designer has to do extra work. For responsive designs I will design at least two size layouts for each page: a portrait aspect to represent the mobile view, and a landscape aspect to represent desktop or tablet. As long as a designer understands how responsive grids or dynamic columns work, these designs should be easily fused together during the development or production stage.

Responsive Design for developers

There is an abundance of tools for developing responsive websites. But just as I said it was important for designers to think about development and production, I also feel responsive web developers should be mindful of the design side. Developers need to be aware of the current problem that professional design tools don’t contain responsive information, which means they will need to work closely with designers to figure out how to merge separate designs into one single responsive design. Responsive web developers will need to be familiar with the design grid so that they can turn page designs into a single dynamic layout of HTML and CSS.

The language of responsive web design

CSS is the design language of the web. But CSS is rather an unwieldy art that does not sit comfortably in either the designer or developer camp. I find CSS must be constantly tweaked along with the HTML elements to achieve the required layout, especially with the added complication of responsive design media queries. It is therefore preferable to use web technologies that are fast to deploy and allow live refreshing when developing responsive designs.

Responsive web kit

Just like I encouraged designers to make their own design process, I also encourage developers to use or discover the web technologies that will work best for producing the website or web app.

Unsurprisingly it’s not possible to cover every web technology in one article so I will explain the reasons behind the web technologies that I’ve been consistently using for my recent projects. Plus, I really want to share my favourite client-side web design / developer stack because if you are passionate about design I think you will like it too!

Project dependencies

Responsive web projects tend to use a number of third party dependencies, and package managers can be used to help install and version manage them all. Bower is awesome for managing project dependencies like Bootstrap or jQuery, while NPM is great for installing testing and build tools like Gulp and BrowserSync. Package management is also advantageous for source-controlled projects as it can easily be set up to prevent committing a shed load of third party code into your repo. Following this procedure keeps contributor commits clean and makes it easier to inspect changes or code review.

Design as you go

A painter will add strokes of paint to his canvas, while a sculptor will chip bits off a rock to expose an image. Designing websites is a progressive art that is both additive like a painter’s and subtractive like a sculptor’s. Can you imagine asking a painter or sculptor to work blindfolded? As a designer I can’t produce my best work unless I have real-time feedback on my adjustments. I need to see and interact with my design in real time and across multiple devices. That’s why BrowserSync is the single most important responsive design tool for client-side web development. ‘Live reload’ or ‘live preview’ is important for web design, and with responsive web design it’s mission critical to test across all the desktop and touch screens!

A UI kit for web apps

Ever wanted to replicate the performance of the native UITableView on iOS or ListView on Android? Polymer’s ‘iron-list’ and ‘iron-image’ elements can be used to create ‘buttery-smooth’ recyclable scrolling lists at 60fps. Polymer is built on top of Web Components, which allows you to create your own reusable elements, but Polymer also provides a ‘Material Design’ UI kit suited to responsive web app development. I also find the template and binding model lends itself well to creating responsive designs. Polymer is well suited to developing SPAs (single page applications) and can support client-side routing.

Smarter CSS

Design should be an enjoyable art, but can you imagine what a lot of CSS is like to manage! All these responsive elements, layout grids, images and glyphs add lines and lines of CSS. The sheer amount of CSS required by a responsive design project could very easily and quickly become unmanageable. Sass (or SCSS) is just like writing CSS, except you can do it with less code – and fewer lines of code are easier to manage. Sass variables enable designers to create a theme that easily defines or tweaks colours, type styles and spacing. Another powerful feature is ‘mixins’, which can be used to reuse common styles, define responsive media queries, generate image tiles, build font faces and include browser prefixes. Sass will reduce the number of lines of CSS you need to manage.
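
A small sketch of those features: theme variables plus a mixin that emits browser prefixes (the names are illustrative):

```scss
// Theme variables
$brand-color: #3f51b5;
$base-spacing: 16px;

// Mixin that emits the usual browser prefixes for a property
@mixin prefix($property, $value) {
  -webkit-#{$property}: $value;
  -moz-#{$property}: $value;
  -ms-#{$property}: $value;
  #{$property}: $value;
}

.card {
  color: $brand-color;
  padding: $base-spacing;
  @include prefix(user-select, none);
}
```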

Responsive Grid

When it comes to responsive web design the use of a popular grid system like Bootstrap is a good place to start. I do feel however the default four tier grid system (xs, sm, md, lg) of Bootstrap 3 doesn’t give me enough granular control to deal with phone vs phablet sized devices. So I use the Bootstrap grid as a starting point and usually add extra media queries for smaller mobile devices. Bootstrap 4 promises to address this issue and will deliver a more comprehensive five tier grid system (xs, sm, md, lg, xl) for responsive design amongst other differences.

HD is the new standard

Retina displays are everywhere these days! If you walked into a phone shop today, I reckon it would be hard to find a phone without an HD display. The new HTML5 picture element allows developers to specify higher resolution images so graphics display more sharply. But I still prefer to use CSS media queries to handle ‘Retina’ (@2x) and ‘Retina HD’ (@3x) images.

I find the CSS method gives more control over scaling, cropping and positioning which can be advantageous for responsive designers. With the CSS background image methods I can also use an image sprite technique to load in a texture map (or texture atlas) of tiled images and this improves page load times as there will be less http requests.
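
A sketch of the CSS approach: a sprite tile positioned in CSS pixels, with the @2x image swapped in on high-density displays (file names and sizes are illustrative):

```css
/* Sprite tile positioned and scaled in CSS pixels */
.icon-star {
  width: 40px;
  height: 40px;
  background-image: url('sprite.png');
  background-size: 200px 100px;   /* sprite sheet size in CSS pixels */
  background-position: -40px 0;   /* select a tile from the sprite */
}

/* Swap in the @2x sprite on ‘Retina’ displays */
@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
  .icon-star {
    background-image: url('sprite@2x.png');
  }
}
```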

One final thing though: high definition images are much larger in file size, so make sure to compress all bitmaps! ImageOptim is a great image compression tool I use on Mac, though they also recommend File Optimizer for Windows.

Vector glyphs

With responsive design there is always a need to scale graphics. Vector graphics are resolution independent and can be scaled to any size, and that makes them a great asset. The good news is that most modern browsers support SVG. But if you have a set of vector icons that are monochrome, then a neater way to bring these to the web is by exporting them all as a custom font. Icomoon is a free online tool to create custom font glyphs. Oh, and because it’s seen as a font you can take advantage of CSS font sizing and colour properties.

Automate all the things

Gulp makes it easy to develop with full source, or build a minified version for production. Gulp also watches for source code changes and works in conjunction with BrowserSync. So whether you fiddle with HTML, edit a line of script, tweak a style, or modify an image or asset, it can notify BrowserSync to reload. Gulp can even compile Sass into normal CSS for reloading live design changes.


Building web apps with Cordova

Cordova tools make it easy to package your web app as a hybrid app for distribution on multiple app stores. But the big challenge for web app developers is creating a user experience that will look and feel as good as a native app.

App-ify web view behaviours

The web views provided by iOS and Android come with a number of behaviours designed to improve the user experience of websites. In a website context this works well, but when it comes to responsively designed web apps these web view behaviours result in undesirable effects as far as an app experience is concerned:

  1. Page bounce or spring – pages have a bounce or spring effect, but apps don’t bounce.
  2. Double tap zoom – pages allow double tap regional zooming, but apps don’t zoom.
  3. 300ms tap delay – page interactions are artificially slower to accommodate the double tap zoom gesture, but apps don’t exhibit unresponsiveness.
  4. Long tap inline magnification – pages allow prolonged selection for inline magnification, but apps don’t show inline magnification everywhere.
  5. Global user selection – page selection is everywhere, but apps only provide selection where user input is desired.

Fortunately, most of these web view behaviours can be tamed so a hybrid app can behave in a native app manner that a user would expect.

  1. Page bounce or spring behaviour can be disabled by setting Cordova’s ‘DisallowOverscroll’ preference to ‘true’.
  2. Double tap zoom behaviour can be disabled by setting Cordova’s ‘EnableViewportScale’ preference to ‘true’ and setting user-scalable=no on the HTML5 viewport meta tag to disable user scaling.
  3. The 300ms tap delay is fixable in Chrome by setting width=device-width on the HTML5 viewport meta tag.
  4. Long tap inline magnification can be disabled by setting Cordova’s ‘Suppresses3DTouchGesture’ preference to ‘true’.
  5. Global user selection can be disabled with the CSS ‘user-select’ property set to ‘none’ (including the usual browser prefixes). With iOS, ‘-webkit-touch-callout’ also needs to be set to ‘none’ to disable the touch callout.

    NB: As this turns off all user selection, you might need certain elements or form inputs to allow user selection. In this case certain exceptions can be added using the :not() CSS selector.
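
    The selection rules might be sketched like this, using :not() to keep selection on form inputs:

    ```css
    /* Disable global selection, but keep it where user input is expected */
    *:not(input):not(textarea) {
      -webkit-user-select: none;   /* Safari */
      -moz-user-select: none;
      -ms-user-select: none;
      user-select: none;
      -webkit-touch-callout: none; /* iOS: also disable the long-press callout */
    }
    ```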

Turbo web view performance for iOS

While there are quite a number of things you can do to improve web page performance, one of the recent hybrid app performance headlines for iOS is the availability of WKWebView, which provides faster performance than the older UIWebView. Cordova supports WKWebView, but you need to install the WKWebView Cordova plugin and set the ‘CordovaWebViewEngine’ preference to ‘CDVWKWebViewEngine’ in Cordova’s ‘config.xml’ file.
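
After installing the plugin, the preference in ‘config.xml’ might look like this (treat exact names as per the plugin’s own documentation):

```xml
<!-- config.xml: use WKWebView instead of UIWebView on iOS -->
<preference name="CordovaWebViewEngine" value="CDVWKWebViewEngine" />
```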

A couple of time saving Cordova scripts


Responsive web design for designers

  • Understanding the dynamic grid to design responsively
  • Separate designs that lend themselves to a single responsive design
  • The advantages of vector-based design tools

Responsive web design for developers

  • Understanding the design grid to merge separate designs
  • Responsive design with multiple device testing and live reloading
  • Developer web kit for responsive design

Production of hybrid app

  • Removing the unwanted web view behaviours for responsive Cordova hybrid apps
  • Turn on turbo performance of Cordova hybrid apps for iOS
  • Scripts to help production of Cordova hybrid apps across platforms


  1. Jason Sperling (2013) The Big Argument for Responsive Design [Online] Viget. Available: [Accessed 2 May 2016]
  2. Ben Taylor (2014) Why smartphone screens are getting bigger: Specs reveal a surprising story [Online] PCWorld. Available: [Accessed 2 May 2016]
  3. Ethan Marcotte (2010) Responsive Web Design [Online] A List Apart. Available: [Accessed 2 May 2016]
