Querying Application Insights for data visualisation

Ibex dashboard is an open source web app for displaying telemetry data from Application Insights. It comes with a number of sample templates including analytics dashboards for Bots. If you’re developing a bot and you want to see how your bot is performing over time then you can select the Bot Instrumentation template which requires you to enter your Application Insights App Id and App Key. Also depending on your bot you will need to add Node.js instrumentation or C# instrumentation in order to enable logging to Application Insights. Then after a couple of minutes you will start to see the data come through! The dashboard can be completely customised using generic components including charts, tables, score cards and drill-down dialogs. These elements can be used to review how your bot performs over time, monitor usage stats, message sentiment, user retention and inspect user intents.

If you are new to Application Insights one of the useful features of the Ibex dashboard is the ability to inspect an element’s Application Insights query and the formatted JSON data side by side. 

This query can be copied and played back inside your Application Insights live code editor. This is a good way to learn how the Application Insights queries work as you can step through the query by commenting out various lines with double slashes ‘//’.

Writing Azure Log Analytics queries for Ibex dashboard

The Ibex dashboard schema is composed of metadata, data sources, filters, elements and dialogs. Each data source allows you to define a query and a ‘calculated’ JavaScript function to process the query’s results for display purposes. Before learning to write Application Insights queries I was used to writing JavaScript map / reduce functions to aggregate data, so it’s all too easy to rely on previous JavaScript knowledge to process the data from a basic query. But often this JavaScript ‘reduce’ aggregation logic can be done in an Application Insights query with a lot less effort. So invest some time up front to learn the key Application Insights query concepts and it will pay off in the long run!

To help illustrate this we can look at the Application Insights query for tracking a bot to human hand-off during a user’s conversation session. For this scenario we built a QnA bot with the hand-off module installed. If a customer asks the QnA bot a question and no answer is found in the knowledge base, we trigger an automatic hand-off to a human. We want the dashboard to show the fastest, longest and average times a customer waits for a human agent to respond.

We can start by writing a basic query in Application Insights to get all the transcripts from the ‘customEvents’ table and ‘project’ only the information we need.
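
Something along these lines (a sketch – the event name and custom dimension fields are illustrative rather than the exact template query):

customEvents
| where name == "Transcript"
| project timestamp,
          userId = tostring(customDimensions.userId),
          state = toint(customDimensions.state)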

But in this example we are not asking Application Insights to aggregate the results, so we end up with a lot of rows to process. Given the query above, a fair amount of JavaScript is required in the ‘calculated’ function (see the sketch after the state values below).

The first ‘reduce’ block groups the transcripts per user Id. Then, for every user, we track the state change from waiting to talking to a human agent and calculate the time difference in seconds, where ‘state’ is an integer value that marks the current status of the conversation:

0 = Bot
1 = Waiting
2 = Human agent
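
A minimal sketch of that ‘calculated’ logic, assuming each row has the timestamp, userId and state fields projected by the query above (the function signature is illustrative):

function calculated(results) {
  // Group the transcript rows by user id
  var byUser = results.reduce(function (users, row) {
    (users[row.userId] = users[row.userId] || []).push(row);
    return users;
  }, {});

  // For each user, measure the seconds between entering the waiting state (1)
  // and the first response from a human agent (2)
  return Object.keys(byUser).map(function (userId) {
    var rows = byUser[userId].sort(function (a, b) {
      return new Date(a.timestamp) - new Date(b.timestamp);
    });
    var waiting = rows.filter(function (r) { return r.state === 1; })[0];
    var talking = rows.filter(function (r) { return r.state === 2; })[0];
    return (waiting && talking)
      ? (new Date(talking.timestamp) - new Date(waiting.timestamp)) / 1000
      : null;
  });
}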

But we can optimise this by doing the aggregation within the Application Insights query itself, using the ‘summarize’ operator and the ‘count’ function:
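
A simplified sketch of that kind of multi-pass aggregation (the real template query differs; the field names follow the earlier sketch):

customEvents
| where name == "Transcript"
| extend userId = tostring(customDimensions.userId), state = toint(customDimensions.state)
| where state > 0
| summarize count(), started = min(timestamp) by userId, state
| summarize states = count(), timeTaken = max(started) - min(started) by userId
| where states == 2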

Notice how you can apply aggregations in multiple passes: here the ‘summarize’ operator and ‘count’ function aggregate the results twice, in conjunction with multiple ‘where’ statements that filter them. Now the JavaScript ‘calculated’ function can be greatly simplified:

All that remains is a ‘reduce’ function to convert the ‘hh:mm:ss’ time format returned by the Application Insights query into a number of seconds for the various score card calculations.
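
For example (assuming the query returns a timeTaken column as in the sketch above):

function calculated(results) {
  var seconds = results.map(function (row) {
    // "hh:mm:ss" -> total seconds
    return row.timeTaken.split(':').reduce(function (total, part) {
      return total * 60 + parseInt(part, 10);
    }, 0);
  });
  return {
    fastest: Math.min.apply(null, seconds),
    longest: Math.max.apply(null, seconds),
    average: seconds.reduce(function (a, b) { return a + b; }, 0) / seconds.length
  };
}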

The final Application Insights query is available in the hand-off to human dashboard template and is included with Ibex dashboard.

Unity3d and Azure Blob Storage

Previously I’ve looked at using Azure App Services for Unity, which provided a backend for Unity applications or games using Easy Tables and Easy APIs. But what if I wanted to lift and shift heavier data such as audio files, image files, or Unity Asset Bundle binaries? For storing these types of files I would be better off using Azure Blob storage. Recently I created an Azure Blob storage demo project in Unity to show how to save and load these various asset types in Unity. One of the exciting new applications for Unity is developing VR, AR or MR experiences for HoloLens, where a backend could serve media content dynamically, whether it’s images, audio, or prefabs with models, materials and referenced scripts. When thinking of cloud gaming the tendency is to consider it in terms of end user scenarios like massive multiplayer online games. While Azure is designed to scale, it is also helpful to use during early stage development and testing. There is an opportunity to create productive cloud tools for artists, designers and developers, especially when extensive hardware testing is required in Virtual Reality, Augmented Reality or Mixed Reality development. For example, imagine being able to see and test updates on the hardware without having to rebuild the binaries in Unity or Visual Studio each time. There are many more use cases than I’ve mentioned here, like offering user generated downloadable content for extending your game or app.

I’ll be covering the load and save code snippets from the Unity and Azure Blob storage demo commentary which you can watch to see how you can save and load image textures, audio clips as .wav files, and Asset Bundles. The Unity Asset Bundle demo will also include loading Prefabs and dynamically adding them into a Unity Scene using XML or JSON data which should give you some ideas of how you might like to use Blob storage in your Unity development or end user scenario.

Setup Azure Blob Storage

Setting up Blob Storage for the Unity demo can be done quickly in just a couple of steps:

  1. Sign in to your Azure portal and create a new Storage Account.
  2. Once the Storage account is provisioned, select the add container button to create a container for storing the blobs.
  3. Create the ‘Blob’ type container, which permits public read access for the purposes of this demo.

Audio files

Saving Unity Audio Clips into Blob Storage

For the Unity audio blob demo I created a helper script to convert Unity Audio Clip recording to .wav files for the purpose of saving to Azure Blob Storage.
Once the audio has been recorded in Unity I can upload the file using the PutAudioBlob method, which takes a callback function, the wav bytes, the container resource path, the filename and the file’s mime type. By the way, this method must be wrapped using StartCoroutine, which is the way Unity 5 handles asynchronous requests. Once the request is completed it will trigger the PutAudioCompleted callback function I have provided in my script, passing a response object. If the response is successful you will see the wav file blob added in your Blob Container.

☞ Tip: Grab the Storage Explorer app for viewing all the blobs!

Loading .wav files from Blob Storage

As we used the Blob type container with public read access you can use the UnityWebRequest.GetAudioClip method to directly load the .wav file from Azure Blob Storage and handle it as a native Unity AudioClip type for playback.
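
A rough sketch of the load step (Unity 5.x-era API – the blob URL is illustrative; newer Unity versions use UnityWebRequestMultimedia and SendWebRequest instead):

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class AudioBlobLoader : MonoBehaviour
{
    // Public read access means no credentials are needed to GET the blob
    public string url = "https://mystorageaccount.blob.core.windows.net/mycontainer/recording.wav";
    public AudioSource audioSource;

    public IEnumerator LoadAudio()
    {
        UnityWebRequest request = UnityWebRequest.GetAudioClip(url, AudioType.WAV);
        yield return request.Send();
        if (!request.isError)
        {
            audioSource.clip = DownloadHandlerAudioClip.GetContent(request);
            audioSource.Play();
        }
    }
}

The coroutine is kicked off with StartCoroutine(LoadAudio()) in the same way as the upload call.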

Image files

For the Unity image blob demo I used Unity’s Application.CaptureScreenshot method to generate a png image representation of the current state of the game screen.

Saving Images into Blob Storage

The image is saved using the PutImageBlob method which is similar to the audio blob except we pass the image bytes and mime type.

Loading Image Textures from Blob Storage

As we used the Blob type container with public read access you can use the UnityWebRequest.GetTexture method to directly load the .png file from Azure Blob Storage and handle it as a native Unity Texture type for use. As I want to use the Texture in Unity UI to display as an Image I need to convert it to a sprite using my ChangeImage function.
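
A similar sketch for images (again Unity 5.x-era API; the URL and names are illustrative):

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.UI;

public class ImageBlobLoader : MonoBehaviour
{
    public string url = "https://mystorageaccount.blob.core.windows.net/mycontainer/screenshot.png";
    public Image image;   // Unity UI Image that will display the sprite

    public IEnumerator LoadImage()
    {
        UnityWebRequest request = UnityWebRequest.GetTexture(url);
        yield return request.Send();
        if (!request.isError)
        {
            Texture2D texture = DownloadHandlerTexture.GetContent(request);
            // Equivalent of the ChangeImage step: wrap the texture in a Sprite for Unity UI
            image.sprite = Sprite.Create(texture,
                new Rect(0, 0, texture.width, texture.height), new Vector2(0.5f, 0.5f));
        }
    }
}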

Unity Asset Bundles

Unity Asset Bundles provide a way to dynamically load assets into your project. This Asset Bundle demo for Blob Storage is a little more complicated than the other examples. An important note to remember is that Asset Bundle binaries need to be built for each target platform; refer to the Unity documentation for more info on building Asset Bundles. Also make sure to review the code stripping section if you want to be able to use referenced scripts in your Prefabs when you do a build.

Building and uploading the Asset Bundles for each platform to Blob Storage

I have included the Editor scripts with the demo to build the Asset Bundle for each platform. NB: Windows 10 Store App (or HoloLens) bundles can only be built on the Windows Unity Editor at time of writing this. Building the Asset Bundles and uploading them is performed inside Unity Editor:

  1. Select Assets > Build Asset Bundles
  2. Select Window > Upload Asset Bundles…

Loading Asset Bundles from Blob Storage
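
Loading a bundle built for the current platform follows the same pattern as the audio and image examples (a sketch – the URL, bundle and asset names are illustrative):

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class AssetBundleBlobLoader : MonoBehaviour
{
    public string url = "https://mystorageaccount.blob.core.windows.net/mycontainer/bundle-ios";

    public IEnumerator LoadBundle()
    {
        UnityWebRequest request = UnityWebRequest.GetAssetBundle(url);
        yield return request.Send();
        if (!request.isError)
        {
            AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(request);
            // Load a prefab from the bundle and add it to the scene
            GameObject prefab = bundle.LoadAsset<GameObject>("MyPrefab");
            Instantiate(prefab);
        }
    }
}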

If you like the Azure Storage Services library for Unity, let me know about it on Twitter. For any issues, feature requests or blob storage demo ideas, please create an issue on GitHub for others to learn from and collaborate.

Merging Unity scenes, prefabs and assets with git

When it comes to working as a team on the same project we are all thankful for source control. But even if you’re cool with git there are some things to be aware of when starting new source controlled Unity projects that should help to reduce the chance of nasty merge conflicts.

Solo Scenes

Something to generally avoid in Unity is working on the same scene. That’s why the question of how to merge a scene when a team of developers are working on it is a fairly hot topic. One basic strategy is for each person to clone the main scene and work on their own version, then nominate a scene master to combine the various elements into the main scene to avoid conflicts. But because this is quite a restricted way of working, Unity 5 introduced Smart Merge and the UnityYAMLMerge tool that can merge scenes and prefabs semantically.

Asset Serialization using “Force Text”

By default Unity will save scenes and prefabs as binary files. But there is an option to force Unity to save scenes as YAML text based files instead. This setting can be found under the Edit > Project Settings > Editor menu and then under Asset Serialization Mode choose Force Text.

But as this is not the default setting make sure when applying this mode that everyone else on the team is happy to switch.
If you select “Force Text” to save files in YAML format, you should also add a .gitattributes file that tells git to treat *.unity, *.prefab and *.asset files as binary, to ensure git doesn’t try to merge scenes automatically. Paste something like the following into a .gitattributes file at the root of your Unity project:
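
# Mark Unity scenes, prefabs and assets as binary so git never tries to auto-merge them
*.unity binary
*.prefab binary
*.asset binary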

Another result of saving in text file mode is that you can see the changes in source control commits.

Setting up UnityYAMLMerge with Git

You can access the UnityYAMLMerge tool from the command line and also hook it up with version control software. Paste something along these lines into your git config (for example the repository’s .git/config file, or your global .gitconfig); the install paths below assume the default Unity locations:

UnityYAMLMerge (Windows):
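
[merge]
    tool = unityyamlmerge

[mergetool "unityyamlmerge"]
    trustExitCode = false
    # Default install path assumed – adjust to your Unity version and location
    cmd = 'C:/Program Files/Unity/Editor/Data/Tools/UnityYAMLMerge.exe' merge -p "$BASE" "$REMOTE" "$LOCAL" "$MERGED"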

UnityYAMLMerge (Mac):
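
[merge]
    tool = unityyamlmerge

[mergetool "unityyamlmerge"]
    trustExitCode = false
    # Assumes the default install location of the Unity app
    cmd = '/Applications/Unity/Unity.app/Contents/Tools/UnityYAMLMerge' merge -p "$BASE" "$REMOTE" "$LOCAL" "$MERGED"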

GitMerge for Unity

Worth a mention is the free GitMerge tool for Unity for merging scene and prefabs inside Unity Editor but unfortunately this editor plugin is currently broken in Unity 5. Once you start merging and are in a git merge state you can resolve the conflicts inside the Unity app using GitMerge Window for Unity which is opened via menu Window > GitMerge.

Merging Unity C# script conflicts with P4Merge app

For merging C# script conflicts I prefer to use the free P4Merge visual merge tool, which is available for Mac and Windows. Here’s how to hook up the P4Merge app as the global git merge tool used by the git mergetool command (the install paths below are the defaults – adjust if needed):

P4Merge (Windows):
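
[merge]
    tool = p4merge

[mergetool "p4merge"]
    # Default Perforce install path assumed – adjust to where p4merge.exe lives
    path = C:/Program Files/Perforce/p4merge.exe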

P4Merge (Mac):
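
[merge]
    tool = p4merge

[mergetool "p4merge"]
    # Assumes p4merge.app is installed in /Applications
    path = /Applications/p4merge.app/Contents/MacOS/p4merge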

Setup a .gitignore file for Unity projects

First up there are certain Unity folders and files you don’t want to include in the repo. Only ‘Assets’ and ‘ProjectSettings’ need to be included. Other Unity generated folders like ‘Library’, ‘obj’, ‘Temp’ should be added to the .gitignore file. Or you can just copy the boilerplate Unity .gitignore file. I also suggest ignoring generated files like OS and source control temp files:
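
# A minimal example – the boilerplate Unity .gitignore covers more cases
[Ll]ibrary/
[Tt]emp/
[Oo]bj/
[Bb]uild/

# OS generated files
.DS_Store
Thumbs.db

# Source control temp files
*.orig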

Unfortunately I made the overzealous mistake of adding all *.meta files to the .gitignore file. At first this seemed like a good idea, until the repo gets cloned and you end up with broken script and resource links in the Unity Editor scene. The Unity source control documentation mentions that these .meta files should be added to source control. However, I found that it’s only the meta files associated with resource files and scripts that are linked to a GameObject in the Unity Editor that are required. By using exclusion rules in the .gitignore I can limit it so the only .meta files to be saved are those within the Unity special folders like ‘Prefabs’, ‘Resources’ and ‘Scenes’, as well as a ‘Scripts’ folder. So if you wish to limit the meta files, just add the following rules to the .gitignore:
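
# Ignore all .meta files except those under the folders containing assets linked in the Editor
*.meta
!Assets/Prefabs/**/*.meta
!Assets/Resources/**/*.meta
!Assets/Scenes/**/*.meta
!Assets/Scripts/**/*.meta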

For example if I import the Azure AppServices library for Unity by copying it into the Assets/AppServices directory that would mean no meta files would be pushed in commits for this folder as it’s outside the Assets/Scripts folder. But what if I use a library that will be linked with GameObjects like TSTableView for example which attaches to a Unity UI Scroll View. Either I can drop the TSTableView folder inside the Assets/Scripts directory, or if you prefer to keep third party scripts outside as I do then you also need to add the Assets/TSTableView directory to the list of exceptions in the .gitignore file:
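
# Re-include the meta files for the third party TSTableView scripts
!Assets/TSTableView/**/*.meta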

If you adopt this convention just be aware that every time you add third party MonoBehaviour script libraries outside the Assets/Scripts folder then these directories will need to be added as .gitignore exceptions to save the associated .meta files.

Swift JSON parsing for iOS development

Recently I started a new iOS Swift project and spent way more time than I would like trying to find a JSON parser that could handle the various JSON data models I was working with. In this post I will document some real code samples that should prove useful for other iOS developers looking to get off to a head start with data modelling in Swift.

The search for a Swift JSON parser…

Handling JSON is a very common task in modern app development, whether it’s consuming a REST service API, loading a JSON file or reading document objects from a database. For Windows C# apps Newtonsoft JSON is the popular choice, and similarly with Java for Android there is GSON. But what library to use for iOS apps? Previously I had used libraries like JSONModel to parse JSON data into native objects and it worked pretty well. But the iOS developer landscape has changed with the shift from Objective C to Swift, so I wanted to find a Swift based framework. There are a number of open source Swift JSON parsers, but the ones I tried resulted in code mountains just to parse some format of JSON. This felt like a fail compared to the elegant object models of Newtonsoft or GSON. I was surprised how hard it was to pinpoint the one Swift library that could satisfy all my parsing needs. But with Argo I feel I’ve discovered the golden JSON parsing library for iOS Swift development.

Getting on board with Argo

I’m a long time user of CocoaPods for Xcode source control projects as it makes it easier to avoid jamming up a repo with binaries. However, the precompiled versions on CocoaPods don’t always provide the latest version available on GitHub. This is where Carthage comes in, as you can specifically request a tagged version or branch on GitHub. Carthage can be quickly installed using Homebrew brew install carthage as mentioned in the installing Carthage docs. To set up, create a new text file and save it as ‘Cartfile’ inside your Xcode project folder. In this case I’m requesting specific versions of Argo and Curry for use with Swift 2 – something along these lines, with the version numbers depending on the releases that support your Swift version:
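
github "thoughtbot/Argo" ~> 3.0
github "thoughtbot/Curry" ~> 2.0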

Once you have installed Carthage and saved a ‘Cartfile’ then you need to build the frameworks.

  1. In Terminal navigate to the project folder and run carthage update to build frameworks for all platforms. NB: For packages that can only be built for a single platform use carthage build --platform iOS
  2. Drop built ‘*.framework’ folder into Xcode project
  3. Add Build Phases > Run Script carthage copy-frameworks and add Input Files path to ‘*.framework’

JSON data modelling with Argo

Argo decodes standard property types (String, Int, UInt, Int64, UInt64, Double, Float, Bool) as well as arrays and optional properties. You can decode a nested object or an array of nested objects that conform to the ‘Decodable’ protocol. In fact you can even do inception – using the same struct within itself – as shown in the example after the operator summary below. One thing that might require explanation is Argo’s sugar syntax, which can be summarised like this:

  • <^> syntax pulls the first property, and <*> pulls subsequent properties.
  • <| syntax relates to a property.
  • <|? syntax relates to an optional property.
  • <|| syntax relates to an array of 'decodable' objects.
  • <||? syntax relates to an array of optional 'decodable' objects.
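
Here is a minimal sketch of a ‘Decodable’ model using that syntax (the field names are illustrative, and the nested friends array shows the ‘inception’ mentioned above):

import Argo
import Curry

struct User {
  let id: Int
  let name: String
  let email: String?     // optional property
  let friends: [User]    // the same struct nested within itself
}

extension User: Decodable {
  static func decode(json: JSON) -> Decoded<User> {
    return curry(User.init)
      <^> json <| "id"         // first property
      <*> json <| "name"       // subsequent property
      <*> json <|? "email"     // optional property
      <*> json <|| "friends"   // array of decodable objects
  }
}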

What about decoding JSON values into native types like NSURL and NSDate?

It can be advantageous to parse URL and date values as native types instead of String types. To get this to work with Argo you need to make a parser which wraps NSURL and NSDate in the 'Decoded' type. But first I made a Uri helper to encode url strings as NSURL and a Date helper to convert a date string (of a known format) to NSDate.

The Parser helper returns objects wrapped in Decoded type:

Example model with NSURL and NSDate using the Parser helper (note the extra brackets):

Three things to avoid in your JSON models for smoother sailing with Argo

  1. Two dimensional arrays (arrays within an array) aren't handled out of the box. There are multi-dimensional array workarounds but it can cause compiler meltdown if your model is particularly complex. Better to avoid this complexity by flattening arrays to a single array or use nested property arrays.
  2. Best to limit an object model to no more than 10 properties. This is because there are limits to how many things can be curried with Argo before the compiler gives up. Try to use nested objects to group things together, but if that is not possible then there are techniques to deal with complex expressions.
  3. Arrays of mixed objects (dynamic types). Argo can be made to decode an array of different types but it will increase complexity as you will have to use subclasses instead of structs.

How to load JSON file within iOS app bundle in Swift

Often the first thing I like to do is to load a JSON file to configure my app. For example you might have various JSON config files for localhost, staging and production settings.

The data model using Argo & Curry would look like this in Swift:
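
For example, a minimal sketch of such a config model (the property names are made up for illustration):

import Argo
import Curry

struct ConfigModel {
  let apiUrl: String
  let analyticsKey: String?
}

extension ConfigModel: Decodable {
  static func decode(json: JSON) -> Decoded<ConfigModel> {
    return curry(ConfigModel.init)
      <^> json <| "apiUrl"
      <*> json <|? "analyticsKey"
  }
}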

To load the JSON file within the app bundle I use a file helper:

The loaded JSON can be parsed into the 'ConfigModel' using Argo's decode method.
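
A sketch of both steps together (Swift 2 syntax; the "config" file name and the ConfigModel above are assumptions):

import Foundation
import Argo

func loadConfig() -> ConfigModel? {
    // Load config.json from the app bundle
    guard let path = NSBundle.mainBundle().pathForResource("config", ofType: "json") else { return nil }
    guard let data = NSData(contentsOfFile: path) else { return nil }
    // Parse the raw data into a JSON object
    guard let json = try? NSJSONSerialization.JSONObjectWithData(data, options: []) else { return nil }
    // Argo's decode infers ConfigModel from the return type
    return decode(json)
}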

While this is fine for converting one type of object, what if you have multiple data models? You could quickly end up with a lot of repetitive code. One of the powerful things with Swift 2 is that it supports Abstract Types. Argo needs a little help to ensure the abstract type conforms to the Decodable type so there is slightly more boilerplate in this case, but it should help keep things DRY.

The JSON config file can be loaded in AppDelegate in the 'didFinishLaunchingWithOptions' method:

Parsing JSON response from REST service

I also needed to parse various JSON results returned via a REST service API. To handle the REST requests here I'll be using the Alamofire library for Swift. Alamofire can also be added to the Cartfile alongside Argo and Curry.

Below is an example snippet taken from a login POST request. When using Alamofire the JSON data is available as response.result.value which can be parsed with the Argo decode method.
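
Something along these lines (Alamofire 3-era API for Swift 2; the endpoint, parameters and UserModel are assumptions):

import Alamofire
import Argo

func login(email: String, password: String) {
    Alamofire.request(.POST, "https://example.com/api/login",
                      parameters: ["email": email, "password": password],
                      encoding: .JSON)
        .responseJSON { response in
            if let value = response.result.value {
                // Parse the JSON response into the (assumed) UserModel with Argo
                let user: UserModel? = decode(value)
                print(user)
            }
        }
}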

One thing to point out: I have used very simple parse error detection here - it either decodes or it doesn't, and there is no indication of what went wrong during the decode process. With smaller data models this is perfectly adequate. But when you are working with complex data models, this type of error reporting is not granular enough to pinpoint the exact problem if you get a parse error. Fortunately Argo provides a way to parse with failure reporting by using a Decoded type.

I found this an absolutely invaluable technique for debugging issues with my complex models, especially as the models are pretty verbose and it's always hard to spot that one string mistake.

What's next…

What about storing loaded data for offline use? JSON documents can be stored with revisions using a Couchbase Lite database. The problem here is that Argo only handles decoding, but the native objects will need to be encoded back into JSON for use with Couchbase. This is where Ogra (Argo in reverse) comes in. The only thing is you will need to extend the data object with an encode method. If you found this post useful or if you would be interested to see some Ogra to Couch examples just fire me a tweet @deadlyfingers.

Creating content with Web Components

Many web projects rely on a CMS of some description. The system itself is not important, but rather the content it helps to create. The primary function of a CMS is to enable the creation of content – it should empower content creation. If a new project requires a CMS the question that would tend to spring into a developer’s mind is – can I use an existing CMS already out there, or do I need to build a CMS from scratch for this project? But before that can be answered, perhaps some simple questions need to be asked first.

Asking the simple questions…

Content Management Systems are designed to make it easier to create and publish content. With so many open source systems available there’s a good chance you can find something to do the job you need. Often in the case where additional functionality is required most systems can be extended with some sort of plugin to add that ‘must have’ feature. So why would you ever need to build your own CMS from scratch? This decision should not hang solely upon application’s technical requirements, but rather it depends on who will be using it – we need to ask ourselves who will be the one creating the content? Sounds like a simple question, perhaps even an obvious question but it merits deep thought and careful design decisions. If it is a non-technical audience then displaying a bunch of features that the user doesn’t need is distracting, in the worst case intimidating, ultimately leading to a poor user experience. What if you could design something from scratch so it could be tailored exactly to fit the user’s requirements? Imagine if the UI only contained the functions needed without extraneous menu options or clutter and was designed to maximise ease of use and content creation.

Starting from scratch

Recently I was working on the ‘Badge Builder’ project which required a CMS to author quiz content. But rather than manipulate some existing CMS or plugin that might roughly fit the use case we wondered if we could design and build our own bespoke CMS components during a one week hack. At the very outset of the project we wanted to build a system that would be easy to use and quick to create content regardless of the technical abilities of the user.

Badge Builder

The main problem with building all the CMS components from scratch would be the time required – with only three weeks. However there are a number of things that I feel made the most of the development time we had.

  • Web Components

    By leveraging Web Components we could make our own custom HTML elements for each quiz and content element. Common behaviours could also be shared across elements.

  • Polymer

    During our one week hack the Polymer Starter Kit was a good kick start and saved time by setting up a stack of things like node and bower dependencies. Polymer provides a nice UI kit for web apps which can be separately imported for use. The PSK boilerplate is now available through Polymer-cli.

  • SASS and Foundation grid

    Because nobody likes working with thousands of lines of CSS, SASS can reduce physical line count and can be easily split into separate files which makes it easier to manage in source controlled projects. Also SASS makes it easy to import Foundation Grid for responsive design.

  • Live reload of server and client

    A combination of Nodemon and BrowserSync allowed us to see live updates of all changes made on server and client side. This combo is essential to fine tune the interface and user experience and is my personal ‘must have’ for designing and developing a web app project.

  • Document database

    Saving content as a JSON object allowed greater freedom developing components on client side.

Polymer Web Components

Developing Web Components for each quiz element and content element felt very intuitive. A quiz could be built using a combination of a number of individual quiz and content components.

Quiz components:

  • Single choice

    Select the correct answer from a number of options

  • Multiple choice

    Select one or more answers that apply from a number of options

  • Ordered list

    Move options into their correct order using drag and drop

  • Groups

    Move options into their correct groups using drag and drop

  • Keywords

    Type keywords to answer requirements

  • Comments

    Type a number of words to answer

Content components:

  • HTML

    HTML formatted content

  • Embedded media

    Embedded video player using iframe

  • Link

    External url

  • Section

    Split quiz into sections

Reusable elements

To create reusable Web Components you can use the Polymer Seed Element which sets up a test, demo and documentation page. But rather than have the overhead of managing and publishing multiple custom elements during development, it was faster to have the custom elements bundled with the project – the idea being that once we had finished the project we could extract and publish them as separate elements. (One ‘gotcha’ to be aware of is that custom element names need to be hyphenated.)

All the Web Components for the Badge Builder needed to operate on two different views – the editor (CMS) screen and the interactive viewer (quiz) screen.

Badge Builder Editor (CMS)

Badge Builder Viewer (quiz)

For the editor we wanted the quiz elements to be pretty WYSIWYG so for the most part the same element was used for the editor and viewer. The Polymer dom-if template was a good way to render the parts unique to each view in this case.

Displaying dynamic content using Web Components

To render the dynamic components to the page an empty placeholder was used.

The quiz content was loaded with Polymer’s iron-ajax element and the array of content was parsed in the response handler using a switch statement to check against specific element types.

Most elements are unique and are handled separately, apart from the default case, which handles elements that share exactly the same object properties. In this case the element type is passed to the function to create the element and set the properties using the document.createElement method. (The other option is to define a custom constructor, but it’s not necessary.)

Once the element has been created and its properties set, it still needs to be added to the DOM. This is handled with the appendChild(element) JavaScript method. Notice that we can use Polymer’s ‘$’ selector to append children to our div tag with id="components". Because the elements are added dynamically in JavaScript, and are therefore manipulating the DOM, it is necessary to wrap the selector using the Polymer DOM API.
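
A sketch of that step (Polymer 1.x – the element name and data property are illustrative):

Polymer({
  is: 'quiz-editor',   // the editor element hosting the placeholder div

  // Create a custom element for a content item and append it to <div id="components">
  addElement: function (item) {
    var el = document.createElement(item.type);   // e.g. 'quiz-single-choice'
    el.data = item;                               // set properties on the custom element
    // Wrap the '$' selector with the Polymer DOM API because we are changing the DOM dynamically
    Polymer.dom(this.$.components).appendChild(el);
  }
});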

The add element method was used when loading saved content, but also when adding new elements to the page. One usability tweak is to have the page scroll down to show a newly added component. The problem with scrolling down here is that the height of the new element will not be known until the DOM has updated, so we need to add a listener to handle the dom-change event. Then we can scroll down to see the element we have added.

Saving dynamic content using Web Components

To save the dynamic content for each element I would need to be able to get the content as JSON. A nice way to handle this for all components is to use a shared behaviour. This would hold the _id property assigned by the database and also assign the element’s type using the built-in method this.localName.

Finally, when changes need to be saved it’s just a case of returning a list of all our custom elements and grabbing the data as JSON using the element’s getData behaviour. This data array can then be posted using Polymer’s iron-ajax element for saving to the database.
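
A sketch of that save step (element ids and the getData behaviour follow the description above; the iron-ajax element is assumed to be declared with method="POST" and a JSON content type):

// Collect the JSON from every custom element inside the components container
// and post the array to the backend via iron-ajax
function saveContent(editor) {
  var elements = Polymer.dom(editor.$.components).children;
  var data = elements.map(function (el) {
    return el.getData();   // shared behaviour that returns the element's content as JSON
  });
  editor.$.saveAjax.body = JSON.stringify(data);
  editor.$.saveAjax.generateRequest();
}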

Azure App Services for Unity3D

Azure Mobile Services will be migrated to App Services on Sept 1st 2016. To prepare for this migration I’ve renamed and updated the open source Mobile Service Unity3d projects to support Azure App Service going forward.

Using Azure App Services to create highscores leaderboard for Unity

To demonstrate the Azure App Service I have created a sample Highscores demo for Unity to insert, update and query a user’s highscores. But to run the project in Unity Editor you will need to hook it up to an Azure App Service. Using an Azure account simply create a new App Service in the Azure portal, (for this demo I am using an App Service with Javascript backend). In a couple of minutes the Azure App Service should be up and running and ready to configure.

  1. Open Settings, search for Easy Tables and add a ‘Highscores’ table.

  2. Set all table permissions to allow anonymous access to start with.

  3. Manage schema to add Number column for ‘score’ and String column for ‘userId’

  4. Additionally, if you want to store user data or game scores you can enable authentication using Facebook, Twitter, Microsoft account or Google account. If you want to use the Facebook login in this demo you will need to create a Facebook app. Once you’ve created the Facebook app add the Facebook App ID and Secret to your Azure App Service Facebook Authentication settings.

    Then configure the Facebook App Basic and Advanced settings with your Azure App Service URL.

    If in doubt how to configure these settings check out the Azure App Service documentation.

  5. Once authentication is setup the ‘Highscores’ table script can be edited to save ‘userId’ information.

  6. In addition to table scripts you can also create custom APIs. In Settings, search for Easy APIs and add an example ‘hello’ API.

Once you have set up the Azure App Service you can update the Unity scene with your App Service ‘https’ url and hit run!

Responsive Design from problem to production

Responsive Design is often seen in terms of technical execution or production. In this article I will describe what it means to design responsively as a design process from problem to production.

Background

The need for responsive design

Designing multiple versions of a website optimized for mobile and desktop might sound like a good idea, but a separate design approach will not scale easily as “the number of unique screen resolutions being used to access web sites is increasingly varied and growing at a rapid pace” [1]. I only have to look back at the last three phones I’ve purchased: each one has a larger physical display than the last. (Admittedly this was not always by choice, as the new models I wanted were not made available in the smaller form factor, due to the “bigger is better” [2] style trend of the phone industry.) As a result, my phone displays more pixels than my old 20” desktop screen, which is easier to comprehend with the release of phones with 4K displays. So if I end up on some mobile ‘optimized’ site with reduced functionality or content I will always request the full-fat Desktop experience. I feel the very fact that there is a button to request the ‘Desktop version’ of a website on a mobile device is like an admission of design failure.

Responsive design is the ability for a website to display the same content across all screen sizes and resolutions often by using a resizable layout or grid (therefore removing the need for the user to choose what version of the site they want to see). Ethan Marcotte who first described ‘Responsive Design’ as the way forward proposed “rather than tailoring disconnected designs to each of an ever-increasing number of web devices, we can treat them as facets of the same experience” [3]. Since then there have been plenty of articles describing the technical characteristics of responsive web design and why it is recommended; ultimately our goal is about creating the best experience for users, but responsive design will benefit SEO for mobile searches as well.

Intro

What makes good design?

There are many design apps and developer tools available, but some tools and techniques are better suited for responsive web design. Before I launch into responsive design I’d like to consider the design aspect. If I was to share one truth from my time learning graphic design and all the years of experience as a designer, it would be this: good design needs a good problem. As a designer I always have the desire to produce an award winning or world class design for every project. Reproducing success is really hard, and that’s why designers develop some form of working habit or pattern to try to repeat successful outcomes. This is often explained as the ‘Design Process’. I don’t wish to cover every variation of the design process but I feel it’s good practice to review the general principles:

  1. Research / investigation
  2. Design brief
  3. Generation of ideas
  4. Synthesis
  5. Final design and production

The word ‘design’ implies the need to solve a particular problem. Therefore, it is important to start the design process with knowledge and thought. Sometimes it’s all too easy to think we know enough about what the end product should look like, and so we fail to investigate or question the motivation for the design. When the problem isn’t immediately obvious it will take a certain amount of research into the subject to be able to ask the right questions and find the problem which the design will aim to solve. When the problem is known, we can describe the solution which will solve the problem – this forms the design brief. When it comes to generating ideas it may be helpful to have a brainstorming session first. The best ideas (traditionally three) are identified as concepts for further development and design synthesis. Finally, the strongest concept is selected as the solution for final design and production.

I encourage designers to define their own design process (or pattern for success). When Steve Jobs asked designer Paul Rand to generate some logo ideas for them to look at, he declined, suggesting that he would only present them with the solution to their problem. I admire Rand’s thinking – I feel that when I have to ask a client which options they prefer it’s usually because I haven’t found the right solution yet.

Responsive design is the recognised technical solution to the diverse screen size problem, but we must always consider the design aspect of a project. I must constantly challenge myself to find a good problem to solve. Without a good problem to solve I will just be pushing pixels and not fulfilling my purpose as a designer.

Responsive Design for designers

If you are a designer for print it helps to have an understanding of the print production process. Similarly, with responsive web design it is important to know how responsive developer tools operate. When it comes to design for print designers use grids and guides for page layout. This grid layout mechanism is similar for web developers except the grid will dynamically resize depending on window or screen size. The most popular grids for responsive design are Bootstrap and Foundation so even if you don’t like to get your hands dirty with code, it is something that anyone can play with and see how design elements (or columns) will react as the dynamic grid changes with different widths. By default, both grid systems use a 12 column grid but you can also customize the number of columns with Bootstrap and with Foundation using Sass. Designers who have a grasp of how the dynamic grid operates on the production or development side will be in a better position to create ‘responsive-ready’ designs.

Design tools

When I started designing for web there was only the desktop browser to think about so the basic approach of designing for the lowest common resolution worked well. Initially I used Photoshop for web designs with pixel perfect layouts. But as consumer monitors became capable of displaying greater resolutions it was possible to reproduce richer layouts influenced by print design. Illustrator became a superior tool for web design as it offered advanced control of grids and guides originally used for print design. Illustrator was also vector based and that made it easier to stretch out graphics as screens got bigger. Because of this I feel vector based tools are vastly more equipped for responsive design work than pixel-based design tools. But while Illustrator is a great tool for seasoned print design professionals, some digital designers might prefer something a little lighter and easier to use like Sketch or the new Experience Design app. However, the problem with all these design tools is that none can produce design with responsive information. Even the new digital design apps still feel like design for print tools stuck with static canvas layouts and limited bitmap resizing that fail to scale in a way that mimics the production process (ie. CSS background properties). Because of the lack of professional tools capable of responsive design that means the designer has to do extra work. For responsive designs I will design at least two size layouts for each page. I like to design a page in portrait aspect to represent a mobile view, and landscape aspect to represent desktop or tablet. So as long as a designer understands how responsive grids or dynamic columns work, then these designs should be easily fused together during development or production stage.

Responsive Design for developers

There is an abundance of tools for developing responsive websites. But just like I mentioned that it was important for designers to think about the development or production I also feel responsive web developers should be mindful of the design side. Developers need to be aware of the current problem that professional design tools don’t contain responsive information and that means they will need to work closer with designers to figure out how to merge separate designs into one single responsive design. Responsive web developers will need to be familiar with the design grid so that they can turn page designs into a single dynamic layout of HTML and CSS.

The language of responsive web design

CSS is the design language of the web. But CSS is rather an unwieldy art that does not sit comfortably in a designer or developer camp. I find CSS must be constantly tweaked along with the HTML elements to achieve the required layout, especially with the added complication of responsive design media queries. It is therefore preferential to use web technologies that are fast to deploy and allow live refreshing when developing responsive design.

Responsive web kit

Just like I encouraged designers to make their own design process, I also encourage developers to use or discover the web technologies that will work best for producing the website or web app.

Unsurprisingly it’s not possible to cover every web technology in one article so I will explain the reasons behind the web technologies that I’ve been consistently using for my recent projects. Plus, I really want to share my favourite client-side web design / developer stack because if you are passionate about design I think you will like it too!

Project dependencies

Responsive web projects tend to use a number of third party dependencies, and package managers can be used to help install and version manage them all. Bower is awesome for managing project dependencies like Bootstrap or jQuery, while NPM is great for installing testing and build tools like Gulp and BrowserSync. Package management is also advantageous for source controlled projects as it can easily be set up to prevent committing a shed load of third party code into your repo. Following this procedure keeps contributor commits clean and makes it easier to inspect changes or code review.

Design as you go

A painter will add strokes of paint to his canvas, while a sculptor will chip bits off a rock to expose an image. Designing websites is a progressive art that is both additive like a painter and subtractive like a sculptor. Can you imagine asking a painter or sculptor to work blindfolded? As a designer I can’t produce my best work unless I have real-time feedback of my adjustments. I need to see and interact with my design in real-time and across multiple devices. That’s why BrowserSync is the single most important responsive design tool for client-side web development. ‘Live reload’ or ‘live preview’ is important for web design, and with responsive web design it’s mission critical to test all the desktop and touch screens!

A UI kit for web apps

Ever wanted to replicate the performance of the native UITableView on iOS or ListView on Android? Polymer’s ‘iron-list’ and ‘iron-image’ elements can be used to create ‘buttery-smooth’ scrolling recyclable lists at 60fps. Polymer is also built on top of Web Components which allows you create your own reusable elements, but Polymer also provides a ‘Material Design’ UI kit suited for responsive web app development. I also find the template and binding model lends itself well for creating responsive designs. Polymer is well suited for developing SPAs (single page applications) and can support client-side routing.

Smarter CSS

Design should be an enjoyable art, but can you imagine what a lot of CSS is like to manage! All these responsive elements, layout grids, images and glyphs will add lines and lines of CSS. The sheer amount of CSS required by a responsive design project could very easily and quickly become unmanageable. Sass or SCSS is just like writing CSS, except you can do it with less code and fewer lines of code are easier to manage. Sass variables will enable designers to create a theme to easily define or tweak colours, type styles and spacing. Another powerful feature is ‘mixins’ which can be used to reuse common styles, define responsive media queries, generate image tiles, build font faces and include browser prefixes. Sass will reduce the number of lines of CSS you need to manage.

Responsive Grid

When it comes to responsive web design the use of a popular grid system like Bootstrap is a good place to start. I do feel however the default four tier grid system (xs, sm, md, lg) of Bootstrap 3 doesn’t give me enough granular control to deal with phone vs phablet sized devices. So I use the Bootstrap grid as a starting point and usually add extra media queries for smaller mobile devices. Bootstrap 4 promises to address this issue and will deliver a more comprehensive five tier grid system (xs, sm, md, lg, xl) for responsive design amongst other differences.

HD is the new standard

Retina displays are everywhere these days! If you walk into a phone shop today, I reckon it would be harder to find a phone without an HD display. The new HTML5 picture element allows developers to specify higher resolution images so the graphics will display sharper. But I still prefer to use CSS media queries to handle ‘Retina’ (@2x) and ‘Retina HD’ (@3x) images.

I find the CSS method gives more control over scaling, cropping and positioning which can be advantageous for responsive designers. With the CSS background image methods I can also use an image sprite technique to load in a texture map (or texture atlas) of tiled images and this improves page load times as there will be less http requests.

One final thing though, high definition images are much larger in filesize so make sure to compress all bitmaps! ImageOptim is a great image compression tool I use on Mac, though they also recommend File Optimizer for Windows.

Vector glyphs

With responsive design there is always a need to scale graphics. Vector graphics are resolution independent and can be scaled to any size, and that makes them a great asset. The good news is that most modern browsers support SVG. But if you have a set of vector icons that are monochrome, then a neater way to bring these to the web is by exporting them all as a custom font. Icomoon is a free online tool to create custom font glyphs. Oh, and because it’s seen as a font you can take advantage of CSS font sizing and colour properties.

Automate all the things

Gulp makes it easy to develop with full source, or build a minified version for production. Gulp also watches for source code changes and works in conjunction with BrowserSync. So whether you fiddle with HTML, edit a line of script, tweak a style, modify an image or asset it can notify BrowserSync to reload. Gulp can even compile Sass into normal CSS for reloading live design changes.

Production

Building web apps with Cordova

Cordova tools make it easy to package your web app as a hybrid app for distribution on multiple app stores. But the big challenge for web app developers is creating a user experience that will look and feel as good as a native app.

App-ify web view behaviours

The web views provided by iOS and Android come with a number of behaviours that are designed to improve the user experience with websites. In a website context this is true, but when it comes to responsively designed web apps these web view behaviours result in undesirable effects as far as an app experience is concerned:

  1. Page bounce or spring – pages have a bounce or spring effect, but apps don’t bounce.
  2. Double tap zoom – pages allow double tap regional zooming, but apps don’t zoom.
  3. 300ms tap delay – page interactions are artificially slowed to accommodate the double tap zoom gesture, but apps respond to taps immediately.
  4. Long tap inline magnification – pages allow prolonged selection for inline magnification, but apps don’t show inline magnification everywhere.
  5. Global user selection – page selection is everywhere, but apps only provide selection where user input is desired.

Fortunately, most of these web view behaviours can be tamed so a hybrid app can behave in a native app manner that a user would expect.

  1. Page bounce or spring behaviour can be disabled by setting Cordova’s ‘DisallowOverscroll’ preference to ‘true’.
  2. Double tap zoom behaviour can be disabled by setting Cordova’s ‘EnableViewportScale’ preference to ‘true’ and setting the HTML5 viewport meta tag (http://www.w3schools.com/css/css_rwd_viewport.asp) to disable user scaling.

  3. The 300ms click delay is fixable on Chrome by setting the device width on the HTML5 viewport meta tag (shown in the sketch after this list).
  4. Long tap inline magnification can be disabled by setting Cordova’s ‘Suppresses3DTouchGesture’ preference to ‘true’.
  5. Global user selection can be disabled with CSS ‘user-select’ set to ‘none’ (including the usual browser prefixes (https://developer.mozilla.org/en-US/docs/Web/CSS/user-select)). With iOS ‘-webkit-touch-callout’ also needs to be set to ‘none’ to disable the touch callout.

    NB: As this turns off all user selection, you might need certain elements or form inputs to allow user selection. In this case certain exceptions can be added using the :not() CSS selector.
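
For reference, the Cordova preferences and viewport meta tag described in the list above might look something like this (a sketch – the preference names and values are the ones mentioned in the points above):

<!-- config.xml -->
<preference name="DisallowOverscroll" value="true" />
<preference name="EnableViewportScale" value="true" />
<preference name="Suppresses3DTouchGesture" value="true" />

<!-- index.html viewport meta tag with user scaling disabled -->
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">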

Turbo web view performance for iOS

While there are quite a number of things you can do to improve web page performance, one of the recent hybrid app performance headlines for iOS is the availability of WKWebView which provides faster performance than the older UIWebView. Cordova supports WKWebView but there is a need to install the WKWebView Cordova plugin and set the ‘CordovaWebViewEngine’ preference to use ‘CDVWKWebViewEngine’ in Cordova’s ‘config.xml’ file.

A couple of time saving Cordova scripts

Summary

Responsive web design for designers

  • Understanding the dynamic grid to design responsively
  • Separate designs that lend themselves to a single responsive design
  • The advantages of vector-based design tools

Responsive web design for developers

  • Understanding the design grid to merge separate designs
  • Responsive design with multiple device testing and live reloading
  • Developer web kit for responsive design

Production of hybrid app

  • Removing the unwanted web view behaviours for responsive Cordova hybrid apps
  • Turn on turbo performance of Cordova hybrid apps for iOS
  • Scripts to help production of Cordova hybrid apps across platforms

References:

  1. Jason Sperling (2013) The Big Argument for Responsive Design [Online] Viget. Available: https://www.viget.com/articles/the-big-argument-for-responsive-design [Accessed 2 May 2016]
  2. Ben Taylor (2014) Why smartphone screens are getting bigger: Specs reveal a surprising story [Online] PCWorld. Available: http://www.pcworld.com/article/2455169/why-smartphone-screens-are-getting-bigger-specs-reveal-a-surprising-story.html [Accessed 2 May 2016]
  3. Ethan Marcotte (2010) Responsive Web Design [Online] A List Apart. Available: http://alistapart.com/article/responsive-web-design/ [Accessed 2 May 2016]

Making sense of web app debug logs on multiple Android and iOS devices

If you’ve ever needed to debug Cordova web apps on iOS and Android it can be quite an awful experience for a web developer. Since working on Postcard web app I’ve found a couple of neat tips to help make sense of those noisy debug logs.

From localhost to native land

Designing and developing in localhost is familiar territory to web developers and so it makes sense to try and get as much work done there as possible. But when you need to use the physical hardware features of a mobile platform the only way to see if things really work is to go native.

For Cordova app development on Android there are a few ways to debug web apps, but usually it’s a case of cordova build android then cordova run android --device --debug. Then, if you have set up the Android SDK’s PATH environment variable, you can run monitor from the command line to open the Android Device Monitor app and see all the logs.

For Cordova app development on iOS it’s usually cordova build ios in Terminal and then you just open the project in Xcode to debug the app.

Both these options allow you to run and debug on multiple devices, although you have to tab or switch views to see the other device’s logs. Also the Android logs are so noisy it’s almost impossible to spot the stuff you want to look at. Ideally I would like to look at all the logs at the same time and dial in on the stuff I’m really interested in.

While developing the Postcard app I mocked up the UX flow to perform an Identity Exchange with two devices using our peer to peer Thali Cordova plugin for iOS and Android. Needless to say the Android logs got very busy and I also wanted to be able to get the logs of iOS devices outside of Xcode sessions. Thankfully there are a couple of better ways to debug multiple Android and iOS devices from one Mac –

Debugging Cordova web apps on multiple Android devices using adb and logcat filtering

List android devices attached:
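
adb devices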

Because I have two android devices attached I want to be able to target both from command line.

Once Cordova has installed the app on device you can use logcat to see the logs in Terminal. However with multiple devices you want to use adb -s to target a device and then use logcat -s to filter out all the noise! With the Postcard app we are using jxcore to run Node.js on mobile so I used ‘jxcore-log:*’ as my filter.
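
For example, targeting one device by its serial number (taken from the adb devices output above) and filtering on the jxcore tag:

adb -s <first-device-serial> logcat -s "jxcore-log:*"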

Then, in another Terminal tab or split window in iTerm, run the same command with the second device’s serial number.

Now that’s much cleaner and stuff doesn’t scroll faster than you can actually read! 😉

Assuming the Android device is already developer enabled you can also inspect the web view and javascript console logs in Chrome: chrome://inspect/#devices

Debugging Cordova web apps on multiple iOS devices using iOS Console

With iOS all you need to do is attach your devices and download iOS Console which is a handy freeware app to view iOS logs on a Mac with filtering. With our iOS Postcard app I have set the filter to ‘jxcore’.

iOS Console doesn’t need Xcode running and the logs are a lot cleaner than the default Console app.

Also you can inspect the web view and javascript console in Safari once the app is running. In Safari browser go to:
Safari > Preferences > Advanced and turn on “Show Develop menu in menu bar” then select:
Safari > Develop > Device Name

Note: If you are using mobile Safari to test your web app instead of Cordova then you will have to enable “Web Inspector” on the iOS device under Settings > Safari > Advanced

Faster Cordova web app deployment with hotwire script

Building for iOS can be very time consuming. Every time you make changes to a Cordova web app you need to do a cordova build to update the app project. Then you have to go into Xcode to debug on device. But if you don’t need to make changes to native code and you only need to update web elements like HTML, Javascript, image and media files then you can save time by just updating those bits.

I’ve made a Hotwire IPA bash script to replace the ‘www’ web app folder with the updated directory. All you need to do is create an ‘*.ipa’ archive and the hotwire script can quickly update it with all web app changes and deploy to device (without need to jailbreak).

Example usage:

sh hotwire-ipa.sh -f ~/Desktop/app.ipa -d "www" -p ~/Cordova/app/www -b ~/Cordova/app/platforms/ios/www -i

where:
-f is the path to *.ipa archive
-d is the dir to delete inside app
-p is the dir to copy in place
-b is the dir with Cordova build plugins and scripts

Setup and instructions for deploying iOS app using hotwire-ipa over on GitHub.

Time results for iOS Cordova app:

3m 09s – Each time you update web files you need to execute cordova build ios to stage the updates.
0m 38s – Open in Xcode
8m 10s – Debug from Xcode

Total: 11m 57s

To run the script we need to first create an Archive and export it as an *.ipa archive in Xcode. Once this is done then future updates can be pushed using the script.
2m 53s – Create Archive
2m 13s – Export as .ipa
4m 39s – Deploy to device using hotwire-ipa script with -i switch to install as *.ipa instead of *.app.

Total: 8m 45s (11m 54s if you include initial cordova build ios)

That’s 3 minutes 12 seconds saved the first time if you have already done cordova build ios just to compile the native code, then 7 minutes 18 seconds saved to deploy repeated web app updates.

All times recorded using Postcard web app (using ‘Story_0’ branch) on MacBook 1.2 GHz Intel Core M

Azure Mobile Services for Unity3d

State of play

If you’ve followed my previous Unity3D Azure tutorials you’ll know I’ve covered two well known Unity Azure plugins – Prime31 and Bitrave. Bitrave had better multi-platform support, however it required the ‘JSON.NET’ paid asset to support iOS and Android, and then there were issues with the iOS AOT compiler. Because of this I decided to start a new Azure Mobile Services library for Unity3d to support multiple platforms – iOS, Android and Windows – without the need for paid plugins.

Using Azure Mobile Services in Unity3d

You can drop the Unity3dAzure library into your existing Unity project or try out the demo project to get started.

Getting started

  1. Download the Unity3d Azure demo project or use git to clone the project:
  2. Create a Mobile Service
    • Create ‘Highscores’ table for app data
    • Modify ‘Highscores’ table Insert node script to save userId
    • Create a custom API called ‘hello’
  3. In Unity3d open scene Scenes/HighscoresDemo.unity
    • Check the Demo UI script is attached to the Camera. (The script can be attached by dragging & dropping the Scripts/HighscoresDemoUI.cs script onto the Scene’s ‘Main Camera’ in the Hierarchy panel.)
  4. Paste Azure Mobile Service app’s connection strings into Unity Editor Inspector fields (or else directly into script Scripts/HighscoresDemoUI.cs)
    • Mobile Service URL
    • Mobile Service Application Key
  5. If you want to save the score with a userId then create a Facebook app
    • Fill in Azure Mobile Service’s Identity > Facebook settings (App Id & App Secret)
    • Paste a Facebook user access token into the Unity Editor Inspector field (or else directly into Scripts/HighscoresDemoUI.cs)
  6. Play in Unity Editor

Credits

Special thanks to Jason Fox and Bret Bentzinger who put together the UnityRestClient library using the JsonFX plugin.