Adding Artificial Intelligence to your forms

Single-layer feedforward artificial neural network – By Akritasa [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)]

Smap Forms are a great way of implementing what used to be known as “Expert Systems”.  While you are collecting data with a form, questions can be displayed or hidden based on their perceived “relevance”.  In the same way, recommendations and feedback can be shown based on the answers that have been provided: if A and B, then show C.
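For example, in an XLSForm a rule like this can be written as a “relevant” expression.  This is a minimal sketch, assuming a yes_no choice list and invented question names:

type                name   label                   relevant
select_one yes_no   a      Do you have A?
select_one yes_no   b      Do you have B?
note                c      You should consider C   ${a} = 'yes' and ${b} = 'yes'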

One reason this works so well with Smap is that the expert advice can be embedded easily in a system whose primary function is the humdrum capturing of information.  The user does not need to sit down for a dedicated session of receiving expert advice.  Another reason is that a domain expert can create these forms without needing input from a programmer.

However, these rules are a long way from Artificial Intelligence.  Once your form has collected answers to twenty or more questions as multiple choice, text, audio, images and video, there may be critical information hidden in that data that you want to feed back to the user immediately but cannot extract with a few “if” statements.

Version 5.5 of fieldTask (available from the Google Play store) adds the ability to call a server, passing it data you have collected, and to get back a response that can be used to guide the next steps.  You need to be filling out the form online, of course, and the service you call need not be an Artificial Intelligence web service; it may just look up reference data.  However, we see the use of AI services in this way as particularly compelling.

You can try this out in fieldTask by adding a “calculate” question that calls the lookup_image_labels() function.  In an XLSForm your questions might look like this:

type        name            label / calculation
image       scene           Take a photo
calculate   scene_objects   lookup_image_labels(${scene})
note        show_objects    Objects in photo are: ${scene_objects}

And this is what the above form looks like on the phone.

Firstly, take the photo

And here are the objects that were identified

Well this is not actually very useful!

The AI service called by lookup_image_labels() is AWS Rekognition, which can identify things in photographs such as cars, people, computers and desks.  Even that task is done far from perfectly: it did not identify the bowl, unwashed or otherwise.  And it will probably identify nothing that the person filling in the form could not identify for themselves.  This information would be useful on the server for searching images, but Smap can already generate these labels automatically once the form has been submitted.  It hardly seems necessary to make the person completing the form wait for these labels to be returned from the server.
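For context, the server-side call involved looks roughly like the following.  This is a minimal boto3 sketch of Rekognition label detection, not Smap's actual lookup_image_labels() implementation:

# Minimal sketch of server-side label detection with AWS Rekognition (boto3).
# Illustrative only; this is not Smap's actual lookup_image_labels() code.
import boto3

def detect_labels(image_path, max_labels=10, min_confidence=75):
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    response = client.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=max_labels,
        MinConfidence=min_confidence,
    )
    # Return just the label names, e.g. ["Person", "Desk", "Computer"]
    return [label["Name"] for label in response["Labels"]]

print(", ".join(detect_labels("scene.jpg")))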

So why do it?

Well, the Rekognition service is really just an example.  We can add calls to your custom AI engines, ones that identify patterns in the data or images that may be critical in deciding what advice to provide or what further information needs to be collected.
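As an illustration only, such a custom engine could be exposed as a small web service that accepts an image and returns labels or advice.  The sketch below is hypothetical (a Flask endpoint with a classify_image() stub standing in for your own model) and is not part of Smap:

# Hypothetical sketch of a custom AI endpoint that a server-side lookup could call.
# classify_image() is a stand-in for your own model; this is not part of Smap.
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify_image(image_bytes):
    # Replace with your own model inference.
    return ["example_label"]

@app.route("/labels", methods=["POST"])
def labels():
    image_bytes = request.files["image"].read()
    return jsonify({"labels": classify_image(image_bytes)})

if __name__ == "__main__":
    app.run(port=5000)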

Smap can be used to collect the data required to train an AI engine, and now it can also use the decisions from that engine while collecting more data, creating a positive feedback loop that adds value to your processes.
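For example, assuming the labels come back as a plain text string, a follow-up question in the earlier form could be shown only when a particular label is present.  The question name and the 'Person' label below are purely illustrative:

type   name             label                              relevant
text   people_details   Describe the people in the photo   contains(${scene_objects}, 'Person')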

 
