Once the essential concepts and art style of the prototype were well on their way, we decided to run some tech tests that would further influence the design and give us valuable feedback on the limitations of the system. The first test was obvious - testing the Vuforia plugin in conjunction with Unity.
Test #1 - Image Target AR test
Using a simple Image Target and a quick 3D type asset spelling out the word "Air", as shown below, I tested Vuforia's AR capabilities. This worked quite well - image recognition on a Samsung S7 phone was instant, and the 3D model held up well even while moving the phone around.
Find below a small video of the example working -
Test #2 - Model Target AR test
Vuforia also offers Model Targets, which enable physical objects to be recognized and tracked using a digital 3D model of the object. This opened up the possibility of a more tangible puzzle product, which could then be augmented to enhance the learning experience. We wanted to test this quickly to see whether it was a direction our prototype should take. I have plenty of experience with 3D printing, so building out the letters would not be a challenge.
It's a two-step process: first, scan the 3D object using the Vuforia Scanner app, then use the result of the 3D scan as a Model Target in Unity.
Step 1 - Scanning the object. A small sculpture of a traditional Dutch house was chosen as the model. It had enough surface detail to be a suitable target - the scanner detected around 300 unique points.
Step 2 - Augmenting the 3D model.
Again, it was impressive how quickly the Samsung S7 detected the 3D object and immediately overlaid the 3D assets on the scene. However, as you can tell, the result was slightly jittery. Thinking about our project a little more deeply, I concluded that it is better to stick with an Image Target AR solution. The reason is that model detection depends heavily not only on the shape of the 3D object but also on the surface information present on it. In our case, we were planning on either 3D printing the letters or having them hand-crafted in wood, so they would essentially be blank, with no surface information, making detection harder. We would also run into the issue that some 3D letters are a lot simpler than others - geometrically speaking, the letter "I" would yield significantly fewer points than the letter "C", making the experience inconsistent. Image Targets seem to be the way to go!
Test #3 - Core interaction test
With this newfound technical direction, and using the most recent concept for the experience, we mocked up an interaction for the AR puzzle. We will detail the entire user flow in Sketch as the next step. The core interaction of the puzzle is that the letter ("A" in this case) has been split into smaller parts (think 3D jigsaw puzzle) and scattered all around the Image Target. Touching each part of the "A" triggers an animation, and that part finds its way towards completing the 3D letter "A". Touch all the pieces to complete the letter. There are other fun elements mixed into the scene - some animate, some don't - and you have to tap the correct ones to complete the letter.
In the tech test below, we used three pieces of the "A" and animated two of them in Maya, then imported those animations into Unity3D and set up the Vuforia Image Target. A simple C# script was then used to cast a ray so that when a piece is "touched", its associated animation is triggered and an audio effect plays.
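The actual script isn't included in this post, but a minimal sketch of the raycast approach could look like the following. The class name `PieceTouchHandler`, the Animator trigger name "Assemble", and the assumption that each piece carries a Collider, an Animator, and an AudioSource are all illustrative choices, not the project's actual code.

```csharp
using UnityEngine;

// Hypothetical sketch: detect a tap, raycast from the AR camera through the
// touch point, and fire the tapped piece's animation and sound effect.
public class PieceTouchHandler : MonoBehaviour
{
    void Update()
    {
        // Only react on the first frame of a new touch.
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Cast a ray from the camera through the screen-space touch position.
        Ray ray = Camera.main.ScreenPointToRay(touch.position);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            // Trigger the piece's "fly into place" animation, if it has one.
            // "Assemble" is an assumed trigger parameter name.
            Animator animator = hit.collider.GetComponent<Animator>();
            if (animator != null) animator.SetTrigger("Assemble");

            // Play the piece's audio effect, if one is attached.
            AudioSource sound = hit.collider.GetComponent<AudioSource>();
            if (sound != null) sound.Play();
        }
    }
}
```

Attaching this to a single scene object (rather than to each piece) keeps the raycast logic in one place; each puzzle piece only needs a Collider for `Physics.Raycast` to be able to hit it.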
Result of the interaction test.