Today was the day of the critique. I presented my project using a similar format to Friday's, but added a few new points to my presentation, e.g. establishing a use case and clearly explaining each point.
I went through my inspiration for the project, what I looked at, and how I came to a resolution for my design concept, showing examples of other projects with similar ideas relating to Palmyra and the use of VR that inspired me.
Although I didn't get much in terms of critique, I had my own personal opinions of the design. I felt the prototype I created was still nothing more than a concept; it didn't have much design to it beyond being a "model in a black void". What I think I failed to explain during the critique was why I originally scoped the outcome for viewing on mobile. One of the points brought up on Friday was that the concept is definitely larger than just mobile and would be more appropriate for a VR headset, where you could fully walk around in the experience. However, given my current technical ability I scoped it for mobile: although I don't understand Unity that well, mobile was more feasible for me to design for.
Further, the live prototype I showed during the critique was intended to show the interaction a user could experience, albeit lacking much in terms of actual interaction beyond moving around the model.
Some of the interactions I intended to display:
Being able to look at particular sites and gain more information on them, e.g.
Potentially knowing the scale and diameter of the space you are in.
Some sections on its history and the damage caused by ISIS.
Some of these interactions may seem technically difficult or even fantasy, but I feel they would suit my concept, as you could gain additional information you wouldn't get from simply being there. However, these interactions could also lessen the experience and add unwanted clutter, so they should serve as an option rather than a default.
As brought up in the presentation before, I didn't really have a concrete use case for the project I was working on. I had a clear concept that I think is well developed and backed up by research, but not how or why someone would use it.
Chernobyl VR, which I looked at before, brings the experience of the irradiated town of Chernobyl to you as you explore the ruined, desolate town. Its aim is to serve as a virtual "tour" of the site; its creators see VR as a fast-developing tool for addressing important social issues. These ideals resonate with my own project, as Palmyra has a lot of context around it, not just historically but politically, with some of the structures and artefacts having been destroyed, damaging the site. From this, my use case would naturally be educational: to inform people about the history of the site and the events surrounding it.
I went back through the three.js documentation to find a workaround, and changed the format of the model I was loading to an object format. I also looked at other things I could include, such as how to create clickable elements to imitate touch for the interaction.
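For reference, the usual way to make elements clickable in three.js is raycasting from the click position, and the first step is converting the click's pixel coordinates into the normalised device coordinates (-1 to 1) that the raycaster expects. A minimal sketch of that conversion in plain JavaScript (the function name and example values are my own illustration, not code from my prototype):

```javascript
// Convert a click/touch position in pixels to normalised device coordinates
// (NDC), the -1..1 range that three.js's Raycaster.setFromCamera() expects.
// Pixel y grows downwards but NDC y grows upwards, so the y axis is flipped.
function clickToNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1,
  };
}

// Example: a click in the exact centre of an 800x600 canvas maps to (0, 0).
console.log(clickToNDC(400, 300, 800, 600)); // { x: 0, y: 0 }
```

In three.js these coordinates would then go to `raycaster.setFromCamera(ndc, camera)` followed by `raycaster.intersectObjects(scene.children)` to find the part of the model under the touch.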
When loading the model I found some issues with it being deformed and some faces being removed. To fix this I went back into Blender to edit the model further so it looked correct.
I merged faces and tweaked the model until it no longer deformed and worked correctly.
As I found working on the Unity prototype too technically out of reach (I didn't know how I would go about coding and editing the script), I went back to my previous attempt.
Further, the web example I created was intended to show the interaction rather than run on a mobile device; at the time I didn't know about Vuforia to test it in Unity. I went back to the original test using three.js to try to figure out the issue. What I found was that I had been too fixated on one specific format, JSON, rather than trying different formats, and had dropped the prototype. Although I couldn't get a mobile version working and tested, I gained some insight that it was possible to do, albeit technically complex and beyond my current capabilities.
First initial prototype of the concept using Unity: while researching how I would create markerless AR, I found it technically challenging. I followed an example tutorial on how to go about this.
The tutorial demonstrates the illusion of walking around a model in 3D space using the gyroscope, which works out the distance between you and the model.
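As I understand it, the "distance between you and the model" part is just the straight-line distance between two 3D positions: the device's estimated position and the model's anchor. A plain JavaScript sketch of that calculation (the names are my own; the tutorial's actual script was in C#):

```javascript
// Euclidean distance between two points in 3D space, e.g. the estimated
// device position and the position the model is anchored at.
function distance3D(a, b) {
  const dx = a.x - b.x;
  const dy = a.y - b.y;
  const dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Example: a device 3 units away on x and 4 on z is 5 units from the model.
console.log(distance3D({ x: 3, y: 0, z: 4 }, { x: 0, y: 0, z: 0 })); // 5
```

The hard part in practice isn't this maths but estimating the device's position at all: a gyroscope only reports rotation, which is partly why markerless tracking proved so difficult.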
However, when I live tested it, the C# script wasn't working accordingly, so the model remained fixed rather than letting me walk around it. Porting it was also difficult as I didn't have an Android device I could build onto. Moreover, as I haven't learnt C#, I wasn't able to make any alterations to the script to improve upon it. However, I did another demo using a marker and managed to get that to work.
I think one of the problems with the initial test was that, in attempting markerless AR with Vuforia, I was trying to move away from its intended use rather than working with markers in the design. However, referring back to the concept, the idea was that you could open it up anywhere and immerse yourself in this location.
While working on the project I had trouble finding a fix for the error with the loader. As I was working with three.js, someone questioned why I was doing this for the web when the concept was mobile based. I did some research into how to create the concept for mobile devices, and someone suggested using Unity to develop it.
Vuforia enables augmented reality and is compatible with Unity for creating virtual reality games and experiences. I did some research into the software and how I would go about modifying my design for mobile.
Some of my inspiration and the ideas I had for my concept came from research into other AR experiences designed for VR headsets.
Most of these examples use VR or are converted for web use. However, my concept was intended for mobile, so you could open it up anywhere and experience this place in the space you are in. This links to popular examples of augmented reality like Pokemon GO, in which models are mapped over the space around you using the camera.
As Palmyra is the main subject of my project, I did some research into services and APIs that cover heritage sites and protected areas, and into information on the site itself.
The Protected Planet API was an API I was considering using, as Palmyra was really just an example to base my design on: there is a lot of information on it historically and in recent news. The broader concept would be a learning environment bringing a site to your location through AR.
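To give a sense of how the API might slot in, here is a sketch of building a search request for it. The base URL, the `/v3/protected_areas/search` path, and the `token`/`q` parameter names are assumptions from memory and may have changed, so treat this as an illustration rather than working integration code:

```javascript
// Build a search URL for the Protected Planet API.
// NOTE: the host, the "/v3/protected_areas/search" path and the "q"/"token"
// parameter names are assumptions -- check the current API documentation.
function buildSearchURL(query, token) {
  const params = new URLSearchParams({ q: query, token: token });
  return `https://api.protectedplanet.net/v3/protected_areas/search?${params}`;
}

console.log(buildSearchURL("Palmyra", "MY_API_TOKEN"));
// https://api.protectedplanet.net/v3/protected_areas/search?q=Palmyra&token=MY_API_TOKEN
```

The response data (site names, designations, boundaries) could then feed the informational overlays described in the interaction ideas above.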
Based on the research into APIs we did on Monday, I decided to gather the assets required to develop my design. Using the Sketchfab models of the Palmyra locations that were destroyed, I downloaded the Blender files.
I wanted to try out an initial prototype using three.js, which I had previously looked into with other projects. For this I had to convert the model into JSON using a three.js converter I found while researching how to go about it.
However, when following the guide to convert it, even after compiling it successfully and putting it into the code, there were problems: the JSON wasn't compatible or the model wouldn't load correctly.
While researching this error I found other people having similar problems, reporting various issues with the exporter. I tried troubleshooting with the default Blender cube, converting it with the exporter. I then inserted that back into my code to see whether the issue was with the JSON rather than the exporter, and checked that it was correctly formatted. After confirming the JSON was correct, I concluded it was down to three.js not accepting the format, possibly because it came from an older version of the exporter.
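The "correctly formatted" check can be done with a plain `JSON.parse`. A small sketch of the kind of check I mean (the helper name is my own):

```javascript
// Returns true if the string is syntactically valid JSON. JSON.parse throws
// a SyntaxError on malformed input, which we catch and report as false.
function isValidJSON(text) {
  try {
    JSON.parse(text);
    return true;
  } catch (err) {
    return false;
  }
}

console.log(isValidJSON('{"vertices": [0, 1, 2]}')); // true
console.log(isValidJSON('{vertices: [0, 1, 2]}'));   // false (unquoted key)
```

Note that syntactically valid JSON still doesn't guarantee the exporter's schema matches what the three.js loader expects, which is exactly the version mismatch I ran into here.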