Augmented Reality and User Perspective Rendering

Until last week, the AR application we developed consisted of 2D face detection and AR object rendering running in parallel. However, there was no interaction between the two separate applications because of communication issues between Unity and Android.


New Version of Application

Due to the paucity of sources dealing with Unity–Android interoperation, getting the two sides to interact caused many problems. The initial design imported the Unity–Android augmented-object rendering project as a library into the face detection application. However, this approach caused the Unity augmented object to be re-initialized over and over, which made interaction pointless, since all the local variables were reset on every re-initialization.

Adding to this, communication from Android to Unity using UnityPlayer.UnitySendMessage("object", "method", "parameters") was not giving any results: no message reached the Unity project, so the augmented object's position could not be altered. I therefore reversed the direction and communicated the other way, i.e., from Unity to Android. This interaction was achieved using AndroidJavaClass and AndroidJavaObject, which bridge C# and Java; the Android methods were accessed using objActivity.Call("method name", args).
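The Android-side counterpart of such a call is simply a public method whose name and parameter types match what the C# side passes to Call. A minimal sketch of that receiving end, with hypothetical names (onFaceMoved and latestOffset are illustrations, not the project's actual methods; in the real app this logic would sit on the exported UnityPlayerActivity subclass):

```java
// Sketch of an Android-side receiver for a Unity call like
// objActivity.Call("onFaceMoved", dx, dy). Written as a plain class so the
// bridge logic is visible on its own, without the Android framework.
public class FaceBridge {
    private float lastDx;
    private float lastDy;

    // Invoked from Unity C#. The method name and parameter types must match
    // the Call arguments exactly, or the invocation fails at runtime.
    public void onFaceMoved(float dx, float dy) {
        lastDx = dx;
        lastDy = dy;
    }

    // Latest head offset, polled by whatever renders the augmented object.
    public float[] latestOffset() {
        return new float[] { lastDx, lastDy };
    }
}
```

Because the lookup is by name and signature at runtime, a mismatch fails silently from the caller's perspective, which is one reason this kind of bridge is hard to debug.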

However, the re-initialization issue remained. It was addressed by removing the .aar library import entirely and instead adding the face detection code directly into the Android project exported from the Unity augmented-object rendering project.

Through the above process, I was able to obtain working interaction between Android and Unity. After writing a script that drives the augmented object from the head movement, I was also able to alter the object's position and scale depending on the user's head movement.
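The update itself reduces to a small mapping from the tracked head pose to the object's transform. A sketch of one plausible mapping (the gain, the reference distance, and the distance-based scale rule are my assumptions for illustration, not the project's actual script):

```java
// Hypothetical head-movement-to-object mapping: the object is translated
// opposite to the head offset (simple parallax), and scaled by the ratio of
// a reference distance to the current head distance.
public class HeadToObject {
    static final float GAIN = 2.0f;          // translation sensitivity (assumed)
    static final float REF_DISTANCE = 40.0f; // reference head distance in cm (assumed)

    // Normalized head offset (x, y) -> object translation in world units.
    static float[] position(float headX, float headY) {
        return new float[] { -GAIN * headX, -GAIN * headY };
    }

    // Head distance in cm -> uniform object scale; moving closer enlarges it.
    static float scale(float headDistanceCm) {
        return REF_DISTANCE / headDistanceCm;
    }
}
```

For example, a head at half the reference distance doubles the object's scale, and a small rightward head offset shifts the object left, giving a rough parallax effect.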


User Perspective Rendering

The next task is to achieve user perspective rendering (UPR) based on tracked head movement. To figure out the best way to achieve this, I read the following papers to get an idea of the possible approaches:

The best approach still needs to be discussed and chosen.

Novel View Synthesis for UPR

I was also considering Novel View Synthesis (NVS) for countering UPR problems. This method is only applicable to fixed scenes without much complexity, but it can be very effective at removing inconsistencies and misalignments, and it can even regenerate the missing field of view. The approach and its viability need to be discussed further. To get an idea of NVS applied to a whole scene rather than just an object, I read the paper "Novel View Synthesis for Large-scale Scene using Adversarial Loss", which provides insights into using NVS on a complicated scene and uses a GAN to fill in missing regions and de-blur.