GSoC 2017 - Week 4
Published:
This blog is dedicated to the fourth week of Google Summer of Code (i.e., June 24 - July 1). This week concentrated on cross-testing and analysis of the API, along with some challenging tests.
Due to the paucity of sources dealing with Unity and Android together, getting the two to interact posed many problems. The initial design imported the Unity-Android augmented-object rendering project as a library into the face detection application. However, this caused the Unity augmented object to be re-initialized over and over, which defeated the purpose of the interaction since all the local variables were reset each time.
Adding to this, communication from Android to Unity using the method UnityPlayer.UnitySendMessage(“object”, “method”, “parameters”) was not giving any results: no message reached the Unity project, so the augmented object’s position could not be altered. I therefore reversed the direction and started the communication from Unity to Android instead. This interaction was achieved using AndroidJavaClass and AndroidJavaObject, which bridged C# and Java. The Android methods were accessed using obj_Activity.Call(“method name”, args).
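To make the Unity-to-Android direction concrete, here is a minimal sketch of what the Android side of such a bridge could look like. The class and method names are my own illustration, not the project's actual code; the Unity C# side would obtain the current activity via AndroidJavaClass("com.unity3d.player.UnityPlayer"), get the "currentActivity" AndroidJavaObject, and invoke the method with Call.

```java
// Hypothetical Android side of the Unity-to-Android bridge.
// The C# side (illustrative) would do roughly:
//   var player = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
//   var activity = player.GetStatic<AndroidJavaObject>("currentActivity");
//   float[] offset = activity.Call<float[]>("getHeadOffset");
public class HeadTrackingBridge {
    // Latest head position reported by the face detector
    // (normalized screen coordinates).
    private float headX = 0f, headY = 0f;

    // Called by the face detection code whenever a new head position arrives.
    public void onHeadMoved(float x, float y) {
        headX = x;
        headY = y;
    }

    // Called from Unity via AndroidJavaObject.Call so the C# script
    // can reposition the augmented object.
    public float[] getHeadOffset() {
        return new float[] { headX, headY };
    }
}
```

The key point is that the Java side only needs to expose plain public methods; all the marshalling between C# and Java is handled by AndroidJavaObject.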
However, the re-initialization issue still remained. It was addressed by removing the .aar library import entirely and instead adding the face detection code directly into the Android project exported from the Unity augmented-object rendering project.
With these changes, I was able to establish the interaction between Android and Unity. After writing the script that drives the augmented object from head movement, I was also able to alter the position and scale of the augmented object depending on the user’s head movement.
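The head-to-object mapping can be sketched as below. The gains, the inverse-parallax convention (object moves opposite to the head), and the distance-based scaling are all assumptions for illustration, not the project's exact values.

```java
// Illustrative mapping from tracked head movement to the augmented
// object's position and scale. Constants are made up for the sketch.
public class HeadToTransform {
    static final float POSITION_GAIN = 2.0f;  // how strongly the object shifts
    static final float BASE_DISTANCE = 0.5f;  // head distance giving scale 1.0

    // headX/headY: normalized head offset from screen center, in [-1, 1].
    // headZ: estimated head distance from the screen (e.g. from face size).
    // Returns {objX, objY, scale}: the object moves opposite the head to
    // fake parallax, and grows as the user leans in.
    public static float[] map(float headX, float headY, float headZ) {
        float objX = -POSITION_GAIN * headX;
        float objY = -POSITION_GAIN * headY;
        float scale = BASE_DISTANCE / Math.max(headZ, 1e-3f);
        return new float[] { objX, objY, scale };
    }
}
```

In the actual application this mapping would live in the Unity C# script, applied to the object's Transform each frame using the offsets fetched over the bridge.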
The next task is to achieve user-perspective rendering (UPR) based on head tracking. To figure out the best possible way to approach this task, I read the following papers to get an idea of the various approaches:
Approximated User-Perspective Rendering in Tablet-Based Augmented Reality
This paper uses homography to achieve UPR. The approach constructs a user-perspective projection and uses a homography to remove the inconsistency between device-perspective rendering (DPR) and UPR. However, the homography itself has limitations: the scene is assumed to be planar, and some geometric distortions are introduced.
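The core operation of that approximation is a single planar warp: a 3x3 homography H maps device-perspective image coordinates to user-perspective ones. A minimal sketch (assuming a row-major H, not taken from the paper's implementation):

```java
// Minimal sketch of applying a homography: the planar warp at the heart
// of homography-based UPR approximations.
public class HomographyDemo {
    // Applies H (row-major, 9 entries) to the 2D point (x, y) in
    // homogeneous coordinates and de-homogenizes the result.
    public static double[] warp(double[] h, double x, double y) {
        double u = h[0] * x + h[1] * y + h[2];
        double v = h[3] * x + h[4] * y + h[5];
        double w = h[6] * x + h[7] * y + h[8];
        return new double[] { u / w, v / w };
    }
}
```

The planarity assumption shows up directly here: a single H is exact only if every imaged point lies on one plane, which is why non-planar scene content picks up geometric distortion.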
A Hand-Held AR Magic Lens with User-Perspective Rendering
This paper is a survey of experiments and concludes that UPR is preferred over DPR for selection tasks. It also found that tablets fared better with UPR than mobile phones. The implementation is based on Kinect Fusion, which is outside the scope of our project.
Evaluating Dual-view Perceptual Issues in Handheld Augmented Reality: Device vs. User Perspective Rendering
This paper also analyzes DPR vs. UPR. However, it makes very strong assumptions for UPR, namely a fixed, perpendicular viewpoint. On the other hand, it presents the UPR issues very well.
Towards User Perspective Augmented Reality for Public Displays
This paper provides ideas about tracking the face and the screen. It also gives insights on calculating intrinsic and extrinsic camera parameters, and on estimating the relative poses.
Adaptive User Perspective Rendering for Handheld Augmented Reality
This paper combines the ideas of the above two papers and introduces a threshold based on optical flow. This approach seems optimal and promising, but it is approximate and may introduce some inconsistencies.
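The optical-flow threshold can be pictured as a simple switch: when observed image motion gets too large for the approximation to hold, fall back from user-perspective to device-perspective rendering. The threshold value and the mean-magnitude criterion below are my own illustration, not the paper's exact scheme.

```java
// Hypothetical sketch of an optical-flow-based UPR/DPR switch.
public class UprSwitch {
    static final double FLOW_THRESHOLD = 5.0; // pixels/frame, assumed value

    // flowX/flowY: per-pixel optical flow components for one frame.
    // Returns true if motion is calm enough to keep UPR enabled.
    public static boolean keepUpr(double[] flowX, double[] flowY) {
        double sum = 0;
        for (int i = 0; i < flowX.length; i++) {
            sum += Math.hypot(flowX[i], flowY[i]);
        }
        return (sum / flowX.length) < FLOW_THRESHOLD;
    }
}
```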
User-Perspective Augmented Reality Magic Lens From Gradients
This paper uses gradients to achieve 3D reconstruction; the approach is binocular and therefore outside our scope. However, the paper provides useful approaches for calibration, finding relative poses, and face tracking.
A Perspective Geometry Approach to User-Perspective Rendering in Hand-Held Video See-Through Augmented Reality
This paper provides another approach based on a dynamic view and frustum. It claims to remove the registration inaccuracies introduced by the previous approaches.
User-Perspective Rendering for Handheld Applications
This paper provides calibration methods, viewpoint estimation using off-axis projections, and rendering methods for virtual objects. It is highly in line with our task at hand, but it only deals with virtual objects and not with rendering the scene.
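The off-axis projection idea can be sketched as follows: instead of a symmetric frustum centered on the screen, the frustum is skewed toward the tracked eye position. This uses the standard OpenGL-style asymmetric frustum matrix; the screen model (screen in the z = 0 plane, eye at positive z) and the row-major layout are assumptions of this sketch, not the paper's implementation.

```java
// Sketch of an off-axis ("generalized perspective") projection for UPR.
public class OffAxisProjection {
    // eye{X,Y,Z}: tracked eye position relative to the screen center,
    // with the screen in the z = 0 plane and eyeZ > 0.
    // halfW/halfH: physical half-extents of the screen.
    // Returns a 4x4 row-major OpenGL-style perspective matrix.
    public static double[][] compute(double eyeX, double eyeY, double eyeZ,
                                     double halfW, double halfH,
                                     double near, double far) {
        // Project the screen edges onto the near plane as seen from the eye.
        double l = (-halfW - eyeX) * near / eyeZ;
        double r = ( halfW - eyeX) * near / eyeZ;
        double b = (-halfH - eyeY) * near / eyeZ;
        double t = ( halfH - eyeY) * near / eyeZ;
        // Standard asymmetric frustum matrix (as in glFrustum).
        return new double[][] {
            { 2 * near / (r - l), 0, (r + l) / (r - l), 0 },
            { 0, 2 * near / (t - b), (t + b) / (t - b), 0 },
            { 0, 0, -(far + near) / (far - near), -2 * far * near / (far - near) },
            { 0, 0, -1, 0 }
        };
    }
}
```

With the eye centered, the skew terms (row 0/1, column 2) vanish and this reduces to an ordinary symmetric perspective; as the head moves off-center, the frustum tilts so virtual objects stay registered to the user's viewpoint.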
A survey of diminished reality: Techniques for visually concealing, eliminating, and seeing through real objects
This paper explains the concept of diminished reality and surveys the related work.
The best approach still needs to be discussed and picked.
I was also thinking of using Novel View Synthesis (NVS) to counter the UPR problems. This method will only be applicable to fixed scenes without much complexity, but it can prove very effective at removing inconsistencies and misalignments; even regeneration of the missing FOV is possible. The approach and its viability need to be discussed further. To get an idea of NVS applied to a scene rather than just an object, I read the following paper:
Novel View Synthesis for Large-scale Scene using Adversarial Loss
This paper provides insights into applying NVS to a complicated scene, using a GAN to fill in the gaps and de-blur.