MaKami College recently had another research paper published on Artificial Intelligence and Mixed Reality.
The paper explores a new mixed reality system called Video2MR, which automatically extracts motion data from 2D videos to generate immersive mixed reality 3D instructions.
“We are very proud of this project as it represents our first exploration into combining Artificial Intelligence with Mixed Reality,” says MaKami’s mixed reality developer and researcher Mehrad Faridan. “It’s also our largest collaboration to date, with researchers from Carnegie Mellon University, Adobe Inc. and other top institutions contributing to the work.”
The technology compares the user’s movements with the instructor’s movements, placing the student in the instructor’s position and animating the student’s 3D avatar based on their own movements.
Mixed reality instructions have great potential for physical training and educational settings, offering an immersive and interactive learning experience that is not possible with traditional 2D video instructions. In fact, previously published papers have shown that the ability to see a 3D avatar can improve a student’s understanding of postures, helping students learn movements and notice differences more effectively than imitating a 2D screen alone.
“With this technology, we can leverage a vast variety of professional videos already available online to create immersive instructional experiences for various physical activities, such as yoga, dancing, exercise, and many other sports,” says Faridan. “The technology could be used to increase the scalability, availability and diversity of mixed reality 3D instructions.”
The paper has appeared in several research venues, most recently at IUI (the ACM Conference on Intelligent User Interfaces). Read the full research paper.
Faridan also published a research paper last year on the use of Augmented Reality (AR) and Virtual Reality (VR) in more tactile classroom settings.