Professor and researcher Michael McGuffin explains the educational potential of augmented and virtual reality.
Michael McGuffin is a professor and researcher in the Department of Software Engineering and IT at École de technologie supérieure (ÉTS), an engineering school in Montreal. He earned a Bachelor of Applied Science in Computer Engineering, with a specialization in Software Engineering, from the University of Waterloo, as well as a Ph.D. in Computer Science from the University of Toronto. His research revolves around user interfaces and data visualization. Lately, alongside his master's and doctoral students who are interested in augmented and virtual reality, he has been pursuing exciting research in this field.
Can you explain the difference between virtual reality and augmented reality?
The definitions vary, but generally, virtual reality completely replaces the visible world around us with synthetic images projected before our eyes. Augmented reality, in contrast, leaves a large part of the physical world visible while overlaying virtual elements on it. Both can be experienced through headsets, and augmented reality can also be experienced with a simple smartphone (as is the case with Pokémon GO).
What’s the educational potential of these two technologies for businesses and consumers?
Any educational material currently delivered via a screen – be it TV, online courses, etc. – could be conveyed more imaginatively with virtual and augmented reality. These technologies allow for a stereoscopic 3D experience, which means each eye sees a different image. The display also responds to head movements, letting us take on different points of view simply by moving our bodies. In addition, virtual reality can immerse the user in a 3D world. These technologies are also interactive and can be multi-user.
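The stereoscopic effect described above comes from rendering the scene from two slightly different camera positions, one per eye. As a minimal sketch (not from any specific VR SDK; the function name, head pose inputs, and the interpupillary-distance value are illustrative assumptions), each eye's camera can be placed by offsetting the tracked head position along the head's right-direction vector:

```python
import math

IPD = 0.063  # typical interpupillary distance in metres (assumed value)

def eye_positions(head_pos, right_dir, ipd=IPD):
    """Return (left_eye, right_eye) camera positions, each offset from
    the head position by half the IPD along the head's right vector."""
    # Normalize the head's right-direction vector.
    norm = math.sqrt(sum(c * c for c in right_dir))
    right = tuple(c / norm for c in right_dir)
    half = ipd / 2.0
    left_eye = tuple(p - half * r for p, r in zip(head_pos, right))
    right_eye = tuple(p + half * r for p, r in zip(head_pos, right))
    return left_eye, right_eye

# Head at 1.7 m height, facing forward (right vector along +x):
left, right = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
print(left)   # (-0.0315, 1.7, 0.0)
print(right)  # (0.0315, 1.7, 0.0)
```

Rendering the scene once from each of these positions, and re-reading the head pose every frame, is what makes the image respond to head movements.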
So think of historical re-enactments, tropical forest tours, whale watching under the sea, physical simulations ranging from inside atoms to entire galaxies, architectural modeling, understanding car engines, taking human anatomy and physiology courses, etc.
Do you have any concrete examples of how they’ve been used?
There are many, but here’s a non-exhaustive list of examples:
• An augmented reality auto owner’s manual for smartphones;
• Augmented reality for anatomy lessons;
• A recent Facebook demo on possible interactions within virtual reality;
• 3D modeling with an augmented reality headset featuring an integrated smartphone – a project put together by my student Alexander Millette and me – pay special attention from 3:48 to 4:23;
• Other 3D modeling examples:
What challenges remain in this area?
1. Improve VR headset performance in terms of resolution, latency, and field of view. Currently, Microsoft’s HoloLens has a field of view of only about 32 degrees. The VrVana Totem (made by a Montreal-based company) has an excellent field of view but insufficient resolution.
2. Develop tools to facilitate content creation for these headsets in order to democratize these technologies.
Here are two longer videos that you can watch for even more context and examples:
Find all of the videos featuring Michael McGuffin’s work on his website.