Includes environmental context mapping and real-world explorations to better understand user goals within the defined environment.
"3D virtual objects integrated into a 3D real environment in real time" [note]https://www.cs.unc.edu/~azuma/ARpresence.pdf[/note], [note]http://ieeexplore.ieee.org/document/6681863/[/note].
AR and VR sometimes get tossed in the same bucket, but AR is no more similar to VR than it is to flat interactions (web, apps, kiosks, etc.) in general.
Mixed reality interfaces provide something critical and intangible that neither VR nor flat interactions can provide easily — environmental context — innate and in real time. With virtual world-building, the context needs to be designed and built-in. With flat interactions, the context has to be borrowed or forced (remember design’s skeuomorphic phase).
The most usable applications of this tech are going to be ones that focus on the shifting problems of real environments, and how that affects people, nature, and other complex systems.
To define the environment, I looked at distance, direction, surface, and immersion potential. What does the environment consist of? What are its limitations? Where are its entry and exit points?
Trees
Rocks
Dirt
Trails
Streams
Hills
Bugs
Wildlife
Weather
Temperature
Time of day
Visibility
Terrain
Foggy
Rainy
Steep
Dark
Muddy
Overgrown
Dangerous
Sharp
Poisonous
Circumference
Distance between
Hazard level
Height
No clean, running water
Limited restrooms
No electricity
No heat
Limited shelter
Limited mobility
No roads
No cars
Roads
Trails
City parks
State parks
Private property
Public property
Much of the personal context is already defined from user research. We can dig a little deeper into it from the environmental side and find an insight or two, but I would save that for the next round of experience testing and continue to refine the relationship between the user and the environment.
Look into what brings people to natural environments, explore the cultural context of hammocking (holidays and seasons), and examine how it bridges group and solitary activity.
Part of the social context for an AR app includes how comfortable a person feels using it in public.
For my initial explorations, I did non-technical, real-world immersive walk-throughs of the target environment. I focused on how I was holding the phone, and made notes to log for future tests. Of note were how far someone held the phone from their body, where it sat above or below their line of sight, and how often they moved it up, down, and side to side in an attempt to get the “full picture” of a scene.
The following two explorations highlight initial findings.
Research Focus: Experience of looking through camera view and seeing a representational hammock in a natural setting.
Methodology: Paper prototype, real world environment.
Implementation: Holding a transparent sheet in front of the camera viewfinder while the camera is pointed at a group of trees.
This exploration was instrumental in understanding lighting changes, and in defining the mixed light matrix: the various lighting situations a person may find themselves in while using the app.
With the way sunlight filters through trees, you could easily find yourself in this common lighting situation in the mountains or forest: the sun in your eyes, but shade on your screen, while pointing at a mixed-light scene [note] For further reading, http://www8.cs.umu.se/education/examina/Rapporter/WajidAli.pdf discusses similar problems that mixed-light environments pose for tree detection[/note].
As you move around a grouping of trees to find spots with your phone, the shade is now in your eyes and the sun is hitting your screen. You could experience this strobe effect multiple times during a session.
There are other environmental scenarios where lighting plays a necessary and important role in the user experience.
Fog: Difficult to find edges and estimate distance
Snow: Difficult to estimate height from ground
Rain: Lowered edge detection, rain may affect sensors
Fire: Makes it difficult to see anything
A short browse through the ARKit [note]https://developer.apple.com/documentation/arkit/arlightestimate[/note] and ARCore [note]https://developers.google.com/ar/reference/java/com/google/ar/core/LightEstimate[/note] SDKs shows that low-light capabilities are limited in these early releases of the technology; the primary use case is well-lit indoor spaces. Environmental factors encountered outdoors, like rain or darkness, can limit the functionality of the design.
Mapping and prototyping these scenarios can help engineers understand how design is thinking about interactions with outdoor, natural environments.
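As one example of that hand-off, here is a minimal sketch of how the app might read ARKit's per-frame light estimate and warn the user when a scene is too dark for reliable detection. The 500-lumen threshold and the warning copy are illustrative assumptions, not documented values.

```swift
import ARKit

// A rough sketch assuming ARKit's per-frame light estimate is available:
// flag low-light scenes (dense shade, dusk, heavy overcast) so the UI can
// warn the user or fall back to a degraded mode.
func lightingWarning(for frame: ARFrame) -> String? {
    guard let estimate = frame.lightEstimate else { return nil }
    // ambientIntensity is in lumens; ~1000 represents neutral, well-lit conditions.
    // 500 is a hypothetical cutoff chosen for illustration.
    if estimate.ambientIntensity < 500 {
        return "Low light detected. Tree detection may be unreliable."
    }
    return nil
}
```

A similar check could be written against ARCore's LightEstimate on Android.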
Understanding distance and immersive potential.
I took the hammock model for one of my primary personas, Robin, along with a spool of string and a tape measure, and set out in search of a workable grove of trees.
When I found a large enough grove, I measured out two pieces of string based on my hammock model: the minimum height for the hang point (6.2 ft) and the minimum distance between trees (10 ft).
I measured the distances between all the trees in the grove to see which ones had about 10 feet between them. I ignored all the trees that didn't fit within the model.
After identifying workable trees, I took my second measurement string for minimum height and marked the hang point on each tree with ribbon.
From there, I took the spool of ribbon and wrapped lines from one hang point to the next all around the grove.
Right away, just having the 10 ft string to walk and measure between trees showed me how far off my original eyeballing had been. I needed more space between trees than my mind had visualized.
With the abundance of trees, spaced at what seemed like reasonable distances, almost every tree could be a potential spot. I found five good spots in an area that covered about eight trees.
I found myself needing to move around in a larger perimeter circle in order to get a clear view of all the options and the accurate distances between trees. The more trees, the more blocks to my line-of-sight, hence the need to constantly shift perspective.
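Translated into app terms, the string-and-tape procedure above is a pairwise distance filter over detected tree positions. The sketch below assumes the AR session can report trunk positions in world space; the Tree type, the upper span bound, and the unit conversions are illustrative assumptions drawn from Robin's hammock model.

```swift
import simd

// Hypothetical tree detected by the AR session, with a world position in meters.
struct Tree {
    let id: Int
    let position: SIMD3<Float>
}

let minSpanMeters: Float = 10.0 * 0.3048  // minimum distance between trees (10 ft)
let maxSpanMeters: Float = 15.0 * 0.3048  // assumed upper bound for strap length

// Keep every pair of trees whose trunk spacing fits the hammock model.
func hammockPairs(in grove: [Tree]) -> [(Tree, Tree)] {
    var pairs: [(Tree, Tree)] = []
    for i in grove.indices {
        for j in grove.indices where j > i {
            let a = grove[i].position
            let b = grove[j].position
            // Measure spacing on the ground plane only, ignoring height differences.
            let span = simd_length(SIMD2(a.x - b.x, a.z - b.z))
            if span >= minSpanMeters && span <= maxSpanMeters {
                pairs.append((grove[i], grove[j]))
            }
        }
    }
    return pairs
}
```

The 6.2 ft minimum hang-point height would be a separate vertical check once a workable pair is found.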
The basic interaction is similar to taking a panoramic photo. The differences start with the user’s intention when looking into the viewfinder. When taking a photo or capturing video, you’re capturing a moment: you’re active and in anticipation. Because the scene is primary, controls gather around the edges and out of the way.
With AR, in this implementation, the scene is half as important as the digital information woven throughout. Instead of waiting to pounce on the shutter or miss the moment, the intention is centered around comparing the digital scene to the real scene, and exploring both. This will most likely translate to more physical movement, and for a longer duration, in comparison to photo-taking.
For the purposes of understanding the user’s natural perspective and motion, I’m leaning heavily on Dreyfuss’ "Measure of Man and Woman" [note]http://design.data.free.fr/RUCHE/documents/Ergonomie%20Henry%20DREYFUS.pdf[/note] for my baseline human factors variables.
I’ve integrated mobile phone posture ranges [note]http://www.auspicesafety.com/2017/01/17/text-neck-forward-head-posture/[/note] into my models to illustrate how much a context switch between phone and real world can trigger a considerable amount of head and body movement outside comfortable ranges.
Design for a first-person perspective, with an open discovery viewpoint. Optimize for a medium mixed-reality immersion level, made viable with a five-meter interaction range.
Five meters is defined as the edge of both the social proxemic zone and comfortable depth perception.
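If that range were enforced in code, it could be as simple as a distance check between the camera and a candidate spot. A minimal sketch, with the threshold taken directly from the rationale above and the function name as an assumption:

```swift
import simd

// Treat five meters as the edge of comfortable interaction: anything farther
// away is rendered as context only, not as a tappable hammock spot.
let interactionRangeMeters: Float = 5.0

func isInteractive(spotPosition: SIMD3<Float>, cameraPosition: SIMD3<Float>) -> Bool {
    return simd_distance(spotPosition, cameraPosition) <= interactionRangeMeters
}
```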
For an outdoor AR app, this variable may manifest in the form of a compass. It is primarily defined for VR, where you can't rely on your natural senses to determine a change in direction.
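A compass readout would most likely come from Core Location's heading updates rather than from the AR session itself. A minimal sketch, where the CompassReader class and its callback are hypothetical glue code:

```swift
import CoreLocation

// Surfaces device heading so the outdoor AR view can show a simple compass.
final class CompassReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    var onHeadingChange: ((CLLocationDirection) -> Void)?

    func start() {
        manager.delegate = self
        if CLLocationManager.headingAvailable() {
            manager.startUpdatingHeading()
        }
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateHeading newHeading: CLHeading) {
        // trueHeading is degrees from true north; a negative value means invalid.
        if newHeading.trueHeading >= 0 {
            onHeadingChange?(newHeading.trueHeading)
        }
    }
}
```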
The role of design is to envision what can be, such that engineering can make it so. Knowing something is implementable is critical to product development, and this UX research and design project includes engineering assessments.
The major technical challenge is how successfully we can train a system to be markerless in an environment that thrives on everything blending in; getting a computer to see the forest from the trees.
At the time of writing this, there are two primary ways to overlay virtual elements onto real-world scenes: with external markers [note]https://www.kudan.eu/kudan-news/augmented-reality-fundamentals-markers/[/note] and without [note]https://www.marxentlabs.com/markerless-augmented-reality-everything-you-need-to-know/[/note]. With markers, the system is given a pre-defined pattern and asked to match it.
Markerless AR involves dynamic pattern matching within the context of the scene itself. From a user-centered perspective, implementing a markerless interaction model is ideal.
AR markers are still useful. Marker-based prototyping can assist in usability testing for an augmented reality app even if the final product is markerless. For example, putting markers on every tree isn't desirable or scalable for a user-facing product, but marker-based prototypes can still be useful for testing hammock spot searches in a single grove of trees.
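For that kind of prototype, ARKit's image detection can stand in for markerless tree recognition: printed markers taped to a handful of trunks become trackable anchors. A sketch under that assumption, where the "TreeMarkers" resource group name is hypothetical:

```swift
import ARKit

// Marker-based prototype: printed markers on trees stand in for markerless
// tree detection during usability tests in a single grove.
final class MarkerPrototypeSession: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        // "TreeMarkers" is a hypothetical AR Resource Group bundled with the test build.
        if let markers = ARReferenceImage.referenceImages(inGroupNamed: "TreeMarkers",
                                                          bundle: nil) {
            configuration.detectionImages = markers
            configuration.maximumNumberOfTrackedImages = 4
        }
        session.delegate = self
        session.run(configuration)
    }

    // Each detected marker arrives as an ARImageAnchor; the prototype treats it
    // as a "tree" position for hammock spot searches.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            let position = imageAnchor.transform.columns.3
            print("Marker tree at x: \(position.x), z: \(position.z)")
        }
    }
}
```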
There are apps on the market that utilize augmented reality to help with the measurement of an area and placement of objects in real space. These make useful adjacent research opportunities, since they share similar goals (helping the user measure something) and use similar technologies (augmented reality) to accomplish that goal.