
Cognitive Implications of Widespread VR

March 14, 2016 in Technology | 11 min. read

First, let’s talk a bit about the brain. If you’ve studied cog psych, you’ve likely heard about how neurons process information. There’s way too much to cover in one blog post, so we’ll focus on these four specific use cases:

  • How does the brain know where it is?
  • How does it know what time it is?
  • How does it remember to avoid pain so strongly?
  • And finally, how does the brain prioritize what’s worth remembering, anyway?

Let’s start with orientation. How does the brain know where it is?

Orientation

There are a lot of facets to the question of ‘where’. How does the brain orient you in space? In daylight, in the dark? How do you know where you are the day after an earthquake, when all major landmarks are gone?

There’s no overarching theory in science (yet), because the hippocampus does a lot of different things—the brain has a lot of backup systems. Effectively, there are a lot of neurons doing similar things, but in different ways.

Grid cells work as you walk around; they are quite regular and fire at intervals. The information they gather is fed to the place cells, which we’ll talk about in a minute. Grid cells’ firing fields form a hexagonal grid—hence the name—and they can work in complete darkness: this means they’re getting physical feedback data as well as visual. They fire at such regular intervals there’s a question about whether they’re also used to measure time. More on that in a bit.
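To make the idea concrete, here’s a toy sketch of periodic grid-cell firing. It’s a 1D simplification of the real 2D hexagonal firing fields, and the spacings and firing threshold are invented for illustration, not taken from the literature:

```python
def grid_cell_active(position, spacing, width=0.1):
    """Fire whenever the animal is near a multiple of this cell's spacing."""
    offset = position % spacing
    return min(offset, spacing - offset) < width

# Cells with different spacings fire at different regular intervals,
# so together they give a coarse-to-fine position code.
path = [x * 0.05 for x in range(100)]       # a short walk, 0 to ~5 m
fields = {s: [round(p, 2) for p in path if grid_cell_active(p, s)]
          for s in (0.5, 1.0, 2.0)}
```

Note that nothing here uses vision: the only input is how far you’ve moved, which is why a mechanism like this can keep working in complete darkness.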

Place cells are more dedicated; they remember the information from the grid cells and hold a series of cognitive maps in your head of particular spaces. This is how you recognize an area: your office, your house, your city. They can re-fire and re-map themselves to update information, too.
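The recognize-and-remap behavior can be pictured with a small invented sketch, where a stored map for a familiar space gets updated when the space changes (the class and its landmark sets are hypothetical, purely for illustration):

```python
class PlaceCell:
    """Toy model: holds a cognitive map of a familiar space."""

    def __init__(self, space, landmarks):
        self.maps = {space: set(landmarks)}

    def recognizes(self, space):
        return space in self.maps

    def remap(self, space, landmarks):
        # Re-fire and update the stored map when the space changes.
        self.maps[space] = set(landmarks)

office = PlaceCell("office", ["desk", "window", "door"])
office.remap("office", ["desk", "door"])   # the window view changed
```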

Interestingly, place fields are usually not affected by large sensory changes, like removing a big landmark. We’re not sure why, but it does make sense: if a tree falls down or there is a big earthquake or landslide, you should still be able to recognize and remap the space. This is where triangulation might come in handy: it’s an additional processing tool.

There are also dedicated head direction cells, neurons that fire when the head is facing a certain direction. This helps you maintain a sense of direction anytime, but especially in the dark or in unfamiliar environments—or, say, maybe when you’re wearing a big old plastic-and-glass mask. They can be confused eventually: if the environment repeatedly changes a lot, or if you wander too far in the dark.

So much for where we are in space. How about time, the other big locator?

Time

To be clear, we’re not talking about circadian rhythms; those are kind of instinctual or chemical time-based patterns, but they can be altered pretty easily by adjusting natural time indicators—sunrise, when it gets dark, temperature, and so on.

But how the brain processes time, moving in the fourth dimension, is even more of an open puzzle than space. It’s just difficult to test, in part because you can stand still in space, but not in time. It’s simply a quirk of living in four dimensions: time is genuinely the invariable variable.

Initially, scientists thought that the hippocampus had scattered cells devoted to time recording, but there’s new research indicating that grid cells can also possibly be used to record time. For example, scientists put rats on a treadmill—no place data—and noticed the grid cells still generated consistent patterns tied to the length of the session. Fifteen-second sessions got one specific pattern; thirty-second sessions got a different pattern.
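One way to picture the treadmill result is as a population of hypothetical ‘time cells’, each tuned to a preferred elapsed time, so that a longer session recruits more of the sequence and produces a different overall pattern. A toy sketch, with the tuning times invented:

```python
def firing_sequence(session_seconds, preferred_times):
    """Return the cells (by preferred elapsed time) that fire, in order."""
    return [t for t in preferred_times if t <= session_seconds]

cells = [2, 5, 10, 14, 20, 28]            # preferred elapsed times, seconds
pattern_15 = firing_sequence(15, cells)    # [2, 5, 10, 14]
pattern_30 = firing_sequence(30, cells)    # [2, 5, 10, 14, 20, 28]
```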

But the big question here, then, is: are those patterns really sensing a specific time period, or are they just firing in a pattern that plays out in time? This is the problem with doing research about time.

But we know that humans are quite good at keeping certain kinds of time. We’re good at timing our movements—to catch a door, or a yellow light, for example. Dancers and athletes and musicians all have great timing, great rhythm. But each of those examples is also a spatial example, heavily tied to our physical bodies. We’re good at gauging how far we can go in a given period, and how fast.

It’s a quirk of many languages that when asked how far away something is, you can answer either in time or distance: things are two hours away, or 600 miles, a five-day walk, and so on. But if you’re just sitting somewhere for several hours, particularly if you’re distracted, you’ll likely lose track of time.

Still, we do remember time passing, and we remember things in order, in a sequence. So maybe time cells, whether they are grid cells or other types of neurons, are also just cataloging and indexing those experiences. Interestingly, since grid cells are heavily involved in place, and maybe involved in time, there’s a theory that cognitive maps might also organize memories by location—so you go back to your hometown, and you remember events at that place.

So—effectively, we’re pretty iffy on time. But if there’s one thing we do know about, it’s pain and how to avoid it.

Pain

Pain information takes different paths to get from nerve endings to the cortex; the way pain receptors work is a nice demonstration of how humans have evolved. There are a few different types of pain nerve fibers, most importantly A-delta and C. A-deltas are faster and newer. They are the reason that you pull your hand away from a hot stove before you’ve realized what has happened. C fibers are slower, but more common—over 70% of pain nerves are C fibers—and they can respond to more types of pain. For example, A-delta fibers don’t feel chemical pain, only C fibers do, which is why it takes a second for you to realize how hot spicy food really is.
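The fast-versus-slow difference is easy to picture with a rough back-of-the-envelope sketch. The conduction speeds below are ballpark textbook figures (A-delta fibers conduct on the order of 15 m/s, unmyelinated C fibers around 1 m/s), and the hand-to-spine distance is an assumption:

```python
def arrival_time(distance_m, speed_m_per_s):
    """Seconds for a signal to travel a fiber at a given conduction speed."""
    return distance_m / speed_m_per_s

DISTANCE = 1.0                             # hand to spinal cord, roughly

a_delta = arrival_time(DISTANCE, 15.0)     # sharp "first" pain, well under 0.1 s
c_fiber = arrival_time(DISTANCE, 1.0)      # the slow burn, about a second later
```

That order-of-magnitude gap is why the sharp jolt from a hot stove arrives before the lingering burn does.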

So the brain gets the information from the nerve fibers, and then has to figure out how to react to the information it’s been given, based on a number of factors: am I somewhere unknown or dangerous? Have I been injured here before? Can I literally see the bones sticking out from under my skin?

If the injury doesn’t seem too bad, we’ll do things like rub it, shake it off, walk a bit—standard ways to calm down the nervous system. The brain can actually talk back to the nervous system, asking for more information or less, turning up sensitivity or turning it down, depending on what’s going on. If you’re really, really concentrating, say on a battlefield or playing sports, your brain can make the distraction threshold for nerve signals extremely high.

There are other types of pain, visceral pain and deep somatic pain, that are hard to identify: stomachaches, deep sprains, aches from chronic disease. If you’re anything like us, the brain’s best advice is usually ‘lie down and take a nap till you feel better’.

Your brain is also naturally primed to remember bad, creepy, or terrifying things: to create stronger neural connections when they happen, and to make those connections longer-lasting. If you’ve been in a life-threatening situation in a specific place and had to go back to that place, it was probably fairly intense for you in any case, but if you were injured, it was likely more intense.

If you’re not in obvious danger, just in pain, the brain checks to make sure everything’s okay and deals with it. But the brain does not instinctively know what’s really happening in the body. There’s a couple reasons for this, but one big one is that pain signals hit different parts of your brain at different points, making some types of pain sources especially difficult to find.

This has some really interesting implications for VR. If you’ve played Asunder: Earthbound, you know that the beginning plays on these visual cues: as the game begins, you’re in prison with a visible body, and your avatar’s hands look terrible: long nails, grey skin. Instantly, the part of your brain concerned with your well-being gets very concerned that you are sick, and it takes a few minutes to shake it off.

You can do similar tricks in real life just by using mirrors or blocking yourself or using plastic dummy hands, primed to feel like ‘your’ hand. Check out this video from the BBC show Horizons in which the hosts try this trick out on unsuspecting people—then hit the hand with a hammer.

It’s worth noting the experiment works well even in broad daylight, on a beach, using a fairly cheap-looking hand. If that works so well, imagine how immersive the effect could get in VR. Of course, this is the basis for a lot of the VR physical therapy happening now; plenty of companies are focusing on it. The brain’s neuroplasticity really comes in handy.

So that’s the basics of pain. Pain is so straightforwardly bad most of the time, though, that it’s obvious we need to remember it. Deep lizard brain working there. But how does the brain, in general, remember, you know, what to remember?

What is worth remembering?

Pretty simply: surprise, recency, repetition, and—somewhat related to both surprise and recency—the first and last items in a sequence. Additionally, our brain prioritizes things it wants to remember: if you really need to keep something beyond working memory, you’ll instinctively repeat, repeat, repeat.
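These cues can be combined into an invented toy scoring function. Everything here, the weights, the decay rate, the inputs, is made up purely to illustrate how the factors the brain uses might trade off against each other:

```python
def memory_priority(surprise, seconds_ago, repetitions, is_first_or_last):
    """Toy score for how likely an experience is to be kept long-term."""
    recency = 1.0 / (1.0 + seconds_ago / 3600.0)   # decays over hours
    score = 2.0 * surprise + recency + 0.5 * repetitions
    if is_first_or_last:
        score += 1.0                               # primacy/recency bonus
    return score

# A surprising, recent, rehearsed item outranks a mundane, day-old one.
vivid = memory_priority(surprise=1.0, seconds_ago=60,
                        repetitions=3, is_first_or_last=True)
mundane = memory_priority(surprise=0.1, seconds_ago=86400,
                          repetitions=0, is_first_or_last=False)
```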

For example, those grid cells we talked about earlier? Rats will play through a pattern when they’re learning it, and then their brain plays through it again after they go to sleep. The theory is that the repetition effectively transfers the pattern to long-term storage. So things like space maps are given a priority, which makes sense; it’s always handy to know where you are.

On the flip side, there's a lot of data we get that we learn to ignore at a young age. For example, infants learning to talk, by the age of 10 months or so, already start to lose sensitivity to differences in speech that don't matter in their native tongue. Same thing with different tuning systems: the Arab tone system divides the octave into 24 steps, using quarter tones, while the Western system has 12 and uses nothing smaller than a semitone. If you’re used to Western tone scales, you might not even be able to recognize quarter tones; they just sound flat or sharp.

Different types of memories are stored differently. We've already addressed spatial memory above, but episodic and semantic memory are the other two big categories. Episodic memory stores life events, both things that happened to you and things that happened around you. These generally have a time marker.

Semantic memory is about trivia and knowledge—meanings, concepts, etc. How you remember all of these types of memories has much to do with how you processed them in the first place. Scary things or things that happen during heightened times tend to be remembered fast and hard, but they aren’t usually very pleasant—in fact some memories, like the ones PTSD survivors live with, are too terrible to be recalled. But in general, for more benign things like taking a test, repetition does the trick: keep thinking about things, talking about them, writing them down, and reviewing them.

You also tend to remember information more accurately if you’re back at the place you learned it. This has some fun implications for VR: imagine studying in the same jungle landscape where you later take your test. Recall on exams should go through the roof, but will it be considered cheating?

So that’s some basics of how the brain processes certain types of information. In the next article we cover current research and apps using these types of brain quirks to improve humanity.

Timoni West & Dio Gonzalez work at Unity Labs; Dio is a tech lead, and Timoni is a principal designer. As part of their work in VR, they’ve done some research into how the brain processes different types of information, and how that affects the way the brain handles environments in VR. There’s been quite a bit of prior research on the topic, too, so we’ll go over some interesting findings, and finally talk a bit about how perception quirks lead to interesting UX challenges in VR.
