It’s the internet’s latest drain on workplace productivity, but it’s also a provocative statement about just how subjective individual experiences of the world really are.
So, be honest: Which do you hear — Laurel or Yanny?
The four-second clip is the audio equivalent of the 2015 viral photo of a dress that was either white and gold or blue and black. (The former, definitely.)
The snippet spawned a huge debate on social media Tuesday, and in offices and breakrooms around the country, as listeners took staunch positions as either #TeamYanny or #TeamLaurel. Some declared the opposing camp to be crazy, hearing-impaired, or tricksters.
Science, as usual, has an explanation.
Or, rather, three explanations, as Kevin Franck, director of audiology at Massachusetts Eye and Ear, explained in a phone interview Wednesday morning.
One explanation is that each person’s hearing is a little different. Your hearing is different now from what it was 10 years ago, and from what it will be 10 years from now, said Franck, who has a doctorate in hearing science.
Hearing not only changes over time; it also differs between men and women, and it’s affected by individual experiences, like working in a quiet or noisy environment, he said.
“My perceptual system is different from your perceptual system,” Franck said.
And your ears aren’t the only equipment that’s different. Unless every member of #TeamYanny and #TeamLaurel is passing around the same dirty earbuds, you’re listening to audio that is subtly different from what others hear. That’s why you might hear “Laurel” clear as day on your computer but hear “Yanny” if you listen on your cellphone, as a certain reporter did.
Another explanation is that human speech is different from speech digitally generated by a computer, Franck explained.
Normal human speech is redundant, he said, carrying sets of sounds complex enough that people understand each other even when speaking over a phone connection that cuts out repeatedly.
If one element of sound is lost, others can fill it in so the meaning is conveyed.
Computer speech, by contrast, is stripped down to more basic elements, without that redundancy, Franck said, “so any change can make the perception quite different.”
A final consideration, Franck said, is where and when you learned language and the set of sounds to which your brain was first exposed.
“Languages vary by how sounds are divided up for meaning,” Franck said. “Some languages have more divisions than others, based on how they evolved over history.”
Franck suggested visualizing sound categories as a sheet of paper divided by a plus sign into four quadrants. Designate a spot in the center of a quadrant, and its sound would be unambiguous. Designate a spot on the line between quadrants, and it would sound like both, or neither.
Now keep the spot where it is, but spin the plus-sign so that it looks like an X. The spot that was centered in one quadrant is now right on the dividing line. The sound didn’t change; only the reference points.
That’s a visual equivalent, Franck said, of shifting from one language to another.
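Franck’s rotation analogy can be put in concrete numbers. The sketch below is purely illustrative (the point, angle, and distance measure are assumptions for the sake of the geometry, not anything Franck specified): a point sitting squarely inside one quadrant ends up exactly on a dividing line once the axes are spun 45 degrees, even though the point itself never moves.

```python
import math

def distance_to_axes(x, y):
    """Distance from a point to the nearest quadrant boundary (an axis)."""
    return min(abs(x), abs(y))

def rotate(x, y, degrees):
    """Rotate a point about the origin. Rotating the point by -45 degrees
    is equivalent to rotating the axes (the category lines) by +45."""
    r = math.radians(degrees)
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

# A "sound" in the middle of one quadrant: comfortably unambiguous.
p = (1.0, 1.0)
print(distance_to_axes(*p))  # 1.0 away from any boundary

# Spin the plus sign into an X: the same point, measured against the
# rotated reference lines, now sits right on a category boundary.
q = rotate(*p, -45)
print(round(distance_to_axes(*q), 9))  # 0.0 -- on the dividing line
```

The sound (the point) is unchanged throughout; only the reference frame, standing in for a language’s set of phonetic categories, rotates.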
When a sound is not quite one thing and not quite another, the brain will shift it into a category.
“Your brain is not programmed to hear an ambiguous word because it’s got to put it into one of these perceptual categories,” Franck said.
The varied reactions to the audio file are like the different recollections of witnesses to an event, Franck said. An experience is not an objective fact; each person’s brain processes and recalls it subjectively. And that, he said, serves as a lesson.
“Don’t believe what you see, hear, or remember because none of them are perfect,” Franck said. “In this case, you could build a signal much like you could build a memory that is quite subject to that interpretation.”