
This article was published on June 14, 2021

Is ‘brain drift’ the key to machine consciousness?

Could this currently inexplicable phenomenon be what's keeping our robots from experiencing reality?



Think about someone you love and the neurons in your brain will light up like a Christmas tree. But if you think about them again, will the same lights come on? Chances are, the answer’s no. And that could have big implications for the future of AI.

A team of neuroscientists from Columbia University in New York recently published research demonstrating what they refer to as “representational drift” in the brains of mice.

Per the paper:

Although activity in piriform cortex could be used to discriminate between odorants at any moment in time, odour-evoked responses drifted over periods of days to weeks.

The performance of a linear classifier trained on the first recording day approached chance levels after 32 days. Fear conditioning did not stabilize odour-evoked responses.

Daily exposure to the same odorant slowed the rate of drift, but when exposure was halted the rate increased again.

Up front: What’s interesting here is that, for lack of a better theory, it’s long been believed that neurons in the brain associate experiences and memories with static patterns. In essence, this would mean that when you smell cotton candy, certain neurons fire up in your brain, and when you smell pizza, different ones do.

And, while this is basically still true, what’s changed is that the scientists no longer believe the neurons that fire when you smell cotton candy today are the same ones that fired the last time you smelled it.


This is what “representational drift,” or “brain drift” as we’re calling it, means. Instead of the exact same neurons firing up every time, different neurons in different locations fire up to represent the same concept.
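To make that (and the quoted classifier result) a little more concrete, here’s a minimal, purely illustrative Python sketch – not the team’s data or analysis, and every number in it is an assumption. A “day 1” linear decoder is fit on two odour response patterns, then tested on responses that gradually drift toward new patterns; its accuracy slides toward chance, much like the 32-day result quoted above.

```python
# Illustrative only: simulate two odour representations that drift over days,
# and test a linear decoder that was "trained" on day 1.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 100, 50

# Day-1 population patterns for two odours, plus the patterns they drift toward.
odour_a, odour_b = rng.normal(size=n_neurons), rng.normal(size=n_neurons)
drift_a, drift_b = rng.normal(size=n_neurons), rng.normal(size=n_neurons)

def drifted(template, target, frac):
    # frac=0 -> the original day-1 pattern; frac=1 -> an entirely new pattern.
    return np.sqrt(1 - frac) * template + np.sqrt(frac) * target

def trials(template):
    # Noisy single-trial population responses around a template.
    return template + 0.5 * rng.normal(size=(n_trials, n_neurons))

# A simple linear decoder: nearest class mean, fit on day-1 responses only.
mean_a, mean_b = trials(odour_a).mean(0), trials(odour_b).mean(0)

for day in (1, 8, 16, 24, 32):
    frac = (day - 1) / 31  # how far the code has drifted by this day (assumed)
    test = np.vstack([trials(drifted(odour_a, drift_a, frac)),
                      trials(drifted(odour_b, drift_b, frac))])
    labels = np.array([0] * n_trials + [1] * n_trials)
    pred = (np.linalg.norm(test - mean_b, axis=1) <
            np.linalg.norm(test - mean_a, axis=1)).astype(int)
    print(f"day {day:2d}: day-1 decoder accuracy = {np.mean(pred == labels):.2f}")
```

The point, as in the paper, is that the information is still there on any given day – a decoder trained that same day would do fine – but the day-1 readout goes stale.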

The scientists used mice and their sense of smell in laboratory experiments because smell sits at a halfway point between the ephemeral nature of abstract memories (what does London feel like?) and the static nature of our other brain connections (the brain’s wiring to our muscles, for example).

What the team found was that, even though we can recognize things by smell, the brain represents the same smells differently over time. What you smell this month will have a totally different neural representation a month later if you take another whiff.

The interesting part: The scientists don’t really know why. This is because they’re bushwhacking a path where few have trod. There just isn’t much in the way of data-based research on how the brain represents memories, or why some memories can seemingly teleport unchanged across areas of the brain.

But perhaps most interesting are the implications. In many ways our brains function similarly to artificial neural networks. However, the distinct differences between our mysterious gray matter and the meticulously plotted AI systems human engineers build may be where we find everything we need to reverse engineer sentience, consciousness, and the secret of life.

Quick take: According to the scientists’ description, the human brain appears to keep retuning its memory associations over time, like an FM radio in a car. Depending on how time and experience have changed you and your perception of the world, your brain may just be readjusting to reality in order to integrate new information seamlessly.

This would indicate we don’t “delete” our old memories or simply update them in place like replacing the contents of a folder. Instead, we re-establish our connection with reality and distribute data across our brain network.

Perhaps the mechanisms driving the integration of data in the human brain – that is, whatever controls the seemingly unpredictable distribution of information across neurons – are what’s missing from our modern-day artificial neural networks and machine learning systems.
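That contrast is easy to see in code. In a conventional feed-forward network the weights are frozen once training ends, so the same input produces exactly the same internal representation every time – there is no drift. A toy illustration (an assumed architecture, with random weights standing in for a trained model):

```python
# Illustrative only: a frozen network's "representation" of an input never drifts.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 100))  # stand-in for trained, now-frozen weights

def hidden_pattern(x):
    # ReLU hidden layer: the network's internal representation of x.
    return np.maximum(0, W @ x)

cotton_candy = rng.normal(size=100)           # stand-in for one sensory input
today = hidden_pattern(cotton_candy)
a_month_later = hidden_pattern(cotton_candy)  # nothing has changed in between

print(np.array_equal(today, a_month_later))   # True: same units, same values
```

Whatever lets a biological brain shuffle that representation around – while still behaving as if nothing changed – is the part we haven’t built yet.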

You can read the whole paper here.
