
This article was published on April 16, 2021

How a theoretical mouse could crack the stock market

Turns out, brain models don't have to be so complicated

Image by: Rama | Wikimedia Commons
Story by Tristan Greene

Editor, Neural by TNW

Tristan is a futurist covering human-centric artificial intelligence advances, quantum computing, STEM, physics, and space stuff. Pronouns: He/him

A team of physicists at Emory University recently published research indicating they'd reduced a mouse's brain activity to a simple predictive model. This could be a breakthrough for artificial neural networks. You know: robot brains.

Let there be mice: Scientists can do miraculous things with mice such as grow a human ear on one’s back or control one via computer mouse. But this is the first time we’ve heard of researchers using machine learning techniques to grow a theoretical mouse brain.

Per a press release from Emory University:

The dynamics of the neural activity of a mouse brain behave in a peculiar, unexpected way that can be theoretically modeled without any fine tuning.

In other words: We can observe a mouse’s brain activity in real-time, but there are simply too many neuronal interactions for us to measure and quantify each and every one – even with AI. So the scientists are using the equivalent of a math trick to make things simpler.

How’s it work? The research is based on a theory of criticality in neural networks. Basically, all the neurons in your brain exist in an equilibrium between chaos and order. They don’t all do the same thing, but they also aren’t bouncing around randomly.

The researchers believe the brain operates in this balance in much the same way other state-transitioning systems do. Water, for example, can change from gas to liquid to solid. And, at some point during each transition, it reaches a critical point where its molecules behave as if they're in both states at once, or neither.
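The criticality idea can be illustrated with a toy branching-process simulation. This is an illustrative sketch, not the Emory team's actual model: the `avalanche_size` function, the two-descendant activation scheme, and all parameters here are our own simplification of how a branching ratio below, at, or above 1 maps onto order, criticality, and chaos.

```python
import random

def avalanche_size(sigma, max_steps=10_000, rng=random):
    """Simulate one 'avalanche' of activity: each active unit
    triggers 0, 1, or 2 descendants with mean sigma.
    sigma < 1: activity dies out quickly (order);
    sigma > 1: activity tends to blow up (chaos);
    sigma == 1: the critical balance in between."""
    active, total = 1, 1
    while active and total < max_steps:
        # each active unit independently fires two coin flips,
        # each succeeding with probability sigma / 2
        active = sum(
            (rng.random() < sigma / 2) + (rng.random() < sigma / 2)
            for _ in range(active)
        )
        total += active
    return total

random.seed(0)
subcritical = [avalanche_size(0.5) for _ in range(1000)]
critical = [avalanche_size(1.0) for _ in range(1000)]

# Near criticality, avalanche sizes span many scales instead of
# clustering around a typical small value.
print("subcritical max:", max(subcritical))
print("critical max:", max(critical))
```

The qualitative point is that at the critical branching ratio, cascades of activity come in all sizes, which is the kind of scale-free behavior the criticality hypothesis predicts for real neural recordings.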


The researchers hypothesized that brains (organic neural networks) function under the same hypothetical balanced state. So they ran a series of tests on mice as they navigated mazes in order to build a dataset of brain activity.

Next, the team developed a simplified working model that could predict neuron interactions, using the experimental data as a target. According to their research paper, the model is accurate to within a few percentage points.

What's it mean? This is early work, but there's a reason scientists use mouse brains for this kind of research: they're not so different from ours. If you can reduce what goes on in a mouse's head to a working AI model, then it's likely you can eventually scale that up to human-brain levels.

On the conservative side of things, this could lead to much more robust deep learning solutions. Our current neural networks are a pale attempt to imitate what nature does with ease. But the Emory team’s mouse models could represent a turning point in robustness, especially in areas where a model is likely to be affected by outside factors.

This could, potentially, include stronger AI inferences where diversity is concerned and increased resilience against bias. And other predictive systems could benefit as well, such as stock market prediction algorithms and financial tracking models. It’s possible this could even increase our ability to predict weather patterns over long periods of time.

Quick take: This is brilliant, but its actual usefulness remains to be seen. Ironically, the tech and AI industries are also at a weird, unpredictable point of criticality, where brute-force hardware solutions and elegant software shortcuts are starting to pull away from each other.

Still, if we take a highly optimistic view, this could also be the start of something amazing such as artificial general intelligence (AGI): machines that actually think. No matter how we arrive at AGI, it's likely we'll need to begin with models capable of imitating nature's organic neural nets as closely as possible. You've got to start somewhere.
