The next evolution of AI begins with ours
The genome has space for only a small fraction of the information needed to control complex behaviors. How, then, does a newborn sea turtle instinctively know to follow the moonlight? Cold Spring Harbor neuroscientists have devised a potential explanation for this age-old paradox. Their ideas could lead to faster, more evolved forms of artificial intelligence.
In a sense, each of us begins life ready for action. Many animals perform amazing feats soon after they’re born. Spiders spin webs. Whales swim. But where do these innate abilities come from? Obviously, the brain plays a key role as it contains the trillions of neural connections needed to control complex behaviors. However, the genome has space for only a small fraction of that information. This paradox has stumped scientists for decades. Now, Cold Spring Harbor Laboratory (CSHL) Professors Anthony Zador and Alexei Koulakov have devised a potential solution using artificial intelligence.
When Zador first encountered this problem, he put a new spin on it. "What if the genome's limited capacity is the very thing that makes us so smart?" he wondered. "What if it's a feature, not a bug?" In other words, maybe we can act intelligently and learn quickly precisely because the genome's limits force us to adapt. It's a big, bold idea, and a tough one to demonstrate. After all, we can't stretch lab experiments across billions of years of evolution. That's where the genomic bottleneck algorithm comes in.
In AI, generations don’t span decades. New models are born with the push of a button. Zador, Koulakov, and CSHL postdocs Divyansha Lachi and Sergey Shuvaev set out to develop a computer algorithm that folds heaps of data into a neat package — much like our genome might compress the information needed to form functional brain circuits. They then test this algorithm against AI networks that undergo multiple training rounds. Amazingly, they find the new, untrained algorithm performs tasks like image recognition almost as effectively as state-of-the-art AI. Their algorithm even holds its own in video games like Space Invaders. It’s as if it innately understands how to play.
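The core intuition, squeezing a large set of neural connection weights through a much smaller "genome" that can regenerate them, can be sketched loosely in code. This is not the authors' actual algorithm; it uses a plain low-rank factorization purely to illustrate how a bottleneck trades a large parameter count for a compact encoding:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "brain": a large matrix of connection weights.
full_weights = rng.standard_normal((512, 512))  # 262,144 parameters

# A "genome": far fewer parameters that can regenerate an
# approximation of the full matrix. Rank-8 SVD factors stand in
# for the bottleneck here; the real method differs.
rank = 8
U, s, Vt = np.linalg.svd(full_weights, full_matrices=False)
genome_U = U[:, :rank] * s[:rank]   # 512 x 8 factor
genome_V = Vt[:rank, :]             # 8 x 512 factor

# "Development": unfolding the compact genome back into a brain.
decoded = genome_U @ genome_V

genome_params = genome_U.size + genome_V.size       # 8,192
compression = full_weights.size / genome_params     # 32x
print(f"compression: {compression:.0f}x")
```

The decoded matrix is only an approximation of the original, which mirrors the paper's premise: the genome need not store every connection exactly, only enough structure to produce a circuit that works.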
Does this mean AI will soon replicate our natural abilities? "We haven't reached that level," says Koulakov. "The brain's cortical architecture can fit about 280 terabytes of information — 32 years of high-definition video. Our genomes accommodate about one hour. This implies a 400,000-fold compression, a level that technology cannot yet match."
Nevertheless, the algorithm allows for compression levels thus far unseen in AI. That feature could have impressive uses in tech. Shuvaev, the study’s lead author, explains: “For example, if you wanted to run a large language model on a cell phone, one way [the algorithm] could be used is to unfold your model layer by layer on the hardware.”
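Shuvaev's phone example can be illustrated with a minimal sketch (names and structure are assumptions, not the actual system): each layer is stored in compressed form, then decoded, applied, and discarded in turn, so the full model never has to sit in memory at once:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical compressed layers: low-rank factors standing in
# for whatever compact encoding the real algorithm produces.
compressed_layers = [
    (rng.standard_normal((256, 4)), rng.standard_normal((4, 256)))
    for _ in range(3)
]

def run_unfolded(x, layers):
    """Decode one layer at a time, apply it, then free it, so only
    one layer's full weights exist in memory at any moment."""
    for U, V in layers:
        W = U @ V                   # unfold this layer's weights
        x = np.maximum(W @ x, 0.0)  # apply with a ReLU
        del W                       # discard the decoded weights
    return x

out = run_unfolded(rng.standard_normal(256), compressed_layers)
print(out.shape)
```

The design choice mirrors the quote: memory scales with one decoded layer rather than the whole model, at the cost of decoding work at inference time.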
Such applications could mean more evolved AI with faster runtimes. And to think, it only took 3.5 billion years of evolution to get here.