
Is this the next frontier towards non-human general intelligence?

Human brain organoids. Credit: David Baillot/UC San Diego Jacobs School of Engineering (CC BY 2.0) via Flickr

The Brunswick lab looks perfectly normal. Perfectly normal masked-and-gloved scientists are pulling petri dishes from fridges, lining up microscopes, and describing the workings of a dark blob splayed untidily on a microchip.

The blob is anything but normal.

It is a mini brain – or a mini hippocampus, to describe the cells more accurately – and it is growing on a tiny silicon chip. Wiggling, gossamer threads spray outwards above dark, map-like lines of circuitry as the organoid sends feelers into its tiny world.

“What are they doing? Great question. They’re extending… That’s the thing with biology. It’s beautiful. It seeks out connection. So it seeks out connection, and then you end up with these integrated circuits that we control,” Cortical Labs Chief Scientific Officer Brett Kagan told Cosmos.

“We’ve got a paper under review now comparing the organoid intelligence pathway versus what we’re calling bio-engineered intelligence… the interesting thing is that neurons show a huge variety of responses, so figuring that out is part of the sort of algorithms of intelligence, if you will.”

Kagan’s company is one of a handful at the forefront of work to build a biological computer that can skip past the limitations of the 0s and 1s that artificial intelligence is built on.

But while biological intelligence may be able to solve many of the problems currently facing artificial intelligence – the huge energy requirements and fundamentally different way of reasoning for a start – it also comes with big ethical questions we don’t have answers for yet.

Bio vs artificial intelligence

Biological computers – brains – are undeniably more powerful than silicon ones, as researchers from the Hebrew University of Jerusalem demonstrated in 2021.

They showed that a single cortical neuron has processing power comparable to a multi-layer deep neural network – the architecture that deep learning models are built on.
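By way of illustration only, the sketch below stands a small multi-layer network in for the input-output behaviour of a single neuron. The layer count, input size and class name are arbitrary assumptions for the sketch, not the architecture the Jerusalem team actually trained.

```python
# A minimal sketch, NOT the published model: a small multi-layer network
# standing in for one neuron's mapping from synaptic input to spiking.
import torch
import torch.nn as nn

class NeuronSurrogate(nn.Module):
    """Maps a snapshot of synaptic input activity to a spike probability."""
    def __init__(self, n_synapses: int = 1000, hidden: int = 128, depth: int = 5):
        super().__init__()
        layers, width = [], n_synapses
        for _ in range(depth):                      # several hidden layers
            layers += [nn.Linear(width, hidden), nn.ReLU()]
            width = hidden
        layers += [nn.Linear(width, 1), nn.Sigmoid()]  # P(spike) in this time bin
        self.net = nn.Sequential(*layers)

    def forward(self, synaptic_input: torch.Tensor) -> torch.Tensor:
        return self.net(synaptic_input)

# Usage: one batch of simulated synaptic drive -> predicted spike probabilities.
model = NeuronSurrogate()
x = torch.rand(32, 1000)          # 32 samples, 1000 synaptic inputs each
print(model(x).shape)             # torch.Size([32, 1])
```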

But the difference runs deeper, right down to the way biological and artificial intelligence operate.

“Causal reasoning is the neural root of tomorrow-dreaming… It’s our brain’s ability to think: this-leads-to-that. It can be based on some data or no data—or even go against all data,” wrote neuroscientist-turned-English-professor Angus Fletcher in Nautilus magazine during the depths of the COVID-19 pandemic.

“This feature of A equals Z means that computers can’t think in A causes Z. The closest they can get is ‘if-then’ statements such as: ‘If Bob bought this toothpaste, then he will buy that toothbrush.’ This can look like causation but it’s only correlation.”
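A toy example of the “if-then” style Fletcher describes, using an invented basket of purchase data: the rule fires because the two items have co-occurred before, which says nothing about one purchase causing the other.

```python
# A minimal sketch with made-up data: an "if-then" rule mined from co-occurrence.
baskets = [
    {"toothpaste", "toothbrush"},
    {"toothpaste", "toothbrush", "floss"},
    {"toothpaste"},
    {"shampoo", "toothbrush"},
]

# Count how often the two items appear together in past baskets.
both = sum(1 for b in baskets if {"toothpaste", "toothbrush"} <= b)
toothpaste = sum(1 for b in baskets if "toothpaste" in b)

confidence = both / toothpaste
if confidence > 0.5:
    # The rule encodes correlation in historical data, not a causal mechanism.
    print(f"if toothpaste then toothbrush (confidence {confidence:.2f})")
```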

That fundamental difference means there are things that artificial intelligence simply can’t do, and some people, such as Kagan, believe reaching generalised intelligence for machines will require a biological element.

“With machine learning, you can get amazing results, but normally through batch processing, accelerating the learning and having incredibly stable environments. None of this is possible in the real world, right? You’re bound by reality,” Kagan says.

“If you look at every leading robotic company, reputable ones, anyway… they acknowledge the need to go beyond current non-human architecture.”

Mini brains, brain-on-a-chip, minimum viable brains

Organoids – mini 3D clusters of cells grown from induced pluripotent stem cells (iPSCs) – have been emerging for more than a decade, with the first brain organoid made in 2013.

In the last 2 years, the concept of “wetware” computing – the integration of living neural tissue with electronic hardware – has also turned from science fiction to fact. It is possible because neurons and silicon chips both communicate using electrical signals.

At Cortical Labs, organoids are being used in the company’s first product, a shoebox-sized device that researchers can use to run tests on brain material. The box keeps the organoids fed and clean, while the experiment provides the information-rich environment needed to find out how they react to, say, different medications.

A new paper led by the company’s head of biology, Brad Watmuff, showed that treating the company’s DishBrain with anti-epileptic medications could restore learning function – a breakthrough, because until now medications couldn’t be tested on real brain cells, let alone cells from an individual’s own body.

Last year, Swiss startup FinalSpark launched an online platform that allows scientists to conduct remote experiments on 16 living brain organoids.

“Over the past 3 years, the Neuroplatform was utilised with over 1,000 brain organoids, enabling the collection of more than 18 terabytes of data,” wrote FinalSpark co-founder Fred Jordan and his colleagues in a paper last year.

“A dedicated Application Programming Interface (API) has been developed to conduct remote research directly… This allows for the execution of complex 24/7 experiments, including closed-loop strategies and processing using the latest deep learning or reinforcement learning libraries.”
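FinalSpark’s actual API is not reproduced here, but a closed-loop experiment of the kind the paper describes might be organised along the lines below. The MockOrganoid class, the stimulate and read_spikes calls, and every parameter are hypothetical stand-ins for a real client library, included only to show the stimulate-read-adapt loop.

```python
# A hedged sketch of a closed-loop organoid experiment. All names and
# parameters are invented; this is NOT FinalSpark's published interface.
import random

class MockOrganoid:
    """Stand-in for a remote organoid handle; a real client would go here."""
    def stimulate(self, electrode: int, amplitude_ua: float) -> None:
        self._last = amplitude_ua
    def read_spikes(self, window_ms: int) -> list:
        # Fake response: stronger stimulation tends to evoke more spikes.
        return [0] * random.randint(0, int(self._last * 5))

def closed_loop_session(organoid, n_trials: int = 20, target_spikes: int = 10):
    """Stimulate, read the evoked response, and adapt the next stimulus."""
    amplitude_ua = 2.0                                  # starting amplitude (µA)
    for _ in range(n_trials):
        organoid.stimulate(electrode=3, amplitude_ua=amplitude_ua)
        spikes = organoid.read_spikes(window_ms=200)
        # Closed loop: nudge the stimulus toward a target firing level.
        amplitude_ua += 0.1 if len(spikes) < target_spikes else -0.1
        amplitude_ua = min(max(amplitude_ua, 0.5), 5.0)  # keep within safe bounds
    return amplitude_ua

print(closed_loop_session(MockOrganoid()))
```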

Teaching these cells is also a learning process.

FinalSpark teaches its organoids by giving them dopamine for correct answers. Cortical Labs teaches its organoids with patterned data for correct answers and scrambled data for incorrect ones.
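As a rough sketch of those two feedback styles – with invented signal shapes, since neither company’s exact stimulation protocol is spelled out here:

```python
# Illustrative only: two ways of telling a dish of neurons it got something right.
import random

def feedback_patterned(correct: bool, length: int = 8):
    """Cortical Labs-style: structured input for success, noise for failure."""
    if correct:
        return [i % 2 for i in range(length)]               # predictable pattern
    return [random.randint(0, 1) for _ in range(length)]    # scrambled, unpredictable

def feedback_dopamine(correct: bool):
    """FinalSpark-style: a chemical reward delivered only for a correct answer."""
    return {"release_dopamine": correct}

print(feedback_patterned(True), feedback_patterned(False), feedback_dopamine(True))
```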

Kagan says they have had success using the free energy principle, which is based on how cells respond to changes in information entropy, and they are now testing other principles – but these are a work in progress he declines to elaborate on.

Upping the complexity

Brain organoids began as single layers of cells from specific brain regions. Now researchers are building organoids that combine layers of cells from different parts of the brain, in what’s being called a chimeroid. The goal is to mimic, and therefore study, how a real brain grows.

Building these models is undeniably complex, but being able to reproduce them is even harder, says Shafagh Waters, an expert in stem cell and organoid medicine and a founding board member of the New South Wales (NSW) Organoid Innovation Centre.

2D cryosection of a human brain organoid stained with DAPI (teal) and VIPR2 (magenta). Credit: Nreis1 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/deed.en

“We are still at a time in development of organoids that are very complex, but they’re not complex enough. And we need to put a lot more effort towards reproducibility of the organoids,” she told Cosmos.

Waters is a cystic fibrosis specialist and is in the early stages of testing the “crosstalk” between brain and gut organoids when different bacteria are manipulated.

“This might not necessarily even be electrical signals, it might just be at the molecular level. We might be able to see some changes in the brain organoid expression of the different RNA and the different protein from the brain as a result of a secondary organ, and as a result of changes in the microbiome of that secondary organ,” she says.

“This is not something that’s already established anywhere in the world. We are one of the first people trying to put this together.”

Waters is also part of the new Non-Animal Technologies Network (NAT-Net) in NSW, which is pitching organoids generally as an alternative to animal testing.

But is it right?

Building a biologically intelligent computer comes with ethical questions that companies such as Neuralink or Synchron, which are putting brain-computer interface implants inside human brains, don’t need to engage with.

Can the organoids feel pain? Are they conscious? Questions like these raise the issue of moral status: if they could feel pain, is it more wrong to subject them to experiments than it is to experiment on, say, animals?

Melbourne Law School ethicist Julian Savulescu says different applications of the technology bring up different ethical considerations. Integrating neurons into microchips, or growing simple brain organoids in a petri dish without a blood supply, isn’t the same as implanting human organoids into animals to make human-animal chimeras – as a research team at the University of Cambridge did in 2022 with newborn rats.

“When it comes to the transplantation of organoids into rat brains or creating human-non-human chimeras, the entity’s clearly conscious,” he tells Cosmos.

“The edge of the debate is once you start to modify something that has consciousness, or the potential for consciousness, what are the experiences of that entity? And then, you know, how should it be treated so that it doesn’t suffer? And then, at what point does it become wrong to kill it or to turn it off in the same way as would be wrong to kill, you know, a human being?”

The problem is there is no agreement on what consciousness is, nor how it should be measured, nor what it looks like in terms of brain structure. 

“We need to agree on some sort of moral lines, some sort of points that we think are important if we cross them. And secondly, we need to develop functional assessments of whatever we create to determine whether it’s crossed that line or not,” Savulescu says.

“If you’re talking about a new life form or a human-pig chimera, you need to evaluate its capacities before you start to experiment on it. This also applies to artificial intelligence without human neurons that may become conscious.”

Biological intelligence might be the next frontier towards a non-human general intelligence. But with it comes a new series of questions that modern-day Frankensteins must answer – as well as a new era of possibilities in medical science.
