13 JUL 2024 · Open and remotely accessible Neuroplatform for research in wetware computing
Fred D. Jordan*, Martin Kutter, Jean-Marc Comby, Flora Brozzi, Ewelina Kurtys
FinalSpark, Rue du Clos 12, Vevey, Switzerland
Wetware computing and organoid intelligence form an emerging research field at the intersection of electrophysiology and artificial intelligence. The core concept involves using living neurons to perform computations, similar to how Artificial Neural Networks (ANNs) are used today. However, unlike ANNs, where updating digital tensors (weights) can instantly modify network responses, entirely new methods must be developed for neural networks built from biological neurons. Discovering these methods is challenging and requires a system capable of conducting numerous experiments, ideally accessible to researchers worldwide. For this reason, we developed a hardware and software system that allows electrophysiological experiments to be run at an unmatched scale. The Neuroplatform enables researchers to run experiments on neural organoids with lifetimes exceeding 100 days. To do so, we streamlined the experimental process to quickly produce new organoids, monitor action potentials 24/7, and provide electrical stimulation. We also designed a microfluidic system that allows fully automated medium flow and change, thus reducing the disruptions caused by physical interventions in the incubator and ensuring stable environmental conditions. Over the past three years, the Neuroplatform has been used with over 1,000 brain organoids, enabling the collection of more than 18 terabytes of data. A dedicated Application Programming Interface (API) has been developed to conduct remote research directly via our Python library or through interactive computing environments such as Jupyter Notebooks. In addition to electrophysiological operations, our API also controls pumps, digital cameras and UV lights for molecule uncaging. This allows for the execution of complex 24/7 experiments, including closed-loop strategies and processing with the latest deep learning or reinforcement learning libraries. Furthermore, the infrastructure supports entirely remote use.
As of 2024, the system is freely available for research purposes, and numerous research groups have begun using it for their experiments. This article outlines the system's architecture and provides specific examples of experiments and results.
1 Introduction
The recent rise of wetware computing and, consequently, of artificial biological neural networks (BNNs) comes at a time when Artificial Neural Networks (ANNs) are more sophisticated than ever.
The latest generation of Large Language Models (LLMs), such as Meta’s Llama 2 or OpenAI’s GPT-4, fundamentally rely on ANNs.
The recent acceleration of ANN use in everyday life, through tools like ChatGPT or Perplexity, combined with the explosion in complexity of the underlying ANN architectures, has had a significant impact on energy consumption. For instance, training a single LLM like GPT-3, a precursor to GPT-4, required approximately 10 GWh, which is about 6,000 times the energy a European citizen uses per year. According to a recent publication, projected energy consumption may increase faster than linearly (De Vries, 2023). At the same time, the human brain operates with approximately 86 billion neurons while consuming only 20 W of power (Clark and Sokoloff, 1999). Given these conditions, the prospect of replacing ANNs running on digital computers with real BNNs is enticing (Smirnova et al., 2023). In addition to the substantial energy demands associated with training LLMs, inference costs present a similarly pressing concern. Recent disclosures reveal that platforms like OpenAI generate over 100 billion words daily through services such as ChatGPT, as reported by Sam Altman, the CEO of OpenAI. Breaking down these figures, and assuming an average of 1.5 tokens per word (a conservative estimate based on OpenAI's own tokenizer data), the energy footprint becomes staggering. Preliminary calculations, using the LLaMA 65B model (a precursor to Llama 2) as a reference point, suggest energy expenditures ranging from 450 to 600 billion Joules per day for word generation alone (Samsi et al., 2023). While necessary for providing AI-driven insights and interactions to millions of users worldwide, this magnitude of energy use underscores the urgency of more energy-efficient computing paradigms.
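The quoted figures can be combined into a back-of-envelope estimate. The sketch below (plain Python) simply inverts the daily totals cited above to obtain a per-token energy range; it is a consistency check on the cited numbers, not an independent measurement.

```python
# Back-of-envelope check of the inference-energy figures quoted above.
# All inputs come from the text; the per-token energy is derived by
# inverting the quoted daily totals.

WORDS_PER_DAY = 100e9              # words generated daily (Altman, as cited)
TOKENS_PER_WORD = 1.5              # conservative tokenizer estimate
DAILY_ENERGY_J = (450e9, 600e9)    # J/day, LLaMA 65B reference (Samsi et al., 2023)

tokens_per_day = WORDS_PER_DAY * TOKENS_PER_WORD
per_token_j = tuple(e / tokens_per_day for e in DAILY_ENERGY_J)

# For scale, the human brain's ~20 W continuous draw over one day:
brain_j_per_day = 20 * 86_400

print(f"tokens/day: {tokens_per_day:.1e}")
print(f"energy per token: {per_token_j[0]:.1f}-{per_token_j[1]:.1f} J")
print(f"human brain: {brain_j_per_day / 1e6:.2f} MJ/day")
```

Under these assumptions, each generated token costs a few Joules, while the brain's entire daily budget is under 2 MJ.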
Connecting probes to BNNs is not a new idea. In fact, the field of multi-unit electrophysiology has an established state of the art spanning more than 40 years. As a result, well-documented hardware and methods already exist for the functional electrical interfacing and microfluidics needed for nutrient delivery (Gross et al., 1977; Pine, 1980; Wagenaar et al., 2005a; Newman et al., 2013). Some systems are also specifically designed for brain organoids (Yang et al., 2024). However, this research is mostly focused on exploring brain biology for biomedical applications (e.g., mechanisms and potential treatments of neurodegenerative diseases). The possibility of using these methods to build new computing hardware has not been extensively explored.
For this reason, there is comparatively little literature on methods that can be used to reliably program these BNNs to perform specific input–output functions (essential for wetware computing, though not for biomedical applications). To understand what is needed to program BNNs, it is helpful to look at the analogous problem for ANNs.
For ANNs, the programming task involves finding the network parameters, globally denoted as S below, that minimize the difference L computed between the expected output E and the actual output O, for given inputs I, given the transfer function T of the ANN. This can be written as:

L = f(O, E), with O = T(I, S)

where f is typically a function that equals 0 when O = E.
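For a concrete instance of this formulation, the sketch below (illustrative only, not from the paper) uses a mean-squared-error loss for f and a single linear layer for T; in a real ANN, T would be the full forward pass and S the weight tensors.

```python
import numpy as np

def T(I, S):
    """Toy transfer function: a single linear layer.
    S plays the role of the adjustable network parameters."""
    return I @ S

def f(O, E):
    """Loss L = f(O, E): mean squared error, equal to 0 when O == E."""
    return float(np.mean((O - E) ** 2))

# Given inputs I and expected outputs E, "programming" the ANN means
# searching for parameters S that minimize L = f(T(I, S), E).
rng = np.random.default_rng(0)
I = rng.normal(size=(8, 3))
S_true = np.array([[1.0], [-2.0], [0.5]])
E = T(I, S_true)

S = np.zeros((3, 1))
for _ in range(1000):                      # plain gradient descent on S
    grad = 2 * I.T @ (T(I, S) - E) / len(I)
    S -= 0.1 * grad

print(f"final loss: {f(T(I, S), E):.2e}")  # approaches 0 as O -> E
```

The key point, developed next, is that this search over S is exactly what cannot be done directly in a biological network.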
The same equation applies to BNNs. However, the key differences compared to ANNs include the fact that the network parameters S cannot be individually adjusted in the case of BNNs, and that the transfer function T is both unknown and non-stationary. Therefore, alternative heuristics must be developed, for instance based on spatiotemporal stimulation patterns (Bakkum et al., 2008; Kagan et al., 2022; Cai et al., 2023a,b). Such developments necessitate numerous electrophysiological experiments, including, for instance, complex closed-loop algorithms where stimulation is a function of the network's prior responses. These experiments can sometimes span days or months.