Discoveries & Research

AI beats experts in predicting future quality of ‘mini-organs’

Organoids — miniature, lab-grown tissues that mimic organ function and structure — are transforming biomedical research. They promise breakthroughs in personalized transplants, improved modeling of diseases like Alzheimer’s and cancer, and more precise insights into the effects of medical drugs.

Now, researchers from Kyushu University and Nagoya University in Japan have developed a model that uses artificial intelligence (AI) to predict organoid development at an early stage. The model, which is faster and more accurate than expert researchers, could improve the efficiency and lower the cost of culturing organoids.

In this study, published in Communications Biology on December 6, 2024, the researchers focused on predicting the development of hypothalamic-pituitary organoids. These organoids mimic the functions of the pituitary gland, including the production of adrenocorticotropic hormone (ACTH), a hormone crucial for regulating stress, metabolism, blood pressure and inflammation. ACTH deficiency can lead to fatigue, anorexia and other problems that can be life-threatening.

“In our lab, our studies on mice show that transplanting hypothalamic-pituitary organoids has the potential to treat ACTH deficiency in humans,” says corresponding author Hidetaka Suga, Associate Professor of Nagoya University’s Graduate School of Medicine.

However, one key challenge for the researchers is determining if the organoids are developing correctly. Derived from stem cells suspended in liquid, organoids are sensitive to minute environmental changes, resulting in variability in their development and final quality.

The researchers found that one sign of good progression is the broad expression of a protein called RAX at an early developmental stage, which often results in organoids with strong ACTH secretion later on.

“We can track development by genetically modifying the organoids to make the RAX protein fluoresce,” says Suga. “However, organoids intended for clinical use, like transplantation, can’t be genetically modified to fluoresce. So our researchers must judge instead based on what they see with their eyes: a time-consuming and inaccurate process.”

Suga and his colleagues at Nagoya therefore collaborated with Hirohiko Niioka, Professor of Kyushu University's Data-Driven Innovation Initiative, to train deep-learning models for the job instead.

“Deep-learning models are a type of AI that mimics the way the human brain processes information, allowing them to analyze and categorize large amounts of data by recognizing patterns,” explains Niioka.

The Nagoya researchers captured both fluorescent images and bright-field images — which show what the organoids look like under normal white light, without any fluorescence — of organoids with fluorescent RAX proteins at 30 days of development. Using the fluorescent images as a guide, they classified 1,500 bright-field images into three quality categories: A (wide RAX expression, high quality), B (medium RAX expression, medium quality), and C (narrow RAX expression, low quality).
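
To make the grading step concrete, here is a minimal sketch of how such labels could be assigned from the paired fluorescent images; the thresholding rule and the 0.5/0.2 cut-offs are illustrative assumptions, not values from the study.

```python
import numpy as np

def grade_organoid(fluor_img: np.ndarray, organoid_mask: np.ndarray) -> str:
    """Grade one organoid from its RAX fluorescence image (illustrative rule)."""
    # Pixels noticeably brighter than the image average count as RAX signal.
    signal = fluor_img > fluor_img.mean() + 2 * fluor_img.std()
    rax_fraction = signal[organoid_mask].mean()  # share of the organoid expressing RAX
    if rax_fraction > 0.5:
        return "A"  # wide RAX expression: high quality
    if rax_fraction > 0.2:
        return "B"  # medium RAX expression: medium quality
    return "C"      # narrow RAX expression: low quality
```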

Niioka then trained two advanced deep-learning models developed by Google for image recognition, EfficientNetV2-S and Vision Transformer, to predict the quality category of the organoids. He used 1,200 of the bright-field images (400 in each category) as the training set.
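
The article does not give the training details, but a standard fine-tuning setup looks roughly like the following PyTorch sketch using the timm library, which provides both architectures; the model names, image size, optimizer, epoch count, and the train/A, train/B, train/C folder layout are all assumptions for illustration.

```python
import timm
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed layout: train/A, train/B, train/C with 400 bright-field images each.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_dl = DataLoader(datasets.ImageFolder("train", transform=tfm),
                      batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Two ImageNet-pretrained backbones, each re-headed for the three quality classes.
models = {
    "effnetv2_s": timm.create_model("tf_efficientnetv2_s", pretrained=True, num_classes=3),
    "vit": timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=3),
}

loss_fn = torch.nn.CrossEntropyLoss()
for name, model in models.items():
    model.to(device).train()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for epoch in range(10):  # epoch count is illustrative
        for x, y in train_dl:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    torch.save(model.state_dict(), f"{name}.pt")
```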

After training, Niioka combined the two deep-learning models into an ensemble model to further improve performance. The research team used the remaining 300 images (100 from each category) to test the optimized ensemble model, which classified the bright-field images of organoids with 70% accuracy.
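
One common way to build such an ensemble is to average the softmax probabilities of the member models; whether the study combined the two networks exactly this way is not stated, so the sketch below, which reuses `models`, `tfm`, `DataLoader`, `datasets` and `device` from the previous sketch, is an assumption.

```python
import torch

@torch.no_grad()
def ensemble_predict(member_models, x):
    """Average the members' softmax probabilities, then take the argmax class."""
    probs = torch.stack([m(x).softmax(dim=-1) for m in member_models]).mean(dim=0)
    return probs.argmax(dim=-1)

# Held-out set: 300 images (100 per class), in the same folder layout as training.
test_dl = DataLoader(datasets.ImageFolder("test", transform=tfm), batch_size=32)
for m in models.values():
    m.eval()

correct = total = 0
for x, y in test_dl:
    pred = ensemble_predict(list(models.values()), x.to(device))
    correct += (pred.cpu() == y).sum().item()
    total += y.numel()
print(f"ensemble accuracy: {correct / total:.1%}")  # the article reports 70%
```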

In contrast, when researchers with years of experience with organoid culture predicted the category of the same bright-field images, their accuracy was less than 60%.

“The deep-learning models outperformed the experts in all respects: in their accuracy, their sensitivity, and in their speed,” says Niioka.
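
Accuracy and per-class sensitivity of this kind can be read off a standard classification report; here is a brief scikit-learn sketch, reusing the test loader and ensemble from above, where per-class recall corresponds to sensitivity.

```python
from sklearn.metrics import classification_report

# Collect ground truth and ensemble predictions over the held-out test loader,
# reusing test_dl and ensemble_predict from the sketch above.
y_true, y_pred = [], []
for x, y in test_dl:
    y_true.extend(y.tolist())
    y_pred.extend(ensemble_predict(list(models.values()), x.to(device)).cpu().tolist())

# Per-class recall in this report is the "sensitivity" for each quality grade.
print(classification_report(y_true, y_pred, target_names=["A", "B", "C"]))
```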

The next step was to check if the ensemble model was also able to correctly classify bright-field images of organoids without genetic modification to make RAX fluoresce.

The researchers tested the trained ensemble model on bright-field images of hypothalamic-pituitary organoids without fluorescent RAX proteins at 30 days of development. Using staining techniques, they found that the organoids the model classified as A (high quality) did indeed show high expression of RAX at 30 days. When culturing continued, these organoids later showed high secretion of ACTH. Meanwhile, low levels of RAX, and later of ACTH, were seen in the organoids the model classified as C (low quality).

“Our model can therefore predict at an early stage of development what the final quality of the organoid will be, based solely on visual appearance,” says Niioka. “As far as we know, this is the first time in the world that deep learning has been used to predict the future of organoid development.”

Moving forward, the researchers plan to improve the accuracy of the deep-learning model by training it on a larger dataset. But even at the current level of accuracy, the model has profound implications for current organoid research.

“We can quickly and easily select high-quality organoids for transplantation and disease modeling, and reduce time and costs by identifying and removing organoids that are developing less well,” concludes Suga. “It’s a game-changer.”

