Connectomics, the ambitious field of study that seeks to map the intricate networks of animal brains, is undergoing a growth spurt. Within the span of a decade, it has journeyed from its nascent stages to a discipline that researchers hope will unlock the enigmas of cognition and the physical underpinnings of neuropathologies such as Alzheimer’s disease.
At its forefront is the use of powerful electron microscopes, to which researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Samuel and Lichtman Labs of Harvard University have added the analytical prowess of machine learning. Unlike a traditional electron microscope, the integrated AI serves as a “brain” that learns a specimen while acquiring the images, intelligently focusing on the relevant pixels at nanoscale resolution, much as animals inspect their worlds.
The system, dubbed “SmartEM,” helps connectomics researchers quickly examine and reconstruct the brain’s complex network of synapses and neurons with nanometer precision, opening new doors to understanding the brain’s intricate architecture.
The integration of hardware and software in the process is crucial. The team embedded a GPU into the support computer connected to their microscope, which made it possible to run machine-learning models on the images as they were acquired and to steer the microscope beam toward areas the AI deemed interesting. “This lets the microscope dwell longer in areas that are harder to understand until it captures what it needs,” says MIT professor and CSAIL principal investigator Nir Shavit. “This step helps in mirroring human eye control, enabling rapid understanding of the images.”
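In code, that closed loop might look something like the minimal sketch below, where `scan_tile` and `model_uncertainty` are hypothetical stand-ins for the microscope interface and the on-GPU model; the article does not describe SmartEM’s internals at this level, so this is only an illustration of the idea of re-imaging ambiguous regions with a longer dwell time.

```python
# Hypothetical sketch of ML-guided adaptive scanning (not the authors' code).
# A fast low-dwell pass covers the whole field; tiles the model finds
# ambiguous are re-imaged with a longer dwell, so beam time is spent
# where the content is hardest to interpret.
import numpy as np

rng = np.random.default_rng(0)

def scan_tile(dwell_us: float) -> np.ndarray:
    """Stand-in for the microscope: a longer dwell time means less shot noise."""
    signal = rng.random((64, 64))
    noise = rng.normal(scale=1.0 / np.sqrt(dwell_us), size=(64, 64))
    return signal + noise

def model_uncertainty(tile: np.ndarray) -> float:
    """Stand-in for the on-GPU model: scores how hard the tile is to interpret."""
    return float(np.std(np.diff(tile, axis=0)))  # crude noise proxy

FAST_DWELL_US, SLOW_DWELL_US, UNCERTAIN = 1.0, 16.0, 0.9

tiles = {}
for tile_id in range(16):                     # quick low-dwell pass over the field
    tile = scan_tile(FAST_DWELL_US)
    if model_uncertainty(tile) > UNCERTAIN:   # model flags the tile as ambiguous
        tile = scan_tile(SLOW_DWELL_US)       # revisit with a longer dwell time
    tiles[tile_id] = tile
```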
“When we look at a human face, our eyes swiftly navigate to the focal points that deliver vital cues for effective communication and comprehension,” says the lead architect of SmartEM, Yaron Meirovitch, a visiting scientist at MIT CSAIL who is also a former postdoc and current research associate neuroscientist at Harvard. “When we immerse ourselves in a book, we don’t scan all of the empty space; rather, we direct our gaze towards the words and characters with ambiguity relative to our sentence expectations. This phenomenon within the human visual system has paved the way for the birth of the novel microscope concept.”
Reconstructing a human brain segment of about 100,000 neurons with a conventional microscope would necessitate a decade of continuous imaging and a prohibitive budget. With SmartEM, by investing in four of these innovative microscopes at less than $1 million each, the task could be completed in a mere three months.
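Taken at face value, those figures imply roughly a tenfold speedup per microscope. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Sanity check of the throughput figures quoted above (article's numbers only).
conventional_years = 10.0          # one conventional microscope, continuous imaging
smartem_count = 4                  # four SmartEM microscopes running in parallel
smartem_years = 3.0 / 12.0         # three months of wall-clock time

overall_speedup = conventional_years / smartem_years     # 40x in wall-clock time
per_scope_speedup = overall_speedup / smartem_count      # ~10x per microscope
print(overall_speedup, per_scope_speedup)                # 40.0 10.0
```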
Nobel Prizes and little worms
Over a century ago, Spanish neuroscientist Santiago Ramón y Cajal was heralded as the first to characterize the structure of the nervous system. Employing the rudimentary light microscopes of his time, he led early explorations into neuroscience, laying the foundational understanding of neurons and sketching the initial outlines of this expansive and uncharted realm, a feat that earned him a Nobel Prize. He noted, on the topics of inspiration and discovery, “As long as our brain is a mystery, the universe, the reflection of the structure of the brain, will also be a mystery.”
Progressing from these early stages, the field has advanced dramatically, from efforts in the 1980s to map the relatively simple connectome of C. elegans, a small worm, to today’s endeavors probing the more intricate brains of organisms like zebrafish and mice. This evolution reflects not only enormous strides, but also escalating complexities and demands: mapping the mouse brain alone means managing a staggering thousand petabytes of data, a task that vastly eclipses the storage capabilities of any university, the team says.
Testing the waters
For their own work, Meirovitch and others from the research team studied 30-nanometer thick slices of octopus tissue that were mounted on tapes, put on wafers, and finally inserted into the electron microscopes. Each section of an octopus brain, comprising billions of pixels, was imaged, letting the scientists reconstruct the slices into a three-dimensional cube at nanometer resolution. This provided an ultra-detailed view of synapses. The chief aim? To colorize these images, identify each neuron, and understand their interrelationships, thereby creating a detailed map or “connectome” of the brain’s circuitry.
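Conceptually, the reconstruction step amounts to stacking aligned two-dimensional sections into a three-dimensional array. A toy sketch of that idea follows; the 30-nanometer section thickness is from the article, while the pixel size and the tiny section dimensions are placeholders (real sections span billions of pixels, and real pipelines also align and segment them):

```python
# Illustrative sketch of turning serial sections into a 3-D volume.
import numpy as np

Z_STEP_NM = 30                     # section thickness, per the article
PIXEL_NM = 4                       # assumed in-plane resolution (placeholder)

# Stand-ins for imaged sections; real ones hold billions of pixels each.
sections = [np.zeros((512, 512), dtype=np.float32) for _ in range(100)]

volume = np.stack(sections, axis=0)           # axes: (z, y, x)
depth_nm = volume.shape[0] * Z_STEP_NM        # 3,000 nm of tissue depth
width_nm = volume.shape[2] * PIXEL_NM         # 2,048 nm across each section
print(volume.shape, depth_nm, width_nm)       # (100, 512, 512) 3000 2048
```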
“SmartEM will cut the imaging time of such projects from two weeks to 1.5 days,” says Meirovitch. “Neuroscience labs that currently can’t be engaged with expensive and long EM imaging will be able to do it now.” The method should also allow synapse-level circuit analysis in samples from patients with psychiatric and neurologic disorders.
Down the line, the team envisions a future where connectomics is both affordable and accessible. They hope that with tools like SmartEM, a wider spectrum of research institutions could contribute to neuroscience without relying on large partnerships, and that the method will soon be a standard pipeline in cases where biopsies from living patients are available. They’re also eager to apply the technology to understanding pathologies, extending its utility beyond connectomics. “We are now endeavoring to introduce this to hospitals for large biopsies, utilizing electron microscopes, aiming to make pathology studies more efficient,” says Shavit.
Two other authors on the paper have MIT CSAIL ties: lead author Lu Mi MCS ’19, PhD ’22, who is now a postdoc at the Allen Institute for Brain Science, and Shashata Sawmya, an MIT graduate student in the lab. The other lead authors are Core Francisco Park and Pavel Potocek, while Harvard professors Jeff Lichtman and Aravi Samuel are additional senior authors. Their research was supported by the NIH BRAIN Initiative and was presented at the 2023 International Conference on Machine Learning (ICML) Workshop on Computational Biology. The work was done in collaboration with scientists from Thermo Fisher Scientific.
By MIT News, November 7, 2023.