Jul. 18, 2025
A rare native Atlantan, Walter Henderson, Phys 93, associate director of the Materials Characterization Facility and principal research scientist at the Institute for Matter and Systems, jokes that he grew up in “the Stone Age.” But the work he does managing 12 research leaders, who train more than 800 fellow scientists to perform over 40,000 hours of work contributing to nearly $400 million in research funding at Georgia Tech each year? That’s positively “Space Age” in nature.
He also notes that the team has a surprising amount of fun on the job. Consider, Henderson smiles, the time in 2018 when fast-food giant Arby’s asked the team to create the world’s smallest ad by etching an advertisement onto a sesame seed. “We were approached by an ad agency who wanted to earn a Guinness World Record for the smallest sign on the market,” he chuckles. “We used a focused ion beam to do it. It’s a bit like using a laser to inscribe things, except instead of a light beam, it’s this beam of gallium metal ions that you use to etch into samples such as the seed, which also featured the Arby’s logo.”
The ad was then set up at one of the chain’s restaurants with an electron microscope for viewing, given that it was about as wide as a human hair, Henderson notes. The agency also made a follow-on internet commercial. “My claim to fame is that I suited up in a bunny suit for them to shoot the video, along with some footage in the cleanroom itself,” he says. “But the actual work was done in our basement-floor Microanalysis lab in the Marcus Nanotechnology Building. In any event, you can still find the ad on YouTube. I just wish I’d had a better agent: I didn’t get any royalties at all, not even, like, a year of free Arby’s.”
It’s not the only time that the facility — a typically serious scientific setting — has been put to unusual or interesting purposes, though. “For instance, we’ve been asked to analyze pieces of clothing and conduct forensics for criminal investigations,” notes Henderson. “Given our advanced research equipment, we’ve also been asked to review everything from moon rocks to frogs’ tongues — and the practical applications that companies can derive from their scientific properties. On top of it, they’ve also had us test samples and run mechanical property analyses for the Library of Congress and [on] trade secret items for different companies or matters of national security for the government.”
While life inside the lab is fairly routine, Henderson notes, it’s definitely more interesting and varied than some might suspect. “There are certainly moments,” he says. That said, just don’t ask him what happened to the original seed, which has since gone AWOL. “I don’t know what happened to it… or if someone ate it,” he muses. “But I still have a bottle of sesame seeds in my office, so we could always make a new one.”
by Scott Steinberg, Mgt 99
Read the latest issue of the Georgia Tech Alumni Magazine
News Contact
Amelia Neumeister | Research Communications Program Manager
Institute for Matter and Systems
Jul. 18, 2025
As more satellites launch into space, the satellite industry has sounded the alarm about the danger of collisions in low Earth orbit (LEO). What is less understood is what might happen as more missions head to a more targeted destination: the moon.
According to The Planetary Society, more than 30 missions are slated to launch to the moon between 2024 and 2030, backed by the U.S., China, Japan, India, and various private corporations. That compares to over 40 missions to the moon between 1959 and 1979 and a scant three missions between 1980 and 2000.
A multidisciplinary team at Georgia Tech has found that while collision probabilities in orbits around the moon are very low compared to Earth orbit, spacecraft in lunar orbit will likely need to conduct multiple costly collision avoidance maneuvers each year. The Journal of Spacecraft and Rockets published the Georgia Tech collision-avoidance study in March.
“The number of close approaches in lunar orbit is higher than some might expect, given that there are only tens of satellites, rather than the thousands in low Earth orbit,” says paper co-author Mariel Borowitz, associate professor in the Sam Nunn School of International Affairs in the Ivan Allen College of Liberal Arts.
Borowitz and other researchers attribute these risky approaches partly to operators clustering their spacecraft in a limited number of favorable orbits, and partly to the difficulty of monitoring the exact location of spacecraft more than 200,000 miles away.
“There is significant uncertainty about the exact location of objects around the moon. This, combined with the high cost associated with lunar missions, means that operators often undertake maneuvers even when the probability is very low — up to one in 10 million,” Borowitz explains.
The Georgia Tech research is the first published study showing short- and long-term collision risks in cislunar orbits. Using a series of Monte Carlo simulations, the researchers modeled the probability of various outcomes in a process that cannot be easily predicted because of random variables.
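The Monte Carlo approach can be illustrated in miniature. The sketch below samples the relative position error between two spacecraft at a predicted close approach and counts how often they pass within a miss-distance threshold. All numbers are illustrative, not the study's actual parameters or tooling; note how the computed probability depends on the tracking uncertainty, which is why poorly tracked objects force conservative maneuvers.

```python
import math
import random

def conjunction_probability(sigma_m, threshold_m, trials=100_000, seed=42):
    """Estimate the probability that two objects, each with independent
    Gaussian position uncertainty (std dev sigma_m per axis), come within
    threshold_m of each other at a predicted zero-miss close approach."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Relative position error per axis is the difference of two
        # independent Gaussians (std dev sigma_m * sqrt(2)).
        dx = rng.gauss(0, sigma_m) - rng.gauss(0, sigma_m)
        dy = rng.gauss(0, sigma_m) - rng.gauss(0, sigma_m)
        dz = rng.gauss(0, sigma_m) - rng.gauss(0, sigma_m)
        if math.sqrt(dx * dx + dy * dy + dz * dz) < threshold_m:
            hits += 1
    return hits / trials

# Large uncertainty spreads the probability mass out: the computed risk is
# tiny but nonzero, so a cautious operator maneuvers anyway.
print(conjunction_probability(sigma_m=1000.0, threshold_m=100.0))
# Tighter tracking concentrates the estimate and resolves real threats.
print(conjunction_probability(sigma_m=100.0, threshold_m=100.0))
```

This is the core loop of any conjunction-screening Monte Carlo; production tools add orbit propagation and covariance evolution on top of it.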
“Our analysis suggests that satellite operators must perform up to four maneuvers annually for each satellite for a fleet of 50 satellites in low lunar orbit (LLO),” said one of the study’s authors, Brian Gunter, associate professor in the Daniel Guggenheim School of Aerospace Engineering.
He noted that with only 10 satellites in LLO, a satellite might still need a yearly maneuver. This is supported by what current cislunar operators have reported.
Favored Orbits
Most close encounters are expected to occur near the moon’s equator, an intersection point between the orbit planes of commonly used “frozen” and low lunar orbits, which are preferred by many operators. Other possible regions of congestion can occur at the Lagrangian points, or regions where the gravitational forces of Earth and the moon balance out. Stable orbits in these regions have names such as Halo and Lyapunov orbits.
“Lagrangian points are an interesting place to put a satellite because it can maintain its orbit for long periods with very little maneuvering and thrusting. Frozen orbits, too. Anywhere outside these special areas, you have to spend a lot of fuel to maintain an orbit,” he said.
Gunter and other researchers worry that if operators aren’t coordinated about how they plan lunar missions, opportunities for collision will increase in these popular orbits.
“The close approaches were much more common than I would have intuitively anticipated,” says lead study author Stef Crum.
The 2024 graduate of Georgia Tech’s aerospace engineering doctoral program notes that, considering the small number of satellites in lunar orbit, the need for multiple maneuvers was “really surprising.”
Crum, co-founder of Reditus Space, a startup launched in 2024 to provide reusable orbital re-entry services, adds that the cislunar environment is so challenging because “it’s incredibly vast.”
His research also examines ways to improve object monitoring in cislunar space. Maintaining continuous custody of these objects is difficult because a target’s position must be monitored over the entire duration of its trajectory.
“That wasn’t feasible for translunar orbits, given the vast volume of cislunar orbit, which stretches multiple millions of kilometers in three dimensions,” he says.
By estimating a satellite’s orbit from observed data and constraining its presumed location and direction, rather than tracking it without interruption (the approach known as maintaining continuous custody), Crum greatly simplified the process.
“You no longer need thousands of satellites or a set of enormous satellites to cover all potential trajectories,” he explains. “Instead, one or a few satellites are required, and operators can lose custody for a time as long as the connection is reacquired later.”
Since the team started their study, there has been a lot of interest in the moon and cislunar activity — both NASA and China’s National Space Administration are planning to send humans to the moon. In the last two years, India, Japan, the U.S., China, Russia, and four private companies have attempted missions to the moon.
Why the Moon
Spacefaring nations’ intense interest in exploring the lunar surface comes as no surprise given that the moon offers a variety of resources, including solar power, water, oxygen, and metals like iron, titanium, and uranium. It also contains Helium-3, a potential fuel for nuclear fusion, and rare earth metals vital for modern technology. With the recent discovery of water ice, the moon could also be a plentiful source of rocket fuel, since water can be split and liquefied into the oxygen and hydrogen needed to launch deep space missions to destinations like Mars. In February, Georgia Tech announced that researchers have developed new algorithms to help Intuitive Machines’ lunar lander find water ice on the moon.
Commercial space companies like Axiom Space and Redwire Space, as well as space agencies, are actively building lunar infrastructure, from satellite constellations to orbital platforms to support communication, navigation, scientific research, and eventually space tourism.
A key project involves the Lunar Gateway, a joint venture of NASA and international space agencies like ESA, JAXA, and CSA, as well as commercial partners. Humanity’s first space station around the moon will serve as a central hub for human exploration of the moon and is considered a stepping stone for future deep space missions.
Getting Ahead of a Gold Rush to the Moon
All this activity underscores the urgency of getting out in front of potential crowding issues — something that didn’t happen in LEO, where near misses, or conjunctions, are now frequent. LEO, which extends from about 100 to 1,200 miles above Earth’s surface, hosts more than 14,000 satellites and 120 million pieces of debris from launches, collisions, and wear and tear, Reuters reports.
“Using the near-Earth environment as an example, the space object population has gone from approximately 6,000 active satellites in the early 2020s to an anticipated 60,000 satellites in the coming decade if the projected number of large satellite constellations currently in the works gets deployed. That poses many challenges in terms of how we can manage that sustainably,” observed Gunter. “If something similar happens in the lunar environment, say if Artemis (NASA’s program to establish the first long-term presence on the moon) is successful and a lunar base is established, and there is discovery of volatiles or water deposits, it could initiate a kind of gold rush effect that might accelerate the number of actors in cislunar space.”
For this reason, Borowitz argues for the need to begin working on coordination, either in the planning of the orbits for future missions or by sharing information about the location of objects operating in lunar orbit. She pointed out that spacecraft outfitted for moon missions are expensive, making a collision highly costly. Also, debris from such a scenario would spread in an unpredictable way, which could be problematic for other objects.
Gunter agreed, noting, “If we’re not careful, we could be putting a lot of things in this same path. We must ensure we build out the cislunar orbital environment in a smart way, where we’re not intentionally putting spacecraft in the same orbital spaces. If we do that, everyone should be able to get what they want and not be in each other’s way.”
Borowitz says some coordination efforts are underway with the UN Committee on the Peaceful Uses of Outer Space and the creation of an action team on lunar activities; however, international diplomacy is a time-consuming process, and it can be a challenge to keep pace with advancements in technology.
She contends that the Georgia Tech study could provide baseline data that “could be helpful for international coordination efforts, helping to ensure that countries better understand potential future risks.”
Gunter and Borowitz say follow-on research for the team could involve looking into the Lunar Gateway orbit and other special orbits to see how crowded that space is likely to get, and then running an end-to-end simulation of those orbits to determine the most effective way to build them out while avoiding collision risks. Ultimately, they intend to develop guidelines to help ensure that future space actors headed to the moon can operate safely.
Jul. 16, 2025
The National Science Foundation (NSF) has awarded Georgia Tech and its partners $20 million to build a powerful new supercomputer that will use artificial intelligence (AI) to accelerate scientific breakthroughs.
Called Nexus, the system will be one of the most advanced AI-focused research tools in the U.S. Nexus will help scientists tackle urgent challenges such as developing new medicines, advancing clean energy, understanding how the brain works, and driving manufacturing innovations.
“Georgia Tech is proud to be one of the nation’s leading sources of the AI talent and technologies that are powering a revolution in our economy,” said Ángel Cabrera, president of Georgia Tech. “It’s fitting we’ve been selected to host this new supercomputer, which will support a new wave of AI-centered innovation across the nation. We’re grateful to the NSF, and we are excited to get to work.”
Designed from the ground up for AI, Nexus will give researchers across the country access to advanced computing tools through a simple, user-friendly interface. It will support work in many fields, including climate science, health, aerospace, and robotics.
“The Nexus system's novel approach combining support for persistent scientific services with more traditional high-performance computing will enable new science and AI workflows that will accelerate the time to scientific discovery,” said Katie Antypas, National Science Foundation director of the Office of Advanced Cyberinfrastructure. “We look forward to adding Nexus to NSF's portfolio of advanced computing capabilities for the research community.”
Nexus Supercomputer — In Simple Terms
- Built for the future of science: Nexus is designed to power the most demanding AI research — from curing diseases, to understanding how the brain works, to engineering quantum materials.
- Blazing fast: Nexus can crank out over 400 quadrillion operations per second — the equivalent of everyone in the world continuously performing 50 million calculations every second.
- Massive brain plus memory: Nexus combines the power of AI and high-performance computing with 330 trillion bytes of memory to handle complex problems and giant datasets.
- Storage: Nexus will feature 10 quadrillion bytes of flash storage, equivalent to about 10 billion reams of paper. Stacked, that’s a column reaching 500,000 km high — enough to stretch from Earth to the moon and a third of the way back.
- Supercharged connections: Nexus will have lightning-fast connections to move data almost instantaneously, so researchers do not waste time waiting.
- Open to U.S. researchers: Scientists from any U.S. institution can apply to use Nexus.
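The equivalences in the list above check out on the back of an envelope, assuming a world population of roughly 8 billion and a standard 500-sheet ream about 5 cm thick (both assumptions, not figures from the announcement):

```python
# Back-of-the-envelope check of the equivalences quoted above.
ops_per_second = 400e15          # 400 quadrillion operations per second
world_population = 8e9           # assumption: ~8 billion people
per_person = ops_per_second / world_population
print(per_person)                # ~50 million operations per person per second

reams = 10e9                     # 10 billion reams of paper
ream_thickness_m = 0.05          # assumption: ~5 cm per 500-sheet ream
stack_km = reams * ream_thickness_m / 1000
print(stack_km)                  # ~500,000 km, Earth to the moon and then some
```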
Why Now?
AI is rapidly changing how science is investigated. Researchers use AI to analyze massive datasets, model complex systems, and test ideas faster than ever before. But these tools require powerful computing resources that — until now — have been inaccessible to many institutions.
This is where Nexus comes in. It will make state-of-the-art AI infrastructure available to scientists all across the country, not just those at top tech hubs.
“This supercomputer will help level the playing field,” said Suresh Marru, principal investigator of the Nexus project and director of Georgia Tech’s new Center for AI in Science and Engineering (ARTISAN). “It’s designed to make powerful AI tools easier to use and available to more researchers in more places.”
Srinivas Aluru, Regents’ Professor and senior associate dean in the College of Computing, said, “With Nexus, Georgia Tech joins the league of academic supercomputing centers. This is the culmination of years of planning, including building the state-of-the-art CODA data center and Nexus’ precursor supercomputer project, HIVE.”
Like Nexus, HIVE was supported by NSF funding. Both Nexus and HIVE are supported by a partnership between Georgia Tech’s research and information technology units.
A National Collaboration
Georgia Tech is building Nexus in partnership with the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign, which runs several of the country’s top academic supercomputers. The two institutions will link their systems through a new high-speed network, creating a national research infrastructure.
“Nexus is more than a supercomputer — it’s a symbol of what’s possible when leading institutions work together to advance science,” said Charles Isbell, chancellor of the University of Illinois and former dean of Georgia Tech’s College of Computing. “I'm proud that my two academic homes have partnered on this project that will move science, and society, forward.”
What’s Next
Georgia Tech will begin building Nexus this year, with its expected completion in spring 2026. Once Nexus is finished, researchers can apply for access through an NSF review process. Georgia Tech will manage the system, provide support, and reserve up to 10% of its capacity for its own campus research.
“This is a big step for Georgia Tech and for the scientific community,” said Vivek Sarkar, the John P. Imlay Dean of Computing. “Nexus will help researchers make faster progress on today’s toughest problems — and open the door to discoveries we haven’t even imagined yet.”
News Contact
Siobhan Rodriguez
Senior Media Relations Representative
Institute Communications
Jul. 15, 2025
The LIGO-Virgo-KAGRA (LVK) collaboration, which includes the Laser Interferometer Gravitational-Wave Observatory (LIGO), has detected an extremely unusual binary black hole merger — a phenomenon that occurs when two black holes are pulled into each other's orbit and combine. Announced yesterday in a California Institute of Technology press release, the merger, designated GW231123, is the most massive ever detected with gravitational waves.
Before merging, both black holes were spinning exceptionally fast, and their masses fell into a range that should be very rare — or impossible.
“Most models don't predict black holes this big can be made by supernovas, and our data indicates that they were spinning at a rate close to the limit of what’s theoretically possible,” says Margaret Millhouse, a research scientist in the School of Physics who played a key role in the research. “Where could they have come from? It raises interesting questions.”
A binary black hole merger absorbs characteristics from both of the contributors, she adds. “As a result, this is not only the most massive binary black hole ever seen but also the fastest-spinning binary black hole confidently detected with gravitational waves.”
“GW231123 is a record-breaking event,” says School of Physics Professor Laura Cadonati, who has been a member of the LIGO Scientific Collaboration since 2002. “LIGO has been observing the cosmos for 10 years now. This discovery underscores that there is still so much that this instrument can help us learn.”
A Cosmic View
The findings challenge current theories on how smaller black holes form, says School of Physics Assistant Professor and LIGO collaborator Surabhi Sachdev. Smaller black holes are the result of supernovae: dying and collapsing stars. During that collapse, explosions can tear apart or eject part of the star’s mass — limiting the size of the black hole that forms.
“Black holes from supernovae can weigh up to about 60 times the mass of our Sun,” she says. “The black holes in this merger were likely the mass of hundreds of suns.”
Because of its size, GW231123 also allowed the team to study the merger in unprecedented detail. “LIGO has observed scores of black hole mergers,” says Cadonati. “Of these, GW231123 has provided us with the clearest view of the ‘grand finale’ of a merger thus far. This adds a new clue to solving the puzzle that is black holes, including their origins and properties.”
“While we saw that our expectations matched the data, the extreme nature of this event pushed our models to their limits,” Millhouse adds. “A massive, highly spinning system like this will be of interest to researchers who study how binary black holes form.”
Decoding a Split-Second Signal
Millhouse and School of Physics Postdoctoral Fellow Prathamesh Joshi used Einstein’s equations for general relativity to confirm LIGO’s detections.
To find black holes, LIGO measures distortions in spacetime — ripples that are created when two black holes collide. These gravitational wave patterns carry the signature signal of a black hole collision.
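The article doesn't detail the search algorithm, but a standard way to pull a known waveform shape out of noisy detector data is matched filtering: slide a template against the data and look for a strong correlation. Below is a toy, self-contained version with synthetic data and arbitrary numbers; nothing here comes from LIGO's actual pipelines.

```python
import math
import random

def matched_filter_peak(data, template):
    """Slide a template across the data and return the maximum
    normalized cross-correlation (a toy stand-in for matched filtering)."""
    n, m = len(data), len(template)
    t_norm = math.sqrt(sum(t * t for t in template))
    best = 0.0
    for start in range(n - m + 1):
        seg = data[start:start + m]
        score = sum(s * t for s, t in zip(seg, template)) / t_norm
        best = max(best, score)
    return best

# Toy "chirp" template: a short oscillation with rising frequency,
# loosely mimicking the ~0.1 s signal described above.
template = [math.sin(i * i / 200) for i in range(40)]

rng = random.Random(0)
noise = [rng.gauss(0, 0.3) for _ in range(400)]
signal = noise[:]
for i, t in enumerate(template):
    signal[200 + i] += t  # bury the chirp in the noise

print(matched_filter_peak(noise, template))   # small: noise only
print(matched_filter_peak(signal, template))  # larger: template is present
```

Real searches use banks of relativity-derived templates and frequency-domain noise weighting, but the underlying idea is this correlation peak.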
“In this case, the signal lasted for just one-tenth of a second, but it was very clear,” says Joshi. "Previously, we designed a special study to detect these interesting signals, which accounted for all the unusual properties of such massive systems — and it paid off!”
“To ensure it wasn’t noise, the Georgia Tech team first reconstructed the signal in a model-agnostic way,” Millhouse adds. “We then compared those reconstructions to a model that uses Einstein's equations of general relativity, and both reconstructions looked very similar, which helped confirm that this highly unusual phenomenon was a genuine detection.”
Sachdev says that seeing the signal at both LIGO observatories — located in Hanford, Washington, and Livingston, Louisiana — was also critical. “These short signals are very hard to detect, and this signal is so unlike any of the other binary black holes that we've seen before,” she says. “Without both detectors, we would have missed it.”
A Decade of Discovery
While the team has yet to determine how the original black holes formed, one theory is that they may have resulted from earlier mergers themselves. “This could have been a chain of mergers,” Sachdev explains. “This tells us that they could have existed in a very dense environment like a nuclear star cluster or an active galactic nucleus.” Their spins provide another clue, as rapid spin is a characteristic usually seen in black holes that are themselves the product of a merger.
The team adds that GW231123 could provide clues on how larger black holes are formed — including the mysterious supermassive black holes at the center of galaxies.
“Gravitational wave science is almost a decade old, and we're still making fundamental discoveries,” says Millhouse. “It’s exciting that LIGO is continuing to detect new phenomena, and this is at the edge of what we've seen thus far. There's still so much we can learn.”
The team expects to update their catalogue of black holes in August 2025, which will provide another window into how this exceptionally heavy black hole might fit into the universe, and what we can continue to learn from it.
Funding: The LIGO Laboratory is supported by the U.S. National Science Foundation and operated jointly by Caltech and MIT.
Jul. 11, 2025
Nanoparticles – the tiniest building blocks of our world – are constantly in motion, bouncing, shifting, and drifting in unpredictable paths shaped by invisible forces and random environmental fluctuations.
Better understanding their movements is key to developing better medicines, materials, and sensors. But observing and interpreting their motion at the atomic scale has presented scientists with major challenges.
However, researchers in Georgia Tech’s School of Chemical and Biomolecular Engineering (ChBE) have developed an artificial intelligence (AI) model that learns the underlying physics governing those movements.
The team’s research, published in Nature Communications, draws on thousands of experimental recordings and enables scientists not only to analyze but also to generate realistic nanoparticle motion trajectories that are indistinguishable from real experiments.
A Clearer Window into the Nanoworld
Conventional microscopes, even extremely powerful ones, struggle to observe moving nanoparticles in fluids. And traditional physics-based models, such as Brownian motion, often fail to fully capture the complexity of unpredictable nanoparticle movements, which can be influenced by factors such as viscoelastic fluids, energy barriers, or surface interactions.
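For reference, the Brownian baseline that such models start from is easy to simulate: steps are independent Gaussian kicks, and the mean squared displacement (MSD) grows linearly with lag time. The parameters below are arbitrary and illustrative; real LPTEM trajectories deviate from this linear law, which is exactly what makes them hard to model.

```python
import random

def brownian_trajectory(steps, dt, diffusion, seed=1):
    """Simulate a 2D Brownian trajectory; each step per axis is a
    Gaussian kick with std dev sqrt(2 * D * dt)."""
    rng = random.Random(seed)
    sigma = (2 * diffusion * dt) ** 0.5
    x, y = 0.0, 0.0
    traj = [(x, y)]
    for _ in range(steps):
        x += rng.gauss(0, sigma)
        y += rng.gauss(0, sigma)
        traj.append((x, y))
    return traj

def msd(traj, lag):
    """Mean squared displacement at a given lag (in steps)."""
    disps = [
        (traj[i + lag][0] - traj[i][0]) ** 2 + (traj[i + lag][1] - traj[i][1]) ** 2
        for i in range(len(traj) - lag)
    ]
    return sum(disps) / len(disps)

traj = brownian_trajectory(steps=20_000, dt=0.001, diffusion=1.0)
# For ideal 2D Brownian motion, MSD(lag) = 4 * D * lag * dt: linear in lag.
# Sub- or super-linear growth flags the anomalous behavior — viscoelastic
# fluids, energy barriers, surface interactions — mentioned above.
for lag in (10, 20, 40):
    print(lag, msd(traj, lag))
```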
To overcome these obstacles, the researchers developed a deep generative model (called LEONARDO) that can analyze and simulate the motion of nanoparticles captured by liquid-phase transmission electron microscopy (LPTEM), allowing scientists to better understand nanoscale interactions invisible to the naked eye. Unlike traditional imaging, LPTEM can observe particles as they move naturally within a microfluidic chamber, capturing motion down to the nanometer and millisecond.
“LEONARDO allows us to move beyond observation to simulation,” said Vida Jamali, assistant professor and Daniel B. Mowrey Faculty Fellow in ChBE@GT. “We can now generate high-fidelity models of nanoscale motion that reflect the actual physical forces at play. LEONARDO helps us not only see what is happening at the nanoscale but also understand why.”
To train and test LEONARDO, the researchers used a model system of gold nanorods diffusing in water. They collected more than 38,000 short trajectories under various experimental conditions, including different particle sizes, frame rates, and electron beam settings. This diversity allowed the model to generalize across a broad range of behaviors and conditions.
The Power of LEONARDO’s Generative AI
What distinguishes LEONARDO is its ability to learn from experimental data while being guided by physical principles, said study lead author Zain Shabeeb, a PhD student in ChBE@GT. LEONARDO uses a specialized “loss function” based on known laws of physics to ensure that its predictions remain grounded in reality, even when the observed behavior is highly complex or random.
“Many machine learning models are like black boxes in that they make predictions, but we don’t always know why,” Shabeeb said. “With LEONARDO, we integrated physical laws directly into the learning process so that the model’s outputs remain interpretable and physically meaningful.”
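The paper's exact loss is not spelled out here, but the general recipe for a physics-informed loss is to pair a data-fit term with a penalty that pushes generated trajectories toward a known physical law. Below is a toy one-dimensional version using the Brownian step-variance relation var(step) = 2·D·dt; the function and its formulation are illustrative sketches, not LEONARDO's actual code.

```python
def physics_informed_loss(generated, target, diffusion, dt, weight=1.0):
    """Toy physics-informed loss: mean squared reconstruction error plus a
    penalty tying the generated step variance to the Brownian law
    var(step) = 2 * D * dt per axis. Illustrative only."""
    # Data term: how closely the generated trajectory matches the target.
    data_term = sum((g - t) ** 2 for g, t in zip(generated, target)) / len(target)
    # Physics term: penalize step statistics that violate the diffusion law.
    steps = [b - a for a, b in zip(generated, generated[1:])]
    mean = sum(steps) / len(steps)
    var = sum((s - mean) ** 2 for s in steps) / len(steps)
    physics_term = (var - 2 * diffusion * dt) ** 2
    return data_term + weight * physics_term
```

Minimizing this during training keeps the model's outputs anchored to physically plausible statistics even when individual trajectories look random.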
LEONARDO uses a transformer-based architecture, the same kind of model behind many modern language AI applications. Just as a language model learns grammar and syntax, LEONARDO learns the “grammar” of nanoparticle movement, identifying the hidden factors that shape how nanoparticles interact with their environment.
Future Impact
By simulating vast libraries of possible nanoparticle motions, LEONARDO could help train AI systems that automatically control and adjust electron microscopes for optimal imaging, paving the way for “smart” microscopes that adapt in real time, the researchers said.
“Understanding nanoscale motion is of growing importance to many fields, including drug delivery, nanomedicine, polymer science, and quantum technologies,” Jamali said. “By making it easier to interpret particle behavior, LEONARDO could help scientists design better materials, improve targeted therapies, and uncover new fundamental insights into how matter behaves at small scales."
CITATION: Zain Shabeeb, Naisargi Goyal, Pagnaa Attah Nantogmah, and Vida Jamali, “Learning the diffusion of nanoparticles in liquid phase TEM via physics-informed generative AI,” Nature Communications, 2025.
News Contact
Brad Dixon, braddixon@gatech.edu
Jul. 11, 2025
Right now, about 70 million miles away, a Ramblin’ Wreck from Georgia Tech streaks through the cosmos. It’s a briefcase-sized spacecraft called Lunar Flashlight that was assembled in a Georgia Tech Research Institute (GTRI) cleanroom in 2021, then launched aboard a SpaceX rocket in 2022.
The plan was to send Lunar Flashlight to the moon, where the spacecraft would shoot lasers at its south pole in a search for frozen water. Mission control for the flight was on Georgia Tech’s campus, where students in the Daniel Guggenheim School of Aerospace Engineering (AE) sat in the figurative driver’s seat. They worked for several months in 2023 to coax the craft toward its intended orbit in coordination with NASA’s Jet Propulsion Lab (JPL).
A faulty propulsion system kept the CubeSat from reaching its goal. Disappointing, to be sure, but it opened a new series of opportunities for the student controllers. When it became clear Lunar Flashlight wouldn’t reach the moon and would instead settle into an orbit around the sun, JPL turned over ownership to Georgia Tech, which is now the only higher education institution that has controlled an interplanetary spacecraft.
Lunar Flashlight’s initial orbit, planned destination, and current whereabouts mirror much of the College of Engineering’s research in space technology. Some faculty are focused on projects in low Earth orbit (LEO). Others have an eye on the moon. A third group is looking well beyond our small corner of the solar system.
No matter the distance, though, each of these Georgia Tech engineers is working toward a new era of exploration and scientific discovery.
News Contact
Jason Maderer
College of Engineering
Jul. 11, 2025
A study from Georgia Tech’s School of Chemical and Biomolecular Engineering introduces LEONARDO, a deep generative AI model that reveals the hidden dynamics of nanoparticle motion in liquid environments. By analyzing over 38,000 experimental trajectories captured through liquid-phase transmission electron microscopy (LPTEM), LEONARDO not only interprets but also generates realistic simulations of nanoscale movement. This innovation marks a major leap in understanding the physical forces at play in nanotechnology, with promising implications for medicine, materials science, and sensor development.
Jul. 10, 2025
Giga, a global initiative focused on expanding internet connectivity to schools, launched its new tech and innovation event series “Giga Talks” on June 19 with a keynote address from Pascal Van Hentenryck, a leading artificial intelligence expert from the Georgia Institute of Technology.
Van Hentenryck serves as the A. Russell Chandler III Chair and Professor in Georgia Tech’s H. Milton Stewart School of Industrial and Systems Engineering. He is also the director of Tech AI, Georgia Tech’s new strategic hub for artificial intelligence, and the U.S. National Science Foundation AI Institute for Advances in Optimization (AI4OPT), which operates under Tech AI’s umbrella.
In his talk, “AI for Social Good,” Van Hentenryck showcased how AI technologies can drive impact across key sectors—including mobility, education, healthcare, disaster response, and e-commerce. Drawing from ongoing research and real-world deployments, he emphasized the critical role of human-centered design and interdisciplinary collaboration in developing AI that benefits society at large.
“AI has tremendous potential to serve the public good when guided by ethics, equity, and purpose-driven innovation,” said Van Hentenryck. “At Georgia Tech, our work aims to harness this potential to create meaningful change in people’s lives.”
The event marked the debut of Giga Talks, a new speaker series designed to convene global thought leaders, engineers, and policymakers around timely issues in technology and innovation. The initiative supports Giga’s broader mission to connect every school in the world to the internet and unlock digital opportunities for children everywhere.
A video recording of Van Hentenryck’s talk is available here.
News Contact
Breon Martin
AI Marketing Communications Manager
Jul. 10, 2025
Pascal Van Hentenryck, the A. Russell Chandler III Chair and professor at Georgia Tech, and director of the U.S. National Science Foundation AI Institute for Advances in Optimization (AI4OPT) and Tech AI, delivered a keynote address at the 11th IFAC Conference on Manufacturing Modelling, Management and Control (MIM 2025), hosted by the Norwegian University of Science and Technology (NTNU).
Combining Technologies for Real-World Results
Van Hentenryck introduced a series of foundational approaches—such as primal and dual optimization proxies, predict-then-optimize strategies, self-supervised learning, and deep multi-stage policies—that enable AI systems to operate effectively and responsibly in high-stakes, real-time environments. These frameworks demonstrate the power of integrating AI with domain-specific reasoning to achieve results unattainable by either field alone.
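To make the predict-then-optimize idea concrete, here is a hedged, generic illustration (not AI4OPT's actual pipeline): a model first forecasts an unknown quantity from data, and that forecast then feeds a downstream optimization, in this case a textbook newsvendor order decision with a closed-form critical-fractile solution. All variable names, costs, and data below are invented for illustration.

```python
import numpy as np
from statistics import NormalDist

# Step 1: predict — fit a least-squares demand forecaster on synthetic history.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 3))             # historical features
true_w = np.array([40.0, 25.0, 10.0])            # ground-truth weights
demand = X @ true_w + rng.normal(0, 2.0, 200)    # noisy historical demand
w, *_ = np.linalg.lstsq(X, demand, rcond=None)   # learned weights

# Step 2: optimize — plug the forecast into a newsvendor decision.
# With understock cost cu and overstock cost co, the optimal order is the
# critical-fractile quantile cu / (cu + co) of the demand distribution.
cu, co, sigma = 5.0, 1.0, 2.0                    # costs and forecast-error std
x_new = np.array([0.5, 0.5, 0.5])                # features for the new period
forecast = float(x_new @ w)
critical_fractile = cu / (cu + co)
order = forecast + sigma * NormalDist().inv_cdf(critical_fractile)
```

A key theme of this research area is that training the predictor to minimize the *decision* cost, rather than plain forecast error, can outperform this two-step split; the sketch above shows only the baseline pipeline.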
“This is not just about building smarter algorithms,” Van Hentenryck said. “It’s about designing AI that can adapt, learn, and optimize under uncertainty—across supply chains, energy systems, and manufacturing networks.”
Grounded in Real-World Impact
The keynote aligned directly with the MIM 2025 focus on logistics and production systems. Drawing from recent work in supply chain optimization and smart manufacturing, Van Hentenryck emphasized how AI4OPT’s research is already generating measurable impact in industry.
MIM 2025, organized by NTNU’s Production Management Research Group and supported by MHI and CICMHE, featured more than 40 experts delivering keynotes, presenting research, and leading breakout sessions across topics in modeling, control, and decision-making in manufacturing and logistics.
About Tech AI
Tech AI is Georgia Tech’s strategic initiative to lead in the development and application of artificial intelligence across disciplines and industries. Serving as a unifying platform for AI research, education, and collaboration, Tech AI connects researchers, industry, and government partners to drive responsible innovation in areas such as healthcare, mobility, energy, sustainability, and education. As director of Tech AI, Van Hentenryck helps guide the institute’s research vision and strategic alignment across Georgia Tech’s AI portfolio. Learn more at ai.gatech.edu.
About AI4OPT
The AI Institute for Advances in Optimization (AI4OPT) is one of the National Science Foundation’s flagship AI Institutes and is led by Georgia Tech. The institute brings together experts in artificial intelligence, optimization, and control to tackle grand challenges in supply chains, transportation, and energy systems.
AI4OPT is one of several NSF-funded AI institutes housed within Tech AI’s collaborative framework, enabling cross-disciplinary research with real-world outcomes. Learn more at ai4opt.org.
News Contact
Breon Martin
AI Marketing Communications Manager