Feb. 27, 2026
DOE Office of Science ASCR Reports
ASCR Workshop on Inverse Methods for Complex Systems under Uncertainty
ASCR Workshop on Energy-Efficient Computing for Science

Georgia Tech researchers applied their expertise to a national research program that will shape the future of computing. Their work may yield more energy-efficient computers and better predictions for environmental challenges like carbon storage, tsunamis, wildfires, and sustainable energy. 

The Department of Energy Office of Science recently released two reports through its Advanced Scientific Computing Research (ASCR) program. The reports were produced by workshops that brought together researchers from universities, national labs, government, and industry to set priorities for scientific computing.

Professor Felix Herrmann served on the organizing committee for the Workshop on Inverse Methods for Complex Systems under Uncertainty. Assistant Professor Peng Chen joined Herrmann as a workshop participant, contributing expertise in data science and machine learning.

Inverse methods work backward from outcomes to find their causes. Scientists use these tools to study complex systems, such as designing new materials with targeted properties or using data from past wildfires to map vulnerable areas and predict the behavior of future fires.

The ASCR report highlighted Herrmann’s work on seismic exploration and monitoring through digital twins. Built on inverse methods, digital twins go beyond static models to become virtual systems that accurately mirror their physical counterparts.

Digital twins integrate real-time data sources, including fluid flows, monitoring and control systems, risk assessments, and human decisions. These models also account for uncertainty and address data gaps or limitations. 

The DOE organized the workshop to support the growing role of inverse modeling. The group identified four priority research directions (PRDs) to guide future work. The PRDs are:

  • PRD 1: Discovering, exploiting, and preserving structure
  • PRD 2: Identifying and overcoming model limitations
  • PRD 3: Integrating disparate multimodal and/or dynamic data
  • PRD 4: Solving goal-oriented inverse problems for downstream tasks

“A digital twin is a system you can control, like to optimize operations or to minimize risk,” said Herrmann, who holds joint appointments in the Schools of Earth and Atmospheric Sciences, Electrical and Computer Engineering, and Computational Science and Engineering.

“Digital twins give you a principled way to consider uncertainties, of which there are a lot in subsurface monitoring. If you inject carbon dioxide too fast, you will increase the pressure and may fracture the rock. If you inject too slowly, the process may become too costly. Digital twins help us make balanced decisions under uncertainty.”

Supercomputers, algorithms, and artificial intelligence now power modern science. However, these tools consume enormous amounts of energy. This raises concerns about how to sustain computing and scientific research as we know them in the decades ahead.

Professors Rich Vuduc and Hyesoon Kim co-authored the report from the Workshop on Energy-Efficient Computing for Science. At the three-day ASCR workshop, participants identified five priority research directions (PRDs):

  • PRD 1: Co-design energy-efficient hardware devices and architectures for important workloads
  • PRD 2: Define the algorithmic foundations of energy-efficient scientific computing
  • PRD 3: Reconceptualize software ecosystems for energy efficiency
  • PRD 4: Enable energy-efficient data management for data centers, instruments, and users
  • PRD 5: Develop integrated, scalable energy measurement and modeling capabilities for next-generation computing systems

“I’m cautiously optimistic about the future of energy-efficient computing. The ASCR report says, from a technological point of view, there are things we can do,” said Vuduc.

“The report lays out paths for how we might design better apps, hardware systems, and algorithms that will use less energy. This is recognition that we should think about how architectures and software work together to drive down energy usage for systems.”

News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu

Feb. 25, 2026
A graphic showing an AI model in an outstretched hand.

Artificial intelligence (AI) systems power everything from chatbots to security cameras, yet many of the most advanced models operate as “black boxes.” Companies can use them, but outsiders can’t see how they were built, where they came from, or whether they contain hidden flaws.

This lack of transparency creates real risks. A model could contain security vulnerabilities or hidden backdoors. It could also be a lightly modified version of an open-source system — repackaged in violation of its license — with no easy way to prove it.

Researchers at the Georgia Institute of Technology have developed a new framework, ZEN, to help solve this problem. The tool can recover a model’s unique “fingerprint” directly from its memory, allowing experts to trace its origins and reconstruct how it was assembled.

“Analyzing a proprietary AI model without identifying where it came from and how it is constructed is like trying to fix a car engine with the hood welded shut,” said David Oygenblik, a Ph.D. student at Georgia Tech and the study’s lead author.

“ZEN not only X-rays the engine but also provides the complete wiring diagram.”

ZEN works by taking a snapshot of a running AI system and extracting information about both its mathematical structure and the code that defines it. It compares that fingerprint against a database of known open-source models to determine the system’s origin.

If it finds a match, ZEN identifies the exact changes and generates software patches that allow investigators to recreate a working replica of the proprietary model for testing.
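In sketch form, the kind of fingerprint matching the article describes can be illustrated with a toy example. Everything below (the layer-shape hashing, the toy database, and the scoring heuristic) is a hypothetical simplification for illustration, not ZEN's actual technique:

```python
# Hypothetical sketch of fingerprint-based model attribution, loosely
# inspired by the description of ZEN above. Names and the matching
# heuristic are illustrative assumptions.
import hashlib

def fingerprint(model):
    """Hash each layer's name and shape into a per-layer signature."""
    sig = {}
    for name, shape in model.items():
        h = hashlib.sha256(f"{name}:{shape}".encode()).hexdigest()[:12]
        sig[name] = h
    return sig

def attribute(unknown, database):
    """Return the known model sharing the most per-layer signatures."""
    unk = fingerprint(unknown)
    best, best_score = None, 0.0
    for origin, known in database.items():
        ref = fingerprint(known)
        shared = sum(1 for k, v in unk.items() if ref.get(k) == v)
        score = shared / max(len(unk), 1)
        if score > best_score:
            best, best_score = origin, score
    return best, best_score

# Toy database of "open-source" architectures (layer name -> shape).
db = {
    "base-7b":  {"embed": (32000, 4096), "attn.0": (4096, 4096)},
    "tiny-cnn": {"conv1": (16, 3, 3, 3), "fc": (10, 64)},
}
# A suspect model whose layers match one known architecture.
suspect = {"embed": (32000, 4096), "attn.0": (4096, 4096)}
origin, score = attribute(suspect, db)
print(origin, round(score, 2))  # closest known origin and match fraction
```

A real system would compare far richer signals (weight statistics, code structure) rather than layer shapes alone.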

That capability has major implications for both security and intellectual property protection.

“With ZEN, a security analyst can finally test a black-box model for hidden backdoors, and a company can gather concrete evidence to prove its software license was infringed,” Oygenblik said.

To evaluate the system, the research team tested ZEN on 21 state-of-the-art AI models, including Llama 3, YOLOv10, and other well-known systems.

ZEN correctly traced every customized model back to its original open-source foundation — achieving 100% attribution accuracy. Even when models had been heavily modified — differing by more than 83% from their original versions — ZEN successfully identified the changes and enabled full reconstruction for security testing.

The researchers will present their findings at the 2026 Network and Distributed System Security (NDSS) Symposium. The paper, Achieving Zen: Combining Mathematical and Programmatic Deep Learning Model Representations for Attribution and Reuse, was authored by Oygenblik; master’s student Dinko Dermendzhiev; Ph.D. students Filippos Sofias, Mingxuan Yao, Haichuan Xu, and Runze Zhang; postdoctoral scholars Jeman Park and Amit Kumar Sikder; and Associate Professor Brendan Saltaformaggio.

News Contact

John Popham

Communications Officer II, School of Cybersecurity and Privacy

Feb. 24, 2026
A virtual advisor stands in a modern office with large windows overlooking a green landscape. A dialogue box shows the advisor asking for reflections on a project’s progress, with interface buttons for talking and ending the conversation.

Two research teams within the College of Lifetime Learning are piloting new approaches to online education that integrate artificial intelligence and immersive virtual reality with thoughtful instructional design. More than technology experiments, these projects show how the College refines learning innovations before scaling them across programs.

Research Scientists Eunhye Grace Flavin, Abeera Rehmat, and Jeonghyun (Jonna) Lee are developing an AI-assisted course titled Design of Learning Environments. The course is being piloted within the College to gather feedback and data before broader implementation.

“We want to study how AI can meaningfully support learning,” Flavin said, “and how it can deepen engagement and enhance instructional design rather than distract from it.”

Faculty and staff are contributing in two ways: some are enrolling in the course and participating in AI-supported activities and surveys, while others are reviewing instructional models and providing feedback. Insights from both groups will guide refinements before future rollout.

Meanwhile, Research Scientists Meryem Yılmaz Soylu and Jeonghyun (Jonna) Lee, along with Research Associate Eric Sembrat, are piloting an immersive VR module within the Online Master of Science in Analytics (OMSA) program. The module features case-based scenarios with a virtual agent, enabling students to practice leadership and workplace decision-making in realistic environments.

“Technical expertise alone is no longer enough. Our students need opportunities to practice leadership, navigate conflict, and communicate across stakeholders in realistic settings. Virtual reality allows us to create emotionally resonant, high-stakes scenarios in a safe environment where students can experiment, reflect, and grow,” Yılmaz Soylu said.

The VR experience uses branching 360° scenarios in which students’ communication choices and strategic decisions influence virtual stakeholders’ responses in real time. Insights from the pilot will inform refinements to strengthen usability, instructional alignment, and scalability before broader implementation.

“In many ways, we are building the future of online learning. We’re asking what works and what supports learning. It’s incredibly exciting to be part of a college that embraces this sort of thoughtful experimentation. Innovation like this can help us responsibly design courses for the individuals we serve,” Flavin said.

The VR module is being developed in collaboration with Lifetime Learning colleagues in instructional design, media production, and technology, as well as partners across Georgia Tech, including OMSA leadership and faculty collaborators.

Together, these initiatives reflect the College’s approach to innovation: integrating research, technology, and delivery to improve learning systems. By piloting and refining new models before scaling, the College strengthens its capacity to expand access while preserving quality and meaningful outcomes for learners across career stages.

News Contact

Yelena M. Rivera-Vale (she/her(s)/ella)
Communications Program Manager
C21U, College of Lifetime Learning

Feb. 23, 2026
George Stoica

A Georgia Tech Ph.D. candidate is getting a boost to his research into developing more efficient multi-tasking artificial intelligence (AI) models without fine-tuning.

George Stoica is one of 38 Ph.D. students worldwide researching machine learning who were named 2025 Google Ph.D. Fellows.

Stoica is designing AI training methods that bypass fine-tuning, which is the process of adapting a large pre-trained model to perform new tasks. Fine-tuning is one of the most common ways engineers update large-language models like ChatGPT, Gemini, and Claude to add new capabilities. 

If an AI company wants to give a model a new capability, it could create a new model from scratch for that specific purpose. However, if the model already has relevant training and knowledge of the new task, fine-tuning is cheaper.

Stoica argues that fine-tuning still uses large amounts of data, and that other methods can help models learn more effectively and efficiently.

“Full fine-tuning yields strong performance, but it can be costly, and it risks catastrophic forgetting,” Stoica said. “My research asks: Can we extend a model’s capabilities by imbuing it with the expertise of others, without fine-tuning?

“Reducing cost and improving efficiency is more important than ever. We have so many publicly available models that have been trained to solve a variety of tasks. It’s redundant to train a new model from scratch. It’s much more efficient to leverage the information that already exists to get a model up to speed.”

Stoica said the solution is a cost-effective method called model merging. This method combines two or more AI models into a single model, improving performance without fine-tuning.

On a basic level, Stoica said, an example would be combining a model that is efficient at classifying cats with one that classifies dogs well.

“Merging is cheap because you just take the parameters, the weights of your existing models, and combine them,” he said. “You could take the average of the weights to create a new model, but that sometimes doesn’t work. My work has aimed to rearrange the weights so they can communicate easily with each other.”
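The baseline Stoica describes, naive weight averaging, can be sketched in a few lines. The toy models and layer names below are illustrative assumptions; methods like Stoica's first rearrange and align the weights before combining them:

```python
# A minimal sketch of naive model merging by weight averaging, the
# baseline described above. The two "models" are hypothetical dicts of
# NumPy weight arrays; real merging methods first align (permute) the
# weights so that corresponding neurons line up.
import numpy as np

def merge_by_averaging(model_a, model_b):
    """Average corresponding weight tensors from two models."""
    assert model_a.keys() == model_b.keys(), "architectures must match"
    return {k: (model_a[k] + model_b[k]) / 2.0 for k in model_a}

# Two toy single-layer "models" with identical shapes.
cat_model = {"fc.weight": np.array([[1.0, 2.0], [3.0, 4.0]])}
dog_model = {"fc.weight": np.array([[3.0, 2.0], [1.0, 0.0]])}

merged = merge_by_averaging(cat_model, dog_model)
print(merged["fc.weight"])  # element-wise mean of the two weight matrices
```

As the quote notes, this plain average sometimes fails; the alignment step is where the research contribution lies.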

Through his Google fellowship, Stoica seeks to apply model merging to create a cutting-edge vision encoder. A vision encoder converts image or video data into numerical representations that computers can understand. This enables tasks such as image or facial recognition and generative image captioning.

“I want to be at the frontier of the field, and Google is clearly part of that,” Stoica said. “The vision encoder is very large-scale, and Google has the infrastructure to accommodate it.”

Feb. 19, 2026
Harsh Muriki

A new robot could solve one of the biggest challenges facing indoor farmers: manual pollination.

Indoor farms, also known as vertical farms, are popular among agricultural researchers and are expanding across the agricultural industry. Some benefits they have over outdoor farms include:

  • Year-round production of food crops
  • Lower water and land requirements
  • No need for pesticides
  • Reduced carbon emissions from shipping
  • Reduced food waste

Additionally, some studies indicate that indoor farms produce more nutritious food for urban communities. 

However, these farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.

Ai-Ping Hu, a principal research engineer at the Georgia Tech Research Institute (GTRI), has spent years exploring methods to efficiently pollinate flowering plants and food crops in indoor farms.

Hu, Assistant Professor Shreyas Kousik of the George W. Woodruff School of Mechanical Engineering, and a rotating group of student interns have developed a robot prototype that may be up to the task.

The robot can efficiently pollinate plants that have both male and female reproductive parts. These plants only require pollen to be transferred from one part to the other rather than externally from another flower.

Natural pollinators perform this task outdoors, but Hu said indoor farmers often use a paintbrush or electric toothbrush to ensure these flowers are pollinated.

Knowing the Pose

An early challenge the research team addressed was teaching the robot to identify the “pose” of each flower. Pose refers to a flower’s orientation, shape, and symmetry. Knowing these details ensures precise delivery of the pollen to maximize reproductive success. 

“It’s crucial to know exactly which way the flowers are facing,” Hu said.

“You want to approach the flower from the front because that’s where all the biological structures are. Knowing the pose tells you where the stem is. Our device grasps the stem and shakes it to dislodge the pollen.

“Every flower is going to have its own pose, and you need to know what that is within at least 10 degrees.”

Computer Vision Breakthrough

Harsh Muriki, a robotics master’s student in Georgia Tech’s School of Interactive Computing, used computer vision to solve the pose problem while interning with Hu at GTRI.

Muriki attached a camera to a FarmBot to capture images of strawberry plants from dozens of angles in a small garden in front of Georgia Tech’s Food Processing Technology Building. The FarmBot is an XYZ-axis robot that waters and sprays pesticides on outdoor gardens, though it is not capable of pollination.

“We reconstruct the images of the flower into a 3D model and use a technique that converts the 3D model into multiple 2D images with depth information,” Muriki said. “This enables us to send them to object detectors.”

Muriki said he used a real-time object detection system called YOLO (You Only Look Once) to classify objects. YOLO is known for identifying and classifying objects in a single pass.
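One step of the pipeline Muriki describes, rendering a 3D reconstruction into a 2D image with depth information, can be sketched with a simple pinhole-camera projection. The camera parameters and point cloud below are illustrative assumptions, not the team's actual conversion code:

```python
# Hedged sketch: project a 3D point cloud into a 2D depth image using a
# pinhole camera model. Focal length, principal point, and the toy
# points are illustrative assumptions.
import numpy as np

def project_to_depth_image(points, f=50.0, cx=32, cy=32, size=64):
    """Render 3D points (N, 3) as a (size, size) depth image."""
    depth = np.full((size, size), np.inf)
    for x, y, z in points:
        if z <= 0:          # skip points behind the camera
            continue
        u = int(round(f * x / z + cx))   # pinhole projection
        v = int(round(f * y / z + cy))
        if 0 <= u < size and 0 <= v < size:
            depth[v, u] = min(depth[v, u], z)  # keep nearest surface
    return depth

# Three toy points at different depths.
pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 2.0], [0.0, 0.1, 2.0]])
img = project_to_depth_image(pts)
print(img[32, 32])  # depth of the nearest point on the optical axis
```

Images like this, paired with their depth channel, are what a 2D object detector such as YOLO can then consume.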

Ved Sengupta, a computer engineering major who interned with Muriki, fine-tuned the algorithms that converted 3D images into 2D.

“This was a crucial part of making robot pollination possible,” Sengupta said. “There is a big gap between 3D and 2D image processing.

“There’s not a lot of data on the internet for 3D object detection, but there’s a ton for 2D. We were able to get great results from the converted images, and I think any sector of technology can take advantage of that.”

Sengupta, Muriki, and Hu co-authored a paper about their work that was accepted to the 2025 International Conference on Robotics and Automation (ICRA) in Atlanta.

Measuring Success

The pollination robot, built in Kousik’s Safe Robotics Lab, is now in the prototype phase. 

Hu said the robot can do more than pollinate. It can also analyze each flower to determine how well it was pollinated and whether the chances for reproduction are high.

“It has an additional capability of microscopic inspection,” Hu said. “It’s the first device we know of that provides visual feedback on how well a flower was pollinated.”

For more information about the robot, visit the Safe Robotics Lab project page.

News Contact

Nathan Deen
College of Computing
Georgia Tech

Feb. 17, 2026
Professor Yashwanth Nakka in the Aerospace Robotics Lab. (Photo: Cameron Eure)

Traveling to the moon for scientific discovery is expensive. And even once you get there, operating a rover on the moon is nothing like driving on Earth — the uneven terrain, deep shadows, and unpredictable soil make autonomy essential.

So, what do you do if you want to design robots and their controlling algorithms for future moon visits? If you’re Yashwanth Nakka, you bring the moon to you.

Nakka has recreated the moon in a research lab at Georgia Tech, hauling in seven tons of basalt rock to mimic the look and feel of the lunar surface. With dark black walls and a bright light that simulates the sun’s glare, the Aerospace Robotics Lab (ARL) is the only one of its kind in a university setting.

This lab will help Nakka’s team of researchers understand how robotic rovers interact with the environment on the moon — how they perceive the terrain in different sunlight conditions, for example, and how they navigate across a surface that can easily swallow a rover wheel. 

“From a research perspective, many of today’s space mobility solutions still build upon algorithms developed two decades ago. This new lab positions us to pioneer the next generation of autonomous mobility technologies that can overcome unstructured terrain, environmental, and operational challenges. Advancing autonomous systems is critical to enabling deep-space exploration, supporting resource utilization, and empowering scientists to investigate new frontiers such as icy moons that may harbor subsurface oceans,” said Nakka, assistant professor in the Daniel Guggenheim School of Aerospace Engineering.

Unlike the moon’s ultra-fine, clingy regolith that can coat equipment and cause severe wear and damage, Nakka’s lab uses carefully selected, gem-sized basalt rocks. This material allows researchers to realistically study how robots interact with granular terrain while avoiding the need for extensive protective equipment, making experimentation safer, more efficient, and easier to conduct. When robots are driving on the surface, they experience the same shifts and movements they would in the moondust.

Algorithms that Help Rovers Think and Decide on Their Wheels

The lab uses specialized lights that mimic the sun because lighting conditions can significantly impact rover operations. A typical rover relies on cameras to identify objects — such as determining whether something is a rock and whether the rover should drive around or over it. 

The rover also must assess slopes and evaluate whether the terrain is stable enough to traverse. These decisions are usually made with a human in the loop; Nakka is developing control systems that would allow the rovers to operate without that human intervention.

“Lighting conditions make this process challenging,” Nakka said. “For instance, direct sunlight on the camera can distort what the rover sees. One of the greatest obstacles is developing algorithms that remain robust and reliable despite these varying environmental factors.”

The team’s algorithms will empower vehicles to independently assess their surroundings, identify safe paths, and select scientifically intriguing targets, all on their own. They also will allow the rovers to work together to explore or achieve other objectives.

"Developing effective algorithms requires more than simply studying a standard vehicle and attempting to adapt autonomy solutions from there. That approach limits performance, particularly when driving at high speeds,” Nakka said. “To achieve truly dynamic and responsive autonomous control, our algorithms must understand how the vehicle interacts with the terrain, control for uncertainty, and incorporate that surface-to-wheel contact information in real time.”

Next-Gen Robots for the Moon’s Hidden Extremes

Alongside control algorithms, Nakka and his team are crafting new robots capable of exploring harsh moon terrain and accessing challenging environments, such as lunar vents and caves. These shape-changing robots, inspired by Nakka’s previous work at NASA’s Jet Propulsion Laboratory (JPL), will cover territory that conventional rovers simply can’t reach.

"We aim to integrate robot design with algorithm development to create systems that are adaptive and capable of changing shape. For example, a rover that can crawl, lift a leg to clear debris when stuck, and continue moving—demonstrating the importance of built-in adaptability."

Nakka’s long-term vision for autonomy is to develop a rover capable of understanding both its environmental context and its own internal state. This includes recognizing available resources as well as interpreting external conditions. Achieving this level of autonomous self and environmental awareness is expected to take approximately a decade. 

Ultimately, the work being done in the ARL will shape the next decade of space robotic exploration, making it possible for rovers to go farther, think faster, and survive in places no human or robot has ever gone. 


 

News Contact

Monique Waddell

Feb. 12, 2026
DOE ECRP Qi Tang

The future of clean energy depends on algorithms as much as it does atoms.

Georgia Tech’s Qi Tang is building machine learning (ML) models to accelerate nuclear fusion research, making it more affordable and more accurate. Backed by a grant from the U.S. Department of Energy (DOE), Tang’s work brings clean, sustainable energy closer to reality.

Tang has received an Early Career Research Program (ECRP) award from the DOE Office of Science. The grant supports Tang with $875,000 disbursed over five years to craft ML and data processing tools that help scientists analyze massive datasets from nuclear experiments and simulations.

Tang is the first faculty member from Georgia Tech’s College of Computing and School of Computational Science and Engineering (CSE) to receive the ECRP. He is the seventh Georgia Tech researcher to earn the award and the only GT awardee among this year’s 99 recipients.

More than a milestone, the award reflects a shift in how nuclear research is done. Today, progress depends on computing and data science as much as on physics and engineering.

“I am honored and excited to receive the ECRP award through DOE’s Advanced Scientific Computing Research program, an organization I care about deeply,” said Tang, an assistant professor in the School of CSE. 

“I am grateful to my former colleagues at Los Alamos National Laboratory and collaborators at other national laboratories, including Lawrence Livermore, Sandia, and Argonne. I am also thankful for my Ph.D. students at Georgia Tech, whose dedication and creativity make this award possible.”

[Related: New Faculty Applies High-Performance Computing, Scientific Machine Learning Interests to Studies in Plasma Physics]

A problem in nuclear research is that fusion simulations are challenging to understand and use. These simulations generate enormous datasets that are too large to store, move, and analyze efficiently.

In his ECRP proposal to DOE, Tang introduced new ML methods to improve the analysis and storage of particle data.

Tang’s approach balances shrinking data so it is easier to store and transfer while preserving the most important scientific features. His multiscale ML models are informed by physics, so the reduced data still reflects how fusion systems really behave.

With Tang’s research, scientists can run larger, more realistic fusion models and analyze results more quickly. This accelerates progress toward practical fusion energy.

“In contrast to generic black-box-type compression tools, we aim at preserving the intrinsic structures of the particle dataset during the data reduction processes,” Tang said. 

“Taking this approach, we can meet our goal of achieving high-fidelity preservation of critical physics with minimum loss of information.”
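As a generic illustration of the trade-off Tang describes, a truncated SVD shrinks a dataset while keeping its dominant structure. This is a standard stand-in for data reduction, not Tang's physics-informed method:

```python
# Hedged sketch of structure-preserving data reduction via truncated
# SVD, a generic stand-in for the kind of reduction discussed above.
# Idea: keep only the dominant modes of a large simulation snapshot so
# the reduced data still captures its main structure.
import numpy as np

rng = np.random.default_rng(0)
# Toy "simulation snapshot": a low-rank field plus small noise.
modes = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 100))
snapshot = modes + 0.01 * rng.standard_normal((200, 100))

U, s, Vt = np.linalg.svd(snapshot, full_matrices=False)
k = 3                                   # number of modes to keep
reduced = (U[:, :k], s[:k], Vt[:k])     # ~(200 + 100 + 1) * k numbers
approx = U[:, :k] * s[:k] @ Vt[:k]      # reconstruct from kept modes

rel_err = np.linalg.norm(snapshot - approx) / np.linalg.norm(snapshot)
print(rel_err < 0.05)  # dominant structure survives heavy reduction
```

A physics-informed reducer would additionally constrain which modes are kept so that quantities the science depends on are preserved exactly.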

Computing is essential in modern research because of the amount of data produced and captured from experiments and simulations. In the era of exascale supercomputers, data movement is a greater bottleneck than actual computation.

DOE operates three of the world’s four exascale supercomputers. These machines can calculate one quintillion (a billion billion) operations per second.

The exascale era began in 2022 with the launch of Frontier at Oak Ridge National Laboratory. Aurora followed in 2023 at Argonne National Laboratory. El Capitan arrived in 2024 at Lawrence Livermore National Laboratory.

With Tang’s data reduction approaches, all of DOE’s supercomputers spend more time on science and less time waiting for data transfers.

“Qi’s work in computational plasma physics and nuclear fusion modeling has been groundbreaking,” said Haesun Park, Regents’ Professor and Chair of the School of CSE. 

“We are proud of Qi and what this award means for him, Georgia Tech, and the Department of Energy toward leveraging computation to solve challenges in science and engineering, such as sustainable energy."

 

Previous Georgia Tech recipients of DOE Early Career Research Program awards include:

  • Itamar Kimchi, assistant professor, School of Physics
  • Sourabh Saha, assistant professor, George W. Woodruff School of Mechanical Engineering
  • Wenjing Liao, associate professor, School of Mathematics
  • Ryan Lively, Thomas C. DeLoach Professor, School of Chemical & Biomolecular Engineering
  • Josh Kacher, associate professor, School of Materials Science and Engineering
  • Devesh Ranjan, Eugene C. Gwaltney Jr. School Chair and professor, Woodruff School of Mechanical Engineering

News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu

Feb. 02, 2026
Top executives from Atlanta's venture capital community participated in the College of Computing's first VC summit, held on Jan. 21. Photo by Terence Rushin/GT Computing

The College of Computing is forging new relationships with Atlanta’s venture capital community to advance entrepreneurial opportunities for students.

Nearly two dozen venture capital (VC) leaders based in Atlanta and the Southeast participated in a half-day summit at the College on Jan. 21.

Co-hosts Dean of Computing Vivek Sarkar and Noro-Moseley Partners General Partner Alan Taetle organized the invitation-only summit. Their goals were to:

  • Showcase the College’s research strengths and entrepreneurial culture
  • Deepen connections between academic innovation and startups
  • Explore opportunities for collaboration, commercialization, and startup growth

The summit’s guest list included founders, partners, and leaders from VC firms. Many of these firms focus on early-stage startups in SaaS, fintech, cybersecurity, and other emerging technology markets.

Research With Commercial Impact

Sarkar outlined the College of Computing’s academic mission and research priorities during his opening remarks. He emphasized the College’s role in advancing innovation in cybersecurity, artificial intelligence (AI), and other emerging research areas.

“One of the College’s strategic pillars is what I call ‘X to the power of Computing’,” Sarkar said. “Look at any discipline or industry X to see where they're innovating and where their advances are being made, and that’s where Computing meets that discipline.”

Along with remarks from the dean, the summit featured presentations highlighting Georgia Tech’s entrepreneurial ecosystem and College-led research initiatives with strong commercialization potential.

Expanding Support for Student Founders

Jen Whitlow leads Community Partnerships at Fusen, a global platform for student founders created by Atlanta philanthropist Christopher W. Klaus. She described Klaus’s support for student entrepreneurship, including GT Computing’s annual Klaus Startup Challenge. In 2025, Klaus awarded five winning teams $150,000 each to cover startup costs.

Whitlow also updated guests on Klaus’s commitment, announced in May 2025, to covering the incorporation costs for any graduating student who aspires to launch a startup.

“More than 600 graduates from last year’s Spring and Fall Commencements have accepted the gift, and more than 225 recent graduates have completed their incorporation to date,” Whitlow said. She added that a second cohort of Fall 2025 graduates is being processed over the next few weeks.

Offering an enterprise-level view, Rahul Saxena of CREATE-X presented recent updates on commercialization at Georgia Tech and efforts to streamline entrepreneurial processes.

Saxena emphasized the launch of Velocity Startups, an accelerator that provides the resources and infrastructure student startups need to bring their innovations to market.

Building the Pipeline From Research to Startup

Following these updates, GT Computing faculty delivered lightning-round presentations highlighting the College’s research strengths in AI, cybersecurity, and high-performance computing.

“The tighter the local investing community is with Georgia Tech, the better off both are,” said Taetle, who has been a member of the College’s Advisory Board for more than 20 years.

“It’s critical in this super-competitive world that we do everything that we can to support this fantastic university.”

Taetle added that the summit was part of a broader effort to strengthen the College’s entrepreneurial pipeline.

“There are some really big ideas here, which could turn into really big companies,” he said. “We’ve made some great strides on the commercialization front, but we still have that opportunity and challenge in front of us.”

The afternoon concluded with a discussion of next steps and engagement opportunities, led by Sarkar and Jason Zwang, GT Computing’s senior director of development. The discussion focused on research partnership opportunities, startup formation, and student involvement.

Zwang emphasized the importance of investing in Atlanta’s innovation ecosystem, citing the city’s strong fundamentals and pro-growth climate for entrepreneurship.

“This gives us a unique opportunity to start working more closely with the local VC community, and it’s also great for our students,” Zwang said.

Sarkar agreed, saying, “There’s no downside for students to get involved in a startup. It might take off and be a bonanza. If not, the experience makes you a more competitive hire because of the breadth of experience you gain at a startup.”

To foster these opportunities for students, Zwang said that a key priority is to establish earlier, more intentional connections among students, startups, and investors.

“This is a pivotal moment,” he said. “We can determine how to connect students with the VC and startup community earlier and ensure these investors remain involved with the College.”

College leaders said the summit underscored Computing’s commitment to fostering an entrepreneurial culture and to building lasting relationships that can help accelerate the real-world impact of its research beyond the Institute.

“Georgia Tech is a force multiplier for entrepreneurship,” said Sarkar. “We’re here to change the world. We want to inspire a culture of bold, big entrepreneurial thinking, and look forward to the next steps that will follow this VC summit.”

News Contact

Ben Snedeker, Senior Communications Manager

Georgia Tech College of Computing

Jan. 29, 2026
CSE in 2026

While not as highlight-reel worthy as the Winter Olympics and the World Cup, experts expect high-performance computing (HPC) to have an even bigger impact on daily life in 2026.

Georgia Tech researchers say HPC and artificial intelligence (AI) advances this year are poised to improve how people power their homes, design safer buildings, and travel through cities.

According to Qi Tang, scientists will take progressive steps toward cleaner, sustainable energy through nuclear fusion in 2026. 

“I am very hopeful about the role of advanced computing and AI in making fusion a clean energy source,” said Tang, an assistant professor in the School of Computational Science and Engineering (CSE).

“Fusion systems involve many interconnected processes happening across different scales. Modern simulations, combined with data-driven methods, allow us to bring these pieces together into a unified picture.”

Tang’s research connects HPC and machine learning with fusion energy and plasma physics. This year, Tang is continuing work on large-scale nuclear fusion models.

Only a few experimental fusion reactors exist worldwide compared to more than 400 nuclear fission reactors. Tang’s work supports a broader effort to turn fusion from a promising idea into a practical energy source.

Nuclear fusion occurs in plasma, the fourth state of matter, where gas is heated to millions of degrees. In this extreme state, electrons are stripped from atoms, creating a hot soup of fast-moving ions and free electrons. In plasma, hydrogen atoms overcome their natural electrical repulsion, collide, and fuse together. This releases energy that can power cities and homes.

Computers interpret extreme temperatures, densities, pressures, and plasma particle motion as massive datasets. Tang works to assimilate these data types from computer models and real-world experiments.

To do this, he and other researchers rely on machine learning approaches to analyze data across models and experiments more quickly and to produce more accurate predictions. Over time, this will allow scientists to test and improve fusion reactor designs toward commercial use. 
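The core idea behind assimilating model output and experimental data can be illustrated with a minimal, precision-weighted update. The sketch below is purely illustrative (the function, variable names, and numbers are hypothetical, not drawn from Tang's models): it blends a simulated forecast with a measurement, trusting whichever source carries less uncertainty.

```python
# Minimal data-assimilation sketch (illustrative only): combine a model
# forecast with an experimental measurement, weighted by uncertainty.

def assimilate(forecast, forecast_var, measurement, measurement_var):
    """Precision-weighted average: trust the less-uncertain source more."""
    gain = forecast_var / (forecast_var + measurement_var)
    estimate = forecast + gain * (measurement - forecast)
    variance = (1 - gain) * forecast_var
    return estimate, variance

# A plasma-temperature forecast of 100 (arbitrary units) with high
# uncertainty, corrected toward a more precise measurement of 110:
est, var = assimilate(100.0, 4.0, 110.0, 1.0)
print(round(est, 1))  # 108.0 -- pulled strongly toward the measurement
```

Real assimilation schemes operate on enormous state vectors rather than a single scalar, but the weighting principle is the same.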

Beyond energy and nuclear engineering, Umar Khayaz sees broader impacts for HPC in 2026.

“HPC is the need of the day in every field of engineering sciences, physics, biology, and economics,” said Khayaz, a CSE Ph.D. student in the School of Civil and Environmental Engineering.

“HPC is important enough to say that we need to employ resources to also solve social problems.”

Khayaz studies dynamic fracture and phase-field modeling. These areas explore how materials break under sudden, rapid loads. 

Like nuclear fusion, Khayaz says dynamic fracture problems are complex and data-intensive. In 2026, he expects to see more computing resources and computational capabilities devoted to understanding these problems and other emerging civil engineering challenges.

CSE Ph.D. student Yiqiao (Ahren) Jin sees a similar relationship between infrastructure and self-driving vehicles. He believes AI will drive innovation in this area in 2026.

At Georgia Tech, Jin develops efficient multimodal AI systems. An autonomous vehicle is a multimodal system that uses camera video, laser sensors, language instructions, and other inputs to navigate city streets under changing scenarios like traffic and weather patterns.
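The kind of multimodal fusion described above can be sketched in a few lines. This toy Python example is hypothetical (the class, thresholds, and decision rules are invented for illustration, not taken from Jin's systems): it combines risk estimates from two sensors with a language instruction, acting conservatively when the modalities disagree.

```python
# Illustrative multimodal-fusion sketch (hypothetical, not Jin's system):
# combine several input modalities into a single driving decision.

from dataclasses import dataclass

@dataclass
class Observation:
    camera_risk: float   # 0..1 risk score from video perception
    lidar_risk: float    # 0..1 risk score from laser sensing
    instruction: str     # e.g., a natural-language command

def fuse(obs: Observation) -> str:
    """Combine modalities; sensor disagreement raises caution."""
    risk = max(obs.camera_risk, obs.lidar_risk)
    disagreement = abs(obs.camera_risk - obs.lidar_risk)
    if risk > 0.7 or disagreement > 0.4:
        return "slow_down"   # uncertain or risky: act conservatively
    if "stop" in obs.instruction.lower():
        return "stop"
    return "proceed"

# Camera sees high risk, lidar does not -- the system slows down rather
# than trusting either modality outright:
print(fuse(Observation(0.9, 0.2, "continue")))  # slow_down
```

The conservative branch on disagreement reflects the "reason despite uncertainty" behavior the article describes, in miniature.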

Jin says multimodal research will move beyond performance benchmarks this year. This shift will lead to computer systems that can reason despite uncertainty and explain their decisions. As a result, engineers will redefine how they evaluate and deploy autonomous systems in safety-critical settings.

“Many foundational problems in perception, multimodal reasoning, and agent coordination are being actively addressed in 2026. These advances enable a transition from isolated autonomous systems to safer, coordinated autonomous vehicle fleets,” Jin said. 

“As these systems scale, they have the potential to fundamentally improve transportation safety and efficiency.”

News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu

Jan. 27, 2026
A car's side view mirror with an alert in the center of the mirror.

A newly discovered vulnerability could allow cybercriminals to silently hijack the artificial intelligence (AI) systems in self-driving cars, raising concerns about the security of autonomous systems increasingly used on public roads.

Georgia Tech cybersecurity researchers discovered the vulnerability, dubbed VillainNet, and found it can remain dormant in a self-driving vehicle’s AI system until triggered by specific conditions.

Once triggered, VillainNet is almost certain to succeed, giving attackers control of the targeted vehicle.

The research finds that attackers could program almost any action within a self-driving vehicle’s AI super network to trigger VillainNet. In one possible scenario, it could be triggered when a self-driving taxi’s AI responds to rainfall and changing road conditions.

Once in control, hackers could hold the passengers hostage and threaten to crash the taxi.

The researchers discovered this new backdoor attack threat in the AI super networks that power autonomous driving systems. 

“Super networks are designed to be the Swiss Army knife of AI, swapping out tools, or in this case subnetworks, as needed for the task at hand," said David Oygenblik, Ph.D. student at Georgia Tech and the lead researcher on the project.

"However, we found that an adversary can exploit this by attacking just one of those tiny tools. The attack remains completely dormant until that specific subnetwork is used, effectively hiding across billions of other benign configurations." 
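The dormant-trigger mechanism Oygenblik describes can be sketched with a toy example. The Python below is entirely hypothetical (the names, structure, and scale are invented for illustration, not the paper's code): a "super network" selects one subnetwork per configuration, and a single poisoned subnetwork behaves adversarially only when it happens to be the one chosen.

```python
# Toy sketch (hypothetical, not the researchers' code): a super network
# that activates one subnetwork per configuration. A poisoned subnetwork
# stays dormant until its specific configuration is selected.

def make_subnet(scale):
    """A stand-in benign subnetwork: here, just a scaling function."""
    return lambda x: scale * x

# A real super network holds billions of configurations; eight here.
SUBNETS = {config: make_subnet(1.0) for config in range(8)}

POISONED_CONFIG = 5  # the single compromised subnetwork

def poisoned_subnet(x):
    # Backdoor behavior: adversarial output instead of the benign 1.0 * x.
    return -x

SUBNETS[POISONED_CONFIG] = poisoned_subnet

def supernet_infer(config, x):
    """Select the subnetwork for the current task and run it."""
    return SUBNETS[config](x)

# Every benign configuration behaves normally, so testing them reveals
# nothing; only selecting the trigger configuration exposes the attack.
print(supernet_infer(0, 3.0))                 # 3.0
print(supernet_infer(POISONED_CONFIG, 3.0))   # -3.0
```

With eight configurations the poisoned one is easy to find by exhaustive testing; at the scale the researchers describe, that search becomes intractable.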

This backdoor attack is nearly guaranteed to work, according to Oygenblik. This blind spot is nearly undetectable with current tools and can impact any autonomous vehicle that runs on AI. It can also be hidden at any stage of development and include billions of scenarios.

“With VillainNet, the attacker forces defenders to find a single needle in a haystack that can be as large as 10 quintillion straws," said Oygenblik. 

"Our work is a call to action for the security community. As AI systems become more complex and adaptive, we must develop new defenses capable of addressing these novel, hyper-targeted threats." 

A hypothetical fix would be to add security measures across the super networks themselves. These networks contain billions of specialized subnetworks that can be activated on the fly, so Oygenblik wanted to see what would happen if he attacked just a single subnetwork tool.

In experiments, the VillainNet attack proved highly effective. It achieved a 99% success rate when activated while remaining undetected within the AI system.

The research also shows that detecting a VillainNet backdoor would require 66 times more computing power and time to verify that an AI system is safe. According to the researchers, this dramatically expands the search space for attack detection and makes verification infeasible in practice.

The project was presented at the ACM Conference on Computer and Communications Security (CCS) in October 2025. The paper, VillainNet: Targeted Poisoning Attacks Against SuperNets Along the Accuracy-Latency Pareto Frontier, was co-authored by Oygenblik, master's students Abhinav Vemulapalli and Animesh Agrawal, Ph.D. student Debopam Sanyal, Associate Professor Alexey Tumanov, and Associate Professor Brendan Saltaformaggio.

News Contact

John Popham
Communications Officer II 
School of Cybersecurity and Privacy

 
