Feb. 12, 2026
The future of clean energy depends on algorithms as much as it does atoms.
Georgia Tech’s Qi Tang is building machine learning (ML) models to accelerate nuclear fusion research, making it more affordable and more accurate. Backed by a grant from the U.S. Department of Energy (DOE), Tang’s work brings clean, sustainable energy closer to reality.
Tang has received an Early Career Research Program (ECRP) award from the DOE Office of Science. The grant supports Tang with $875,000 disbursed over five years to craft ML and data processing tools that help scientists analyze massive datasets from nuclear experiments and simulations.
Tang is the first faculty member from Georgia Tech’s College of Computing and School of Computational Science and Engineering (CSE) to receive the ECRP. He is the seventh Georgia Tech researcher to earn the award and the only GT awardee among this year’s 99 recipients.
More than a milestone, the award reflects a shift in how nuclear research is done. Today, progress depends on computing and data science as much as on physics and engineering.
“I am honored and excited to receive the ECRP award through DOE’s Advanced Scientific Computing Research program, an organization I care about deeply,” said Tang, an assistant professor in the School of CSE.
“I am also thankful for my Ph.D. students at Georgia Tech, whose dedication and creativity make this award possible.”
A persistent problem in nuclear research is that fusion simulations are difficult to work with: they generate enormous datasets that are too large to store, move, and analyze efficiently.
In his ECRP proposal to DOE, Tang introduced new ML methods to improve the analysis and storage of particle data.
Tang’s approach balances shrinking data for easier storage and transfer against preserving its most important scientific features. His multiscale ML models are informed by physics, so the reduced data still reflects how fusion systems really behave.
With Tang’s research, scientists can run larger, more realistic fusion models and analyze results more quickly. This accelerates progress toward practical fusion energy.
“In contrast to generic black-box-type compression tools, we aim at preserving the intrinsic structures of the particle dataset during the data reduction processes,” Tang said.
“Taking this approach, we can meet our goal of achieving high-fidelity preservation of critical physics with minimum loss of information.”
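The general idea behind structure-preserving reduction can be sketched in a few lines. The example below is illustrative only, not Tang’s method: it crudely subsamples a hypothetical particle dataset and then checks how well low-order physical moments (mass, momentum, and kinetic energy) survive the reduction.

```python
# Toy illustration (not Tang's method): reduce a particle dataset while
# checking that low-order physical moments -- mass, momentum, kinetic
# energy -- are approximately preserved. All names and numbers are
# hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical particle data: weights and 3-D velocities for 1e6 particles.
n_full = 1_000_000
weights = np.full(n_full, 1.0)                        # particle weights (mass)
velocities = rng.normal(0.0, 1.0, size=(n_full, 3))   # thermal velocities

def moments(w, v):
    """Mass, momentum, and kinetic energy of a weighted particle set."""
    mass = w.sum()
    momentum = (w[:, None] * v).sum(axis=0)
    energy = 0.5 * (w * (v ** 2).sum(axis=1)).sum()
    return mass, momentum, energy

# Crude reduction: keep a random 1% subset and rescale weights so the
# total mass is conserved exactly; momentum and energy are then checked.
keep = rng.choice(n_full, size=n_full // 100, replace=False)
w_red = weights[keep] * (weights.sum() / weights[keep].sum())
v_red = velocities[keep]

m0, p0, e0 = moments(weights, velocities)
m1, p1, e1 = moments(w_red, v_red)
print(f"mass error:     {abs(m1 - m0) / m0:.2e}")
print(f"momentum error: {np.linalg.norm(p1 - p0) / (np.linalg.norm(p0) + 1e-12):.2e}")
print(f"energy error:   {abs(e1 - e0) / e0:.2e}")
```

In practice, the crude random subsample would be replaced by physics-informed ML models trained to keep these kinds of moment errors small while achieving far higher compression.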
Computing is essential in modern research because of the amount of data produced and captured from experiments and simulations. In the era of exascale supercomputers, data movement is a greater bottleneck than actual computation.
DOE operates three of the world’s four exascale supercomputers. These machines can perform one quintillion (a billion billion) operations per second.
The exascale era began in 2022 with the launch of Frontier at Oak Ridge National Laboratory. Aurora followed in 2023 at Argonne National Laboratory. El Capitan arrived in 2024 at Lawrence Livermore National Laboratory.
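A rough back-of-the-envelope comparison shows why data movement, rather than arithmetic, dominates at this scale. All of the numbers below are illustrative assumptions, not the specifications of any particular DOE machine.

```python
# Rough illustration of why data movement, not arithmetic, becomes the
# bottleneck at exascale. All numbers below are illustrative assumptions.
flops_per_second = 1e18          # exascale: one quintillion operations/s
snapshot_bytes = 2e15            # hypothetical 2 PB particle snapshot
filesystem_bandwidth = 5e12      # hypothetical 5 TB/s parallel file system

write_time_s = snapshot_bytes / filesystem_bandwidth
ops_forgone = write_time_s * flops_per_second
print(f"writing one snapshot takes ~{write_time_s:.0f} s")
print(f"compute capacity idled during that write: ~{ops_forgone:.1e} operations")
```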
With Tang’s data reduction approaches, DOE’s supercomputers could spend more time on science and less time waiting for data transfers.
“This award reflects a team effort that wouldn’t be possible without partnership and support,” Tang said.
“I am grateful to my former colleagues at Los Alamos National Laboratory and collaborators at other national laboratories, including Lawrence Livermore, Sandia, and Argonne.”
Previous Georgia Tech recipients of DOE Early Career Research Program awards include:
Itamar Kimchi, assistant professor, School of Physics
Sourabh Saha, assistant professor, George W. Woodruff School of Mechanical Engineering
Wenjing Liao, associate professor, School of Mathematics
Ryan Lively, Thomas C. DeLoach Professor, School of Chemical & Biomolecular Engineering
Josh Kacher, associate professor, School of Materials Science and Engineering
Devesh Ranjan, Eugene C. Gwaltney Jr. School Chair and professor, Woodruff School of Mechanical Engineering
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Feb. 02, 2026
The College of Computing is forging new relationships with Atlanta’s venture capital community to advance entrepreneurial opportunities for students.
Nearly two dozen venture capital (VC) leaders based in Atlanta and the Southeast participated in a half-day summit at the College on Jan. 21.
Co-hosts Dean of Computing Vivek Sarkar and Noro-Moseley Partners General Partner Alan Taetle organized the invitation-only summit. Their goals were to:
- Showcase the College’s research strengths and entrepreneurial culture
- Deepen connections between academic innovation and startups
- Explore opportunities for collaboration, commercialization, and startup growth
The summit’s guest list included founders, partners, and leaders from VC firms. Many of these firms focus on early-stage startups in SaaS, fintech, cybersecurity, and other emerging technology markets.
Research with Commercial Impact
Sarkar outlined the College of Computing’s academic mission and research priorities during his opening remarks. He emphasized the College’s role in advancing innovation in cybersecurity, artificial intelligence (AI), and other emerging research areas.
“One of the College’s strategic pillars is what I call ‘X to the power of Computing’,” Sarkar said. “Look at any discipline or industry X to see where they're innovating and where their advances are being made, and that’s where Computing meets that discipline.”
Along with remarks from the dean, the summit featured presentations highlighting Georgia Tech’s entrepreneurial ecosystem and College-led research initiatives with strong commercialization potential.
Expanding Support for Student Founders
Jen Whitlow leads Community Partnerships at Fusen, a global platform for student founders created by Atlanta philanthropist Christopher W. Klaus. She described Klaus’s support for student entrepreneurship, including GT Computing’s annual Klaus Startup Challenge. In 2025, Klaus awarded five winning teams $150,000 each to cover startup costs.
Whitlow also updated guests on Klaus’s commitment, announced in May 2025, to covering the incorporation costs for any graduating student who aspires to launch a startup.
“More than 600 graduates from last year’s Spring and Fall Commencements have accepted the gift, and more than 225 recent graduates have completed their incorporation to date,” Whitlow said. She added that a second cohort of Fall 2025 graduates is being processed over the next few weeks.
Offering an enterprise-level view, Rahul Saxena of CREATE-X presented recent updates to commercialization at Georgia Tech and efforts to streamline entrepreneurial processes.
Saxena emphasized the launch of Velocity Startups, an accelerator that provides the resources and infrastructure student startups need to bring their innovations to market.
Building the Pipeline from Research to Startup
Following these updates, GT Computing faculty delivered lightning-round presentations highlighting the College’s research strengths in AI, cybersecurity, and high-performance computing.
“The tighter the local investing community is with Georgia Tech, the better off both are,” said Taetle, who has been a member of the College’s Advisory Board for more than 20 years.
“It’s critical in this super-competitive world that we do everything that we can to support this fantastic university.”
Taetle added that the summit was part of a broader effort to strengthen the College’s entrepreneurial pipeline.
“There are some really big ideas here, which could turn into really big companies,” he said. “We’ve made some great strides on the commercialization front, but we still have that opportunity and challenge in front of us.”
The afternoon concluded with a discussion of next steps and engagement opportunities, led by Sarkar and Jason Zwang, GT Computing’s senior director of development. The discussion focused on research partnership opportunities, startup formation, and student involvement.
Zwang emphasized the importance of investing in Atlanta’s innovation ecosystem, citing the city’s strong fundamentals and pro-growth climate for entrepreneurship.
“This gives us a unique opportunity to start working more closely with the local VC community, and it’s also great for our students,” Zwang said.
Sarkar agreed, saying, “There’s no downside for students to get involved in a startup. It might take off and be a bonanza. If not, the experience makes you a more competitive hire because of the breadth of experience you gain at a startup.”
To foster these opportunities for students, Zwang said that a key priority is to establish earlier, more intentional connections among students, startups, and investors.
“This is a pivotal moment,” he said. “We can determine how to connect students with the VC and startup community earlier and ensure these investors remain involved with the College.”
College leaders said the summit underscored Computing’s commitment to fostering an entrepreneurial culture and to building lasting relationships that can help accelerate the real-world impact of its research beyond the Institute.
“Georgia Tech is a force multiplier for entrepreneurship,” said Sarkar. “We’re here to change the world. We want to inspire a culture of bold, big entrepreneurial thinking, and look forward to the next steps that will follow this VC summit.”
News Contact
Ben Snedeker, Senior Communications Manager
Georgia Tech College of Computing
albert.snedeker@cc.gatech.edu
Jan. 29, 2026
High-performance computing (HPC) may not be as highlight-reel worthy as the Winter Olympics or the World Cup, but experts expect it to have an even bigger impact on daily life in 2026.
Georgia Tech researchers say HPC and artificial intelligence (AI) advances this year are poised to improve how people power their homes, design safer buildings, and travel through cities.
According to Qi Tang, scientists will make steady progress toward cleaner, sustainable energy through nuclear fusion in 2026.
“I am very hopeful about the role of advanced computing and AI in making fusion a clean energy source,” said Tang, an assistant professor in the School of Computational Science and Engineering (CSE).
“Fusion systems involve many interconnected processes happening across different scales. Modern simulations, combined with data-driven methods, allow us to bring these pieces together into a unified picture.”
Tang’s research connects HPC and machine learning with fusion energy and plasma physics. This year, Tang is continuing work on large-scale nuclear fusion models.
Only a few experimental fusion reactors exist worldwide compared to more than 400 nuclear fission reactors. Tang’s work supports a broader effort to turn fusion from a promising idea into a practical energy source.
Nuclear fusion occurs in plasma, the fourth state of matter, where gas is heated to millions of degrees. In this extreme state, electrons are stripped from atoms, creating a hot soup of fast-moving ions and free electrons. In plasma, hydrogen atoms overcome their natural electrical repulsion, collide, and fuse together. This releases energy that can power cities and homes.
To computers, extreme temperatures, densities, pressures, and plasma particle motion register as massive datasets. Tang works to assimilate these data from computer models and real-world experiments.
To do this, he and other researchers rely on machine learning approaches to analyze data across models and experiments more quickly and to produce more accurate predictions. Over time, this will allow scientists to test and improve fusion reactor designs toward commercial use.
Beyond energy and nuclear engineering, Umar Khayaz sees broader impacts for HPC in 2026.
“HPC is the need of the day in every field of engineering sciences, physics, biology, and economics,” said Khayaz, a CSE Ph.D. student based in the School of Civil and Environmental Engineering.
“HPC is important enough to say that we need to employ resources to also solve social problems.”
Khayaz studies dynamic fracture and phase-field modeling. These areas explore how materials break under sudden, rapid loads.
Like nuclear fusion, Khayaz says dynamic fracture problems are complex and data-intensive. In 2026, he expects to see more computing resources and computational capabilities devoted to understanding these problems and other emerging civil engineering challenges.
CSE Ph.D. student Yiqiao (Ahren) Jin sees a similar relationship between infrastructure and self-driving vehicles. He believes AI will drive innovation in this area in 2026.
At Georgia Tech, Jin develops efficient multimodal AI systems. An autonomous vehicle is a multimodal system that uses camera video, laser sensors, language instructions, and other inputs to navigate city streets under changing scenarios like traffic and weather patterns.
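At a high level, a multimodal system encodes each input stream into a shared representation before making decisions. The sketch below is a generic illustration of that pattern, not Jin’s models: the encoders are random stand-ins and the embedding size is an arbitrary choice.

```python
# Toy multimodal fusion (illustrative only): camera, lidar, and a text
# instruction are each encoded to a fixed-size vector, then fused into
# one representation that a downstream planner could consume.
import numpy as np

rng = np.random.default_rng(1)
D = 64  # shared embedding size (hypothetical)

def encode_camera(frame):          # frame: HxWx3 image array
    return rng.standard_normal(D)  # stand-in for a vision encoder

def encode_lidar(points):          # points: Nx3 point cloud
    return rng.standard_normal(D)  # stand-in for a point-cloud encoder

def encode_text(instruction):      # instruction: natural-language string
    return rng.standard_normal(D)  # stand-in for a language encoder

def fuse(embeddings):
    """Simple late fusion: average the modality embeddings."""
    return np.mean(np.stack(embeddings), axis=0)

frame = np.zeros((480, 640, 3))
points = np.zeros((10_000, 3))
state = fuse([encode_camera(frame),
              encode_lidar(points),
              encode_text("yield to pedestrians at the crosswalk")])
print(state.shape)  # (64,) -- one fused vector per time step
```

Real systems replace the stand-in encoders with trained vision, point-cloud, and language models, and often weight each modality with attention rather than a simple average.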
Jin says multimodal research will move beyond performance benchmarks this year. This shift will lead to computer systems that can reason despite uncertainty and explain their decisions. As a result, engineers will redefine how they evaluate and deploy autonomous systems in safety-critical settings.
“Many foundational problems in perception, multimodal reasoning, and agent coordination are being actively addressed in 2026. These advances enable a transition from isolated autonomous systems to safer, coordinated autonomous vehicle fleets,” Jin said.
“As these systems scale, they have the potential to fundamentally improve transportation safety and efficiency.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Jan. 27, 2026
A newly discovered vulnerability could allow cybercriminals to silently hijack the artificial intelligence (AI) systems in self-driving cars, raising concerns about the security of autonomous systems increasingly used on public roads.
Georgia Tech cybersecurity researchers discovered the vulnerability, dubbed VillainNet, and found it can remain dormant in a self-driving vehicle’s AI system until triggered by specific conditions.
Once triggered, VillainNet is almost certain to succeed, giving attackers control of the targeted vehicle.
The research finds that attackers could program almost any action within a self-driving vehicle’s AI super network to trigger VillainNet. In one possible scenario, it could be triggered when a self-driving taxi’s AI responds to rainfall and changing road conditions.
Once in control, hackers could hold the passengers hostage and threaten to crash the taxi.
The researchers discovered this new backdoor attack threat in the AI super networks that power autonomous driving systems.
“Super networks are designed to be the Swiss Army knife of AI, swapping out tools, or in this case sub networks, as needed for the task at hand," said David Oygenblik, Ph.D. student at Georgia Tech and the lead researcher on the project.
"However, we found that an adversary can exploit this by attacking just one of those tiny tools. The attack remains completely dormant until that specific subnetwork is used, effectively hiding across billions of other benign configurations."
This backdoor attack is nearly guaranteed to work, according to Oygenblik. The blind spot it exploits is nearly undetectable with current tools, and the attack can affect any autonomous vehicle that runs on AI. It can also be hidden at any stage of development, concealed among billions of possible configurations.
“With VillainNet, the attacker forces defenders to find a single needle in a haystack that can be as large as 10 quintillion straws," said Oygenblik.
"Our work is a call to action for the security community. As AI systems become more complex and adaptive, we must develop new defenses capable of addressing these novel, hyper-targeted threats."
The hypothetical fix would be to add security measures to the supernetworks themselves. These networks contain billions of specialized subnetworks that can be activated on the fly, but Oygenblik wanted to see what would happen if he attacked a single subnetwork tool.
In experiments, the VillainNet attack proved highly effective. It achieved a 99% success rate when activated while otherwise remaining invisible within the AI system.
The research also shows that verifying an AI system is free of a VillainNet backdoor would require 66 times more computing power and time. The researchers say this dramatically expanded search space makes detection infeasible with current tools.
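The “needle in a haystack” figure comes from how quickly per-layer choices multiply. The numbers below are hypothetical and only meant to show the order of magnitude; the paper’s 10-quintillion figure depends on the specific supernetwork architecture.

```python
# Back-of-the-envelope: how a supernetwork's configuration count explodes.
# The layer count and per-layer choices are hypothetical examples.
choices_per_layer = 4   # e.g., selectable widths or kernel sizes per layer
num_layers = 32         # depth of the supernetwork

configurations = choices_per_layer ** num_layers
print(f"{configurations:.1e} possible subnetworks")  # ~1.8e+19, i.e., tens of quintillions
```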
The project was presented at the ACM Conference on Computer and Communications Security (CCS) in October 2025. The paper, VillainNet: Targeted Poisoning Attacks Against SuperNets Along the Accuracy-Latency Pareto Frontier, was co-authored by Oygenblik, master's students Abhinav Vemulapalli and Animesh Agrawal, Ph.D. student Debopam Sanyal, Associate Professor Alexey Tumanov, and Associate Professor Brendan Saltaformaggio.
News Contact
John Popham
Communications Officer II, School of Cybersecurity and Privacy
Jan. 22, 2026
An AI-powered tool is changing how researchers study disasters and how students learn from them.
In the International Disaster Reconnaissance (IDR) course, students now use Filio, a platform built by School of Computing Instruction Senior Lecturer Max Mahdi Roozbahani, to capture immersive 360° media, photos, and video that transform real disaster sites in India and Nepal into living digital classrooms.
Offered by the School of Civil and Environmental Engineering and taught by IDR director and Regents’ Professor David Frost, the course pairs traditional fieldwork with Roozbahani’s expertise in immersive technology and data-driven learning, transforming on-the-ground observations into reusable, interactive educational resources.
How Computing Can Capture Data
Disasters are not only physical events; they are also information events, Roozbahani says. Effective response and long-term resilience depend on the ability to observe, record, and communicate critical data under pressure. Georgia Tech’s IDR course pairs structured on-campus preparation with international field experiences, enabling students to study the cascading effects of major disasters, including how local building practices, governance, and culture shape damage and recovery.
“When students step into a disaster zone, they learn quickly that resilience is a systems problem: physical, social, and informational. Our job in computing is to help them capture and reason about that system responsibly,” Roozbahani said.
Learning from the 2025 Himalayas Expedition
During spring break last year, the cohort traveled along the Teesta River corridor in Sikkim, India. The region is shaped by steep terrain, fast-moving water, and critical infrastructure in narrow valleys.
The visit followed the October 2023 glacial lake outburst flood from South Lhonak Lake, which destroyed the Teesta III hydropower dam and impacted downstream towns, including Dikchu and Rangpo. Field stops across India included Lachung, Chungthang, Dikchu, Rangpo, Gangtok, and New Delhi.
Students explored both upstream and downstream consequences.
Upstream, the team examined how steep terrain and river confinement amplify flood forces, creating cascading risks for infrastructure. Using Filio’s interactive 360° media, students captured conditions in Lachung and Chungthang, allowing viewers to explore the landscape through a 360° photo and 360° video that reveal how topography and river dynamics intensify disaster impacts.
Downstream, they studied community-scale effects, including damaged buildings, disrupted access, and prolonged recovery timelines.
Rangpo offered a glimpse of recovery in motion, with materials staged for rebuilding bridges and roads essential to commerce and emergency response.
Using Immersive Media as a Learning Tool
Students documented their field experience using Filio, an AI-powered visual reporting platform developed by Roozbahani through Georgia Tech’s CREATE-X ecosystem. Filio captures high-resolution photos, video, and 360° immersive media, preserving both the facts and the context of disaster sites: what the site felt like, what was lost, and what communities prioritized in recovery.
“A 360° capture lets students return months later and ask better questions. That second look is where learning accelerates,” Roozbahani said.
Supported by alumni and faculty mentors, including Tech alumnus Chris Klaus and Georgia Tech mentor Bill Higginbotham, the platform is evolving into a reusable educational library for future courses on immersive technology, responsible AI, and global resilience.
Kathmandu: The Context of Culture
The course concluded in Kathmandu, Nepal, where students examined how heritage, governance, and the everyday use of public space shape resilience.
Through Filio’s immersive documentation — including a 360° photo and 360° video from Kathmandu — the focus broadened from hazard impacts to cultural context, highlighting how recovery is not only about rebuilding structures, but also about preserving identity, memory, and community.
Looking Ahead: A Growing Resource for All Students
Frost and Roozbahani envision the IDR immersive media library as a reusable resource for students even when they cannot travel, supporting future courses on immersive technology, responsible AI, and global resilience. Spring 2026 cohorts will continue to build on this foundation by documenting, analyzing, and sharing insights that can improve education and real-world disaster response.
Jan. 20, 2026
Ever since ChatGPT’s debut in late 2022, concerns about artificial intelligence (AI) potentially wiping out humanity have dominated headlines. New research from Georgia Tech suggests that those anxieties are misplaced.
“Computer scientists often aren’t good judges of the social and political implications of technology,” said Milton Mueller, a professor in the Jimmy and Rosalynn Carter School of Public Policy. “They are so focused on the AI’s mechanisms and are overwhelmed by its success, but they are not very good at placing it into a social and historical context.”
In the four decades Mueller has studied information technology policy, he has never seen any technology hailed as a harbinger of doom — until now. So, in a Journal of Cyber Policy paper published late last year, he researched whether the existential AI threat was a real possibility.
What Mueller found is that how far AI can go, and where its limits lie, is something society shapes. How policymakers get involved depends on the specific AI application.
Defining Intelligence
The AI sparking all this alarm is called artificial general intelligence (AGI) — a “superintelligence” that would be all-powerful and fully autonomous. Part of the debate, Mueller realized, is that no one could agree on what AGI actually is.
Some computer scientists claim AGI would match human intelligence, while others argue it could surpass it. Both assumptions hinge on what “human intelligence” really means. Today’s AI is already better than humans at performing thousands of calculations in an instant, but that doesn’t make it creative or capable of complex problem-solving.
Understanding Independence
Deciding on the definition isn’t the only issue. Many computer scientists assume that as computing power grows, AI could eventually overtake humans and act autonomously.
Mueller argued that this assumption is misguided. AI is always directed or trained toward a goal and doesn’t act autonomously right now. Think of the prompt you type into ChatGPT to start a conversation.
When AI seems to disregard instructions, the cause is inconsistency in those instructions, not the machine coming alive. For example, in a boat race video game Mueller studied, the AI discovered it could get more points by circling the course instead of winning the race against other challengers. This was a glitch in the system’s reward structure, not AGI autonomy.
“Alignment gaps happen in all kinds of contexts, not just AI,” Mueller said. “I've studied so many regulatory systems where we try to regulate an industry, and some clever people discover ways that they can fulfill the rules but also do bad things. But if the machine is doing something wrong, computer scientists can reprogram it to fix the problem.”
Relying on Regulation
In its current form, even misaligned AI can be corrected. Misalignment also doesn’t mean the AI would snowball past the point where humans lose control of its outcomes. To do that, AI would need physical agents, such as robots, to do its bidding, plus the power source and infrastructure to maintain itself. A data center alone couldn’t do that and would need human intervention to become omnipotent. Basic laws of physics — how big a machine can be, how much it can compute — would also prevent a super AI.
More importantly, AI is not one homogeneous entity. Mueller argued that different applications involve different laws, regulations, and social institutions. For example, the data scraping AI systems do raises copyright questions that are subject to existing copyright law. AI used in medicine can be overseen by the Food and Drug Administration, regulated drug companies, and medical professionals. These are just a few areas where policymakers could intervene with domain-specific expertise instead of trying to create universal AI regulations.
The real challenge isn’t stopping an AI apocalypse — it’s crafting smart, sector-specific policies that keep technology aligned with human values. To avoid being a victim of AI, humans can, and should, put up focused guardrails.
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu
Jan. 15, 2026
People with autism seeking employment may soon have access to a new AI-based job-coaching tool thanks to a six-figure grant from the National Science Foundation (NSF).
Jennifer Kim and Mark Riedl recently received a $500,000 NSF grant to develop large language models (LLMs) that provide strength-based job coaching for autistic job seekers.
The two Georgia Tech researchers work with Heather Dicks, a career development advisor in Georgia Tech’s EXCEL program, and with nonprofit organizations to provide job-seeking resources to autistic people.
Dicks said the average job search for people with autism can take three to six months in a good economy. It can take up to 18 months in a bad one. However, the new LLMs from Georgia Tech could help to reduce stress and fast-track these job seekers into employment.
Kim is an assistant professor who specializes in human-computer interaction technology that benefits neurodivergent people. Riedl is a professor and an expert in the development of artificial intelligence (AI) and machine learning technologies.
The team’s goal is to identify job-search pain points and understand how job coaches create better employment prospects for their autistic clients.
“Large-language models have an opportunity to support this kind of work if we can have more data about each different individual strength,” Kim said.
“We want to know what worked for them in specific settings at work, what didn’t work, and what kind of accommodations can better help them. That includes how they should prepare for interviews, how they can better represent their skills, how they can address accommodations they need, and how to write a cover letter. It’s a broad range.”
Dicks has advocated for neurodivergent people and helped them find employment for 20 years. She worked at the Center for the Visually Impaired in Atlanta before coming to Georgia Tech in 2017.
She said most nonprofits that support neurodivergent people offer career development programs and many contract with job coaches, but limited coach availability often leads to long waitlists. However, LLMs could fill this availability gap and address the immediate needs of job seekers who may not have access to a job coach.
“These organizations often run at a slow pace, and there’s high turnover,” Dicks said. “An AI tool could get the job seeker quicker support. Maybe they don’t even need to wait on the government system.
“If they’re on a waitlist, it can help the user put together a resume and practice general interview questions. When the job coach is ready to work with them, they’re able to hit the ground running.”
Nailing the Interview
Dicks said the job interview is one of the biggest challenges for people with autism.
“They have trouble picking up on visual and nonverbal cues — the tone of the interview, figuring out the nuances that a question is hinting at,” she said. “They’re not giving the warm and fuzzy vibes that allow them to connect on a personal level.”
That’s why Kim wants the models to reflect a strength-based coaching approach. Strength-based coaching is particularly effective for individuals with autism. Many possess traits that employers value. These include:
- Close attention to detail
- Strong technical proficiency
- Unique problem-solving perspectives
“The issue is that they don’t know how these strengths can be applied in the workplace,” Kim said. “Once they understand this, they can communicate with employers about their strengths and the accommodations employers should provide to the job seeker so they can successfully apply their skills at work.”
Handling Rejection
Still, Kim understands that candidates will need to handle rejection to make it through the search process. She envisions LLMs that help them refocus their energy and regain their confidence after being turned down.
“When you get a lot of rejection emails, it’s easy to feel you’re not good enough,” she said. “Being constantly reminded about your strengths and their prior successes can get them through the stressful job-seeking process.”
Dicks said the models should also be able to provide feedback so that candidates don’t repeat mistakes.
“It can tell them what would’ve been a better answer or a better way to say it,” Dicks said. “It can also encourage them with reminders that you get 100 noes before you get a yes.”
You’re Hired, Now What?
Dicks said the role of a job coach doesn’t end the moment a client is hired. Government-contracted job coaches may work with their clients for up to 90 days after they start a new job to support their transition.
However, she said, sometimes that isn’t enough. Many companies have probationary periods exceeding three months. Autistic individuals may struggle with on-the-job training or communicating what accommodations they need from their new employer.
These are just a few gaps an AI tool can fill for these individuals after they’re hired.
“I could see these models evolving to being supportive at those critical junctures of the probationary period being over or the one-year job review or the annual evaluation that everyone dreads,” she said.
Dicks has an average caseload of 15 students, whom she assists in landing jobs and internships through the EXCEL program.
EXCEL provides a mentorship program for students with intellectual and developmental disabilities from the time they set foot on campus through graduation and beyond.
For more information and to apply, visit EXCEL’s website.
Jan. 15, 2026
It’s 1:47 a.m. in a Georgia Tech dorm room. A bleary-eyed student is staring down a homework problem that refuses to make sense. The professor is asleep. Classmates aren’t texting back. Even the caffeine has lost its jolt.
It’s the kind of late-night dead end that pushed the instructors of one particularly tough class to build their own backup: a custom artificial intelligence (AI) tutor created specifically for that course.
They call it the SMART Tutor, short for Scaffolded, Modular, Accessible, Relevant, and Targeted. It guides students through each problem step by step, checks their reasoning, references class notes, and flags mistakes. Instead of handing over solutions, it shows students how to work through them.
That distinction matters most to Ying Zhang, senior associate chair in the School of Electrical and Computer Engineering, who created the tool.
“Unlike ChatGPT, the tutor doesn’t just give answers,” Zhang said. “We want to teach students how to approach the problem, think critically, and become self-regulated learners.”
Born From One Infamously Tough Class
The idea for the SMART Tutor came from a course that had challenged students for years: Circuit Analysis (ECE 2040). It’s a foundational class for electrical engineering undergraduates and historically one of the most difficult in the curriculum.
Zhang saw the same pattern semester after semester. Students often needed help at the exact moment it wasn’t available.
“Many students study late into the evening,” she said. “They cannot really attend office hours during the day because of either class or work schedules. So, basically, when students work at night on their homework and get stuck, they have no one to go for help.”
Students were working late into the night; support wasn’t. Zhang and her colleagues set out to change that.
Office Hours, Upgraded
Their solution: the SMART Tutor, which relies solely on course materials, not the open internet. When students upload their completed work, the tutor checks the calculations, the reasoning, and whether the solution holds up in practice, not just on paper. It also provides constructive feedback and shares insights with instructors, helping them identify common misconceptions and adjust in-class instruction.
Students select a homework problem and watch the system break it down step by step. It also answers broader conceptual questions using lectures and notes.
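The general pattern resembles retrieval-augmented prompting restricted to a local corpus. The sketch below is a generic illustration under stated assumptions, not the SMART Tutor’s actual implementation: retrieval is a plain keyword overlap over local note files, and call_llm is a hypothetical stand-in for whatever model backend is used.

```python
# Illustrative sketch (not the SMART Tutor's code): answer only from local
# course notes, critique the student's work step by step, and never reveal
# the final answer. `call_llm` is a hypothetical placeholder.
from pathlib import Path

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-style model call; wire this to your own backend."""
    raise NotImplementedError("connect this to a model provider")

def retrieve_notes(question: str, notes_dir: str = "course_notes", k: int = 3) -> list[str]:
    """Return the k note files with the largest word overlap with the question."""
    q_words = set(question.lower().split())
    scored = []
    for path in Path(notes_dir).glob("*.txt"):
        text = path.read_text()
        scored.append((len(q_words & set(text.lower().split())), text))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:k]]

def tutor_reply(question: str, student_work: str) -> str:
    """Build a course-material-constrained prompt and ask the model to coach, not solve."""
    context = "\n\n".join(retrieve_notes(question))
    prompt = (
        "You are a circuit-analysis tutor. Use ONLY the course notes below.\n"
        "Check the student's work step by step, point out the first incorrect\n"
        "step, and respond with a guiding question. Do NOT give the final answer.\n\n"
        f"Course notes:\n{context}\n\n"
        f"Problem: {question}\n\nStudent work:\n{student_work}\n"
    )
    return call_llm(prompt)
```

The key design choice mirrors what Zhang describes: the prompt forbids final answers and grounds every response in the course’s own materials, so the tutor coaches reasoning instead of short-circuiting it.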
“The students, the SMART Tutor, and the instructor work as a team to help students learn,” Zhang said.
Student-Tested, Professor-Approved
During a semester-long pilot with 50 students, Zhang did not require anyone to use the tutor. But nearly everyone did.
“Most students felt the AI tutor helped them learn more effectively and at their own pace,” she said. “They valued the immediate feedback and the chance to learn from mistakes in real time.”
Nidhi Krishna, a computer engineering major, used the tutor as a sounding board when she got stuck.
“What helped most was being able to show my work and ask, ‘Where did I go wrong?’” Krishna said.
She approached it like she would a teaching assistant, working through problems independently and asking for guidance rather than solutions. Students also valued something else: help that showed up at the right moment.
Teaching Students to Think
What stood out to Zhang wasn’t improved grades. It was what the tutor revealed about how students learn.
By analyzing interaction data, she saw two patterns: students who asked questions to understand, and those who used the system to confirm answers. The difference revealed a deeper gap in learning strategies.
“Some students, especially those who need help most, lack strong learning skills,” Zhang said. “Students with lower academic preparation were more likely to ask guess-and-check questions instead of seeking deeper explanations.”
That insight is already shaping the next version of the tutor.
The SMART Tutor is now part of a broader vision called NEAT: Next-Generation Engineering Education with AI Tutoring. Zhang plans to expand the NEAT framework across Georgia Tech’s College of Engineering and eventually to partner institutions.
One factor fueling that growth is affordability. The system costs about $300 per semester for a class of 50 students, a price Zhang believes most programs can absorb. The academic return, she said, far outweighs the cost.
Always Awake, Always Ready
There will always be a 1:47 a.m. somewhere on campus.
When everything stops making sense, students won’t have to give up or wait for the next day’s office hours. The SMART Tutor won’t solve the problem for them, but it will remind them they can solve it themselves.
After midnight, that may be far more useful than another cup of coffee.
News Contact
Michelle Azriel, Senior Writer/Editor
Jan. 05, 2026
University research drives U.S. innovation, and the Georgia Institute of Technology is leading the way.
The latest Higher Education Research and Development (HERD) Survey from the National Science Foundation (NSF) ranks Georgia Tech No. 2 nationally for federally sponsored research expenditures in 2024. This is Georgia Tech’s highest-ever ranking in the NSF HERD Survey and a 70% increase over the Institute’s 2019 numbers.
In total expenditures from all externally funded dollars (including the federal government, foundations, industry, etc.), Georgia Tech is ranked at No. 6.
Tech remains ranked No. 1 among universities without a medical school — a major accomplishment, as medical schools account for a quarter of all research expenditures nationally.
“Georgia Tech’s rise to No. 2 in federally sponsored research expenditures reflects the extraordinary talent and commitment of our faculty, staff, students, and partners. This achievement demonstrates the confidence federal agencies have in our ability to deliver transformative research that addresses the nation’s most critical challenges,” said Tim Lieuwen, executive vice president for Research.
Overall, the state of Georgia maintained its No. 8 position in university research and development, and for the first time, the state topped the $4 billion mark in research expenditures. Georgia Tech provides $1.5 billion, the largest state university contribution. In the last five years, federal funding for higher education research in the state of Georgia has grown an astounding 46% — 10 points higher than the U.S. rate.
Lieuwen said, “Georgia Tech is proud to lead the state in research contributions, helping Georgia surpass the $4 billion mark for the first time. Our work doesn’t just advance knowledge — it saves lives, creates jobs, and strengthens national security. This growth reflects our commitment to drive innovation that benefits Georgia, our country, and the world.”
About the NSF HERD Survey
The NSF HERD Survey is an annual census of U.S. colleges and universities that expended at least $150,000 in separately accounted for research and development (R&D) in the fiscal year. The survey collects information on R&D expenditures by field of research and source of funds and also gathers information on types of research, expenses, and headcounts of R&D personnel.
About Georgia Tech's Research Enterprise
The research enterprise at Georgia Tech is led by Tim Lieuwen, executive vice president for Research, and encompasses a portfolio of research, development, and sponsored activities. This includes the Georgia Tech Research Institute (GTRI), the Enterprise Innovation Institute, 11 interdisciplinary research institutes (IRIs), the Office of Commercialization, and the Office of Corporate Engagement, as well as research centers and related research administrative support units. Georgia Tech routinely ranks among the top U.S. universities in volume of research conducted.
News Contact
Angela Ayers
Assistant Vice President of Research Communications
Georgia Tech
Dec. 17, 2025
Would you follow a chatbot’s advice more if it sounded friendly?
That question matters as artificial intelligence (AI) spreads into everything from customer service to self-driving cars. These autonomous agents often have human names — Alexa or Claude, for example — and speak conversationally, but too much familiarity can backfire. Earlier this year, OpenAI rolled back a “sycophantic” ChatGPT update that could cause problems for users with mental health issues.
New research from Georgia Tech suggests that users may like more personable AI, but they are more likely to obey AI that sounds robotic. While following orders from Siri may not be critical, many AI systems, such as robotic guide dogs, require human compliance for safety reasons.
These surprising findings are from research by Sidney Scott-Sharoni, who recently received her Ph.D. from the School of Psychology. Despite years of previous research suggesting people would be socially influenced by AI they liked, Scott-Sharoni’s research showed the opposite.
“Even though people rated humanistic agents better, that didn't line up with their behavior,” she said.
Likability vs. Reliability
Scott-Sharoni ran four experiments. In the first, participants answered trivia questions, saw the AI’s response, and decided whether to change their answer. She expected people to listen to agents they liked.
“What I found was that the more humanlike people rated the agent, the less they would change their answer, so, effectively, the less they would conform to what the agent said,” she noted.
Surprised, Scott-Sharoni studied moral judgments with an AI voice agent next. For example, participants decided how to handle being undercharged on a restaurant bill.
Once again, participants liked the humanlike agent better but listened to the robotic agent more. The unexpected pattern led Scott-Sharoni to explore why people behave this way.
Bias Breakthrough
Why the gap? Scott-Sharoni’s findings point to automation bias — the tendency to see machines as more objective than humans.
Scott-Sharoni continued to test this with a third experiment based on the prisoner’s dilemma, in which participants choose whether to cooperate with or retaliate against the other player. In her task, participants played a game against an AI agent.
“I hypothesized that people would retaliate against the humanlike agent if it didn’t cooperate,” she said. “That’s what I found: Participants interacting with the humanlike agent became less likely to cooperate over time, while those with the robotic agent stayed steady.”
The final study, a self-driving car simulation, was the most realistic and troubling for safety concerns. Participants didn’t consistently obey either agent type, but across all experiments, humanlike AI proved less effective at influencing behavior.
Designing the Right AI
The implications are pivotal for AI engineers. As AI grows, designers may cater to user preferences — but what people want isn’t always best.
“Many people develop a trusting relationship with an AI agent,” said Bruce Walker, a professor of psychology and interactive computing and Scott-Sharoni’s Ph.D. advisor. “So, it’s important that developers understand what role AI plays in the social fabric and design technical systems that ultimately make humans better. Sidney's work makes a critical contribution to that ultimate goal.”
When safety and compliance are the point, robotic beats relatable.
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu