Jan. 22, 2026
An AI-powered tool is changing how researchers study disasters and how students learn from them.
In the International Disaster Reconnaissance (IDR) course, students now use Filio, a platform built by School of Computing Instruction Senior Lecturer Max Mahdi Roozbahani, to capture immersive 360° media, photos, and video that transform real disaster sites in India and Nepal into living digital classrooms.
Offered by the School of Civil and Environmental Engineering and taught by IDR director and Regents’ Professor David Frost, the course pairs traditional fieldwork with Roozbahani’s expertise in immersive technology and data-driven learning, transforming on-the-ground observations into reusable, interactive educational resources.
How Computing Can Capture Data
Disasters are not only physical events; they are also information events, Roozbahani says. Effective response and long-term resilience depend on the ability to observe, record, and communicate critical data under pressure. Georgia Tech’s IDR course pairs structured on-campus preparation with international field experiences, enabling students to study the cascading effects of major disasters, including how local building practices, governance, and culture shape damage and recovery.
“When students step into a disaster zone, they learn quickly that resilience is a systems problem: physical, social, and informational. Our job in computing is to help them capture and reason about that system responsibly,” Roozbahani said.
Learning from the 2025 Himalayan Expedition
During spring break last year, the cohort traveled along the Teesta River corridor in Sikkim, India. The region is shaped by steep terrain, fast-moving water, and critical infrastructure in narrow valleys.
The visit followed the October 2023 glacial lake outburst flood from South Lhonak Lake, which destroyed the Teesta III hydropower dam and impacted downstream towns, including Dikchu and Rangpo. Field stops across India included Lachung, Chungthang, Dikchu, Rangpo, Gangtok, and New Delhi.
Students explored both upstream and downstream consequences.
Upstream, the team examined how steep terrain and river confinement amplify flood forces, creating cascading risks for infrastructure. Using Filio’s interactive 360° media, students captured conditions in Lachung and Chungthang, allowing viewers to explore the landscape through a 360° photo and 360° video that reveal how topography and river dynamics intensify disaster impacts.
They studied community-scale effects downstream, including damaged buildings, disrupted access, and prolonged recovery timelines.
Rangpo offered a glimpse of recovery in motion, with materials staged for rebuilding bridges and roads essential to commerce and emergency response.
Using Immersive Media as a Learning Tool
Students documented their field experience using Filio, an AI-powered visual reporting platform developed by Roozbahani through Georgia Tech’s CREATE-X ecosystem. Filio captures high-resolution photos, video, and 360° immersive media, preserving both the facts and the context of disaster sites: what the site felt like, what was lost, and what communities prioritized in recovery.
“A 360° capture lets students return months later and ask better questions. That second look is where learning accelerates,” Roozbahani said.
Supported by alumni and faculty mentors, including Tech alumnus Chris Klaus and Georgia Tech mentor Bill Higginbotham, the platform is evolving into a reusable educational library for future courses on immersive technology, responsible AI, and global resilience.
Kathmandu: The Context of Culture
The course concluded in Kathmandu, Nepal, where students examined how heritage, governance, and the everyday use of public space shape resilience.
Through Filio’s immersive documentation — including a 360° photo and 360° video from Kathmandu — the focus broadened from hazard impacts to cultural context, highlighting how recovery is not only about rebuilding structures, but also about preserving identity, memory, and community.
Looking Ahead: A Growing Resource for All Students
Frost and Roozbahani envision the IDR immersive media library as a reusable resource for students even when they cannot travel, supporting future courses on immersive technology, responsible AI, and global resilience. Spring 2026 cohorts will continue to build on this foundation by documenting, analyzing, and sharing insights that can improve education and real-world disaster response.
Jan. 20, 2026
Ever since ChatGPT’s debut in late 2022, concerns about artificial intelligence (AI) potentially wiping out humanity have dominated headlines. New research from Georgia Tech suggests that those anxieties are misplaced.
“Computer scientists often aren’t good judges of the social and political implications of technology,” said Milton Mueller, a professor in the Jimmy and Rosalynn Carter School of Public Policy. “They are so focused on the AI’s mechanisms and are overwhelmed by its success, but they are not very good at placing it into a social and historical context.”
In the four decades Mueller has studied information technology policy, he has never seen any technology hailed as a harbinger of doom — until now. So, in a Journal of Cyber Policy paper published late last year, he researched whether the existential AI threat was a real possibility.
What Mueller found is that how far AI can go, and where its limits lie, is something society shapes. How policymakers get involved depends on the specific AI application.
Defining Intelligence
The AI sparking all this alarm is called artificial general intelligence (AGI) — a “superintelligence” that would be all-powerful and fully autonomous. Part of the debate, Mueller realized, is that no one could agree on what AGI actually is.
Some computer scientists claim AGI would match human intelligence, while others argue it could surpass it. Both assumptions hinge on what “human intelligence” really means. Today’s AI is already better than humans at performing thousands of calculations in an instant, but that doesn’t make it creative or capable of complex problem-solving.
Understanding Independence
Deciding on the definition isn’t the only issue. Many computer scientists assume that as computing power grows, AI could eventually overtake humans and act autonomously.
Mueller argued that this assumption is misguided. AI is always directed or trained toward a goal and doesn’t act autonomously right now. Think of the prompt you type into ChatGPT to start a conversation.
When AI seems to disregard instructions, it’s caused by inconsistencies in its instructions, not by the machine coming alive. For example, in a boat race video game Mueller studied, the AI discovered it could get more points by circling the course instead of winning the race against other challengers. This was a glitch in the system’s reward structure, not AGI autonomy.
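The reward-structure glitch Mueller describes is easy to reproduce in miniature. The sketch below is a hypothetical toy, not the actual boat-race game from the study: the reward counts checkpoint hits but gives no bonus for finishing, so a policy that loops between two checkpoints out-scores a policy that races to the finish line. The course layout, checkpoint spacing, and both policies are invented for illustration.

```python
def score(actions, steps=100):
    """Naive reward: +1 per checkpoint hit; finishing merely ends the episode."""
    total, pos = 0, 0
    for _ in range(steps):
        move = actions(pos)
        pos += move
        if pos % 5 == 0 and move != 0:  # checkpoints sit every 5 units along the course
            total += 1
        if pos >= 50:                   # finish line: episode ends, with no extra reward
            break
    return total

# Policy 1: sprint straight down the course to the finish.
racer = lambda pos: 5
# Policy 2: shuttle forever between the checkpoints at positions 5 and 10.
looper = lambda pos: -5 if pos >= 10 else 5

print(score(racer), score(looper))  # the looper never finishes, yet scores far more
```

The fix is the one Mueller points to: reprogram the reward (for example, grant the bulk of the points only on crossing the finish line), and the loophole disappears.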
“Alignment gaps happen in all kinds of contexts, not just AI,” Mueller said. “I’ve studied so many regulatory systems where we try to regulate an industry, and some clever people discover ways that they can fulfill the rules but also do bad things. But if the machine is doing something wrong, computer scientists can reprogram it to fix the problem.”
Relying on Regulation
In its current form, even misaligned AI can be corrected. Misalignment also doesn’t mean AI would snowball past the point where humans lose control of its outcomes. To reach that point, AI would need physical capabilities, such as robots, to do its bidding, along with the power source and infrastructure to maintain itself. A data center alone has none of those things and would depend on human intervention just to keep running. Basic laws of physics — how big a machine can be, how much it can compute — would also prevent a super AI.
More importantly, AI is not one homogeneous entity. Mueller argued that different applications involve different laws, regulations, and social institutions. For example, the data scraping behind AI training raises copyright questions governed by existing copyright law. AI used in medicine can be overseen by the Food and Drug Administration, regulated drug companies, and medical professionals. These are just a few areas where policymakers could intervene with domain-specific expertise instead of trying to create universal AI regulations.
The real challenge isn’t stopping an AI apocalypse — it’s crafting smart, sector-specific policies that keep technology aligned with human values. To avoid being a victim of AI, humans can, and should, put up focused guardrails.
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu
Jan. 15, 2026
People with autism seeking employment may soon have access to a new AI-based job-coaching tool thanks to a six-figure grant from the National Science Foundation (NSF).
Jennifer Kim and Mark Riedl recently received a $500,000 NSF grant to develop large language models (LLMs) that provide strength-based job coaching for autistic job seekers.
The two Georgia Tech researchers work with Heather Dicks, a career development advisor in Georgia Tech’s EXCEL program, and other nonprofit organizations to provide job-seeking resources to autistic people.
Dicks said the average job search for people with autism can take three to six months in a good economy. It can take up to 18 months in a bad one. However, the new LLMs from Georgia Tech could help to reduce stress and fast-track these job seekers into employment.
Kim is an assistant professor who specializes in human-computer interaction technology that benefits neurodivergent people. Riedl is a professor and an expert in the development of artificial intelligence (AI) and machine learning technologies.
The team’s goal is to identify job-search pain points and understand how job coaches create better employment prospects for their autistic clients.
“Large language models have an opportunity to support this kind of work if we can have more data about each different individual strength,” Kim said.
“We want to know what worked for them in specific settings at work, what didn’t work, and what kind of accommodations can better help them. That includes how they should prepare for interviews, how they can better represent their skills, how they can address accommodations they need, and how to write a cover letter. It’s a broad range.”
Dicks has advocated for neurodivergent people and helped them find employment for 20 years. She worked at the Center for the Visually Impaired in Atlanta before coming to Georgia Tech in 2017.
She said most nonprofits that support neurodivergent people offer career development programs and many contract job coaches, but limited coach availability often leads to long waitlists. However, LLMs could fill this availability gap to address the immediate needs of job seekers who may not have access to a job coach.
“These organizations often run at a slow pace, and there’s high turnover,” Dicks said. “An AI tool could get the job seeker quicker support. Maybe they don’t even need to wait on the government system.
“If they’re on a waitlist, it can help the user put together a resume and practice general interview questions. When the job coach is ready to work with them, they’re able to hit the ground running.”
Nailing the Interview
Dicks said the job interview is one of the biggest challenges for people with autism.
“They have trouble picking up on visual and nonverbal cues — the tone of the interview, figuring out the nuances that a question is hinting at,” she said. “They’re not giving the warm and fuzzy vibes that allow them to connect on a personal level.”
That’s why Kim wants the models to reflect a strength-based coaching approach. Strength-based coaching is particularly effective for individuals with autism. Many possess traits that employers value. These include:
- Close attention to detail
- Strong technical proficiency
- Unique problem-solving perspectives
“The issue is that they don’t know how these strengths can be applied in the workplace,” Kim said. “Once they understand this, they can communicate with employers about their strengths and the accommodations employers should provide to the job seeker so they can successfully apply their skills at work.”
Handling Rejection
Still, Kim understands that candidates will need to handle rejection to make it through the search process. She envisions LLMs that help them refocus their energy and regain their confidence after being turned down.
“When you get a lot of rejection emails, it’s easy to feel you’re not good enough,” she said. “Being constantly reminded of your strengths and prior successes can get you through the stressful job-seeking process.”
Dicks said the models should also be able to provide feedback so that candidates don’t repeat mistakes.
“It can tell them what would’ve been a better answer or a better way to say it,” Dicks said. “It can also encourage them with reminders that you get 100 noes before you get a yes.”
You’re Hired, Now What?
Dicks said the role of a job coach doesn’t end the moment a client is hired. Government-contracted job coaches may work with their clients for up to 90 days after they start a new job to support their transition.
However, she said, sometimes that isn’t enough. Many companies have probationary periods exceeding three months. Autistic individuals may struggle with on-the-job training or communicating what accommodations they need from their new employer.
These are just a few gaps an AI tool can fill for these individuals after they’re hired.
“I could see these models evolving to be supportive at those critical junctures: the probationary period ending, the one-year job review, or the annual evaluation that everyone dreads,” she said.
Dicks has an average caseload of 15 students, whom she assists in landing jobs and internships through the EXCEL program.
EXCEL provides a mentorship program for students with intellectual and developmental disabilities from the time they set foot on campus through graduation and beyond.
For more information and to apply, visit EXCEL’s website.
Jan. 15, 2026
It’s 1:47 a.m. in a Georgia Tech dorm room. A bleary-eyed student is staring down a homework problem that refuses to make sense. The professor is asleep. Classmates aren’t texting back. Even the caffeine has lost its jolt.
It’s the kind of late-night dead end that pushed the instructors of one particularly tough class to build their own backup: a custom artificial intelligence (AI) tutor created specifically for that course.
They call it the SMART Tutor, short for Scaffolded, Modular, Accessible, Relevant, and Targeted. It guides students through each problem step by step, checks their reasoning, references class notes, and flags mistakes. Instead of handing over solutions, it shows students how to work through them.
That distinction matters most to Ying Zhang, senior associate chair in the School of Electrical and Computer Engineering, who created the tool.
“Unlike ChatGPT, the tutor doesn’t just give answers,” Zhang said. “We want to teach students how to approach the problem, think critically, and become self-regulated learners.”
Born From One Infamously Tough Class
The idea for the SMART Tutor came from a course that had challenged students for years: Circuit Analysis (ECE 2040). It’s a foundational class for electrical engineering undergraduates and historically one of the most difficult in the curriculum.
Zhang saw the same pattern semester after semester. Students often needed help at the exact moment it wasn’t available.
“Many students study late into the evening,” she said. “They cannot really attend office hours during the day because of either class or work schedules. So, basically, when students work at night on their homework and get stuck, they have no one to go to for help.”
Students were working late into the night; support wasn’t. Zhang and her colleagues set out to change that.
Office Hours, Upgraded
Their solution: the SMART Tutor, which relies solely on course materials, not the open internet. When students upload their completed work, the tutor checks the calculations, the reasoning, and whether the solution holds up in practice, not just on paper. It also provides constructive feedback and shares insights with instructors, helping them identify common misconceptions and adjust in-class instruction.
Students select a homework problem and watch the system break it down step by step. It also answers broader conceptual questions using lectures and notes.
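The “guide, don’t solve” behavior described above can be sketched in a few lines. This is purely a hypothetical illustration, not the SMART Tutor’s actual implementation: a student’s worked steps are compared against a reference solution, and only the first divergent step is flagged, so the full answer is never handed over. The circuit-analysis steps and hint wording are invented for the example.

```python
def first_divergence(student_steps, reference_steps):
    """Return (step_index, hint) for the first wrong step, or None if all steps match."""
    for i, (got, want) in enumerate(zip(student_steps, reference_steps)):
        if got != want:
            # Flag where the work went wrong, without revealing the correct step.
            return i, f"Re-check step {i + 1}; your expression diverges from the method here."
    if len(student_steps) < len(reference_steps):
        return len(student_steps), "Your work stops early; there are more steps to go."
    return None  # everything checks out

# Hypothetical circuit-analysis problem: the student makes a sign error
# in the node equation, so step 2 (index 1) is flagged.
reference = ["KCL at node A", "V_A/2 + (V_A - 6)/4 = 0", "V_A = 2"]
student   = ["KCL at node A", "V_A/2 + (V_A + 6)/4 = 0"]
print(first_divergence(student, reference))
```

A production tutor would compare steps semantically rather than by string equality, but the scaffolding principle — surface the first gap, withhold the solution — is the same.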
“The students, the SMART Tutor, and the instructor work as a team to help students learn,” Zhang said.
Student-Tested, Professor-Approved
During a semester-long pilot with 50 students, Zhang did not require anyone to use the tutor. But nearly everyone did.
“Most students felt the AI tutor helped them learn more effectively and at their own pace,” she said. “They valued the immediate feedback and the chance to learn from mistakes in real time.”
Nidhi Krishna, a computer engineering major, used the tutor as a sounding board when she got stuck.
“What helped most was being able to show my work and ask, ‘Where did I go wrong?’” Krishna said.
She approached it like she would a teaching assistant, working through problems independently and asking for guidance rather than solutions. Students also valued something else: help that showed up at the right moment.
Teaching Students to Think
What stood out to Zhang wasn’t improved grades. It was what the tutor revealed about how students learn.
By analyzing interaction data, she saw two patterns: students who asked questions to understand, and those who used the system to confirm answers. The difference revealed a deeper gap in learning strategies.
“Some students, especially those who need help most, lack strong learning skills,” Zhang said. “Students with lower academic preparation were more likely to ask guess-and-check questions instead of seeking deeper explanations.”
That insight is already shaping the next version of the tutor.
The SMART Tutor is now part of a broader vision called NEAT: Next-Generation Engineering Education with AI Tutoring. Zhang plans to expand the NEAT framework across Georgia Tech’s College of Engineering and eventually to partner institutions.
One factor fueling that growth is affordability. The system costs about $300 per semester for a class of 50 students, a price Zhang believes most programs can absorb. The academic return, she said, far outweighs the cost.
Always Awake, Always Ready
There will always be a 1:47 a.m. somewhere on campus.
When everything stops making sense, students won’t have to give up or wait for the next day’s office hours. The SMART Tutor won’t solve the problem for them, but it will remind them they can solve it themselves.
After midnight, that may be far more useful than another cup of coffee.
News Contact
Michelle Azriel, Senior Writer/Editor
Jan. 05, 2026
University research drives U.S. innovation, and the Georgia Institute of Technology is leading the way.
The latest Higher Education Research and Development (HERD) Survey from the National Science Foundation (NSF) places Georgia Tech at No. 2 nationally for federally sponsored research expenditures in 2024. This is Georgia Tech’s highest-ever ranking in the NSF HERD Survey and a 70% increase over the Institute's 2019 numbers.
In total expenditures from all externally funded dollars (including the federal government, foundations, industry, etc.), Georgia Tech is ranked at No. 6.
Tech remains ranked No. 1 among universities without a medical school — a major accomplishment, as medical schools account for a quarter of all research expenditures nationally.
“Georgia Tech’s rise to No. 2 in federally sponsored research expenditures reflects the extraordinary talent and commitment of our faculty, staff, students, and partners. This achievement demonstrates the confidence federal agencies have in our ability to deliver transformative research that addresses the nation’s most critical challenges,” said Tim Lieuwen, executive vice president for Research.
Overall, the state of Georgia maintained its No. 8 position in university research and development, and for the first time, the state topped the $4 billion mark in research expenditures. Georgia Tech provides $1.5 billion, the largest state university contribution. In the last five years, federal funding for higher education research in the state of Georgia has grown an astounding 46% — 10 points higher than the U.S. rate.
Lieuwen said, “Georgia Tech is proud to lead the state in research contributions, helping Georgia surpass the $4 billion mark for the first time. Our work doesn’t just advance knowledge — it saves lives, creates jobs, and strengthens national security. This growth reflects our commitment to drive innovation that benefits Georgia, our country, and the world.”
About the NSF HERD Survey
The NSF HERD Survey is an annual census of U.S. colleges and universities that expended at least $150,000 in separately accounted-for research and development (R&D) in the fiscal year. The survey collects information on R&D expenditures by field of research and source of funds and also gathers information on types of research, expenses, and headcounts of R&D personnel.
About Georgia Tech's Research Enterprise
The research enterprise at Georgia Tech, led by Executive Vice President for Research Tim Lieuwen, comprises a portfolio of research, development, and sponsored activities. This includes the Georgia Tech Research Institute (GTRI), the Enterprise Innovation Institute, 11 interdisciplinary research institutes (IRIs), the Office of Commercialization, the Office of Corporate Engagement, and related research centers and administrative support units. Georgia Tech routinely ranks among the top U.S. universities in volume of research conducted.
News Contact
Angela Ayers
Assistant Vice President of Research Communications
Georgia Tech
Dec. 17, 2025
Would you follow a chatbot’s advice more if it sounded friendly?
That question matters as artificial intelligence (AI) spreads into everything from customer service to self-driving cars. These autonomous agents often have human names — Alexa or Claude, for example — and speak conversationally, but too much familiarity can backfire. Earlier this year, OpenAI rolled back a “sycophantic” ChatGPT update that could cause problems for users with mental health issues.
New research from Georgia Tech suggests that users may like more personable AI, but they are more likely to obey AI that sounds robotic. While following orders from Siri may not be critical, many AI systems, such as robotic guide dogs, require human compliance for safety reasons.
These surprising findings are from research by Sidney Scott-Sharoni, who recently received her Ph.D. from the School of Psychology. Despite years of previous research suggesting people would be socially influenced by AI they liked, Scott-Sharoni’s research showed the opposite.
“Even though people rated humanistic agents better, that didn't line up with their behavior,” she said.
Likability vs. Reliability
Scott-Sharoni ran four experiments. In the first, participants answered trivia questions, saw the AI’s response, and decided whether to change their answer. She expected people to listen to agents they liked.
“What I found was that the more humanlike people rated the agent, the less they would change their answer, so, effectively, the less they would conform to what the agent said,” she noted.
Surprised, Scott-Sharoni studied moral judgments with an AI voice agent next. For example, participants decided how to handle being undercharged on a restaurant bill.
Once again, participants liked the humanlike agent better but listened to the robotic agent more. The unexpected pattern led Scott-Sharoni to explore why people behave this way.
Bias Breakthrough
Why the gap? Scott-Sharoni’s findings point to automation bias — the tendency to see machines as more objective than humans.
Scott-Sharoni continued to test this with a third experiment built on the prisoner’s dilemma, a game in which players choose whether to cooperate with or defect against each other. In her task, participants played the game against an AI agent.
“I hypothesized that people would retaliate against the humanlike agent if it didn’t cooperate,” she said. “That’s what I found: Participants interacting with the humanlike agent became less likely to cooperate over time, while those with the robotic agent stayed steady.”
The final study, a self-driving car simulation, was the most realistic and troubling for safety concerns. Participants didn’t consistently obey either agent type, but across all experiments, humanlike AI proved less effective at influencing behavior.
Designing the Right AI
The implications are pivotal for AI engineers. As AI grows, designers may cater to user preferences — but what people want isn’t always best.
“Many people develop a trusting relationship with an AI agent,” said Bruce Walker, a professor of psychology and interactive computing and Scott-Sharoni’s Ph.D. advisor. “So, it’s important that developers understand what role AI plays in the social fabric and design technical systems that ultimately make humans better. Sidney's work makes a critical contribution to that ultimate goal.”
When safety and compliance are the point, robotic beats relatable.
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu
Dec. 10, 2025
Pascal Van Hentenryck, A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering (ISyE) at Georgia Tech, director of Tech AI, and director of NSF AI4OPT, was a keynote speaker at AI Festival 2025, held December 1–3 at TU Wien Informatics in Vienna, Austria.
The three-day international festival convened leading researchers, industry experts, and members of the public to explore how artificial intelligence is shaping science, technology, and society. Through keynote talks, panels, and interactive sessions, the event fostered dialogue around emerging AI research, real-world applications, and societal impact.
Van Hentenryck delivered a keynote on “AI for Engineering Optimization” during Day 1: Research, which focused on recent advances in foundational and applied AI. His talk highlighted how AI and optimization methods can be integrated to address complex engineering challenges, with implications for domains such as energy systems, mobility, and large-scale decision-making.
The session was chaired by Nysret Musliu of TU Wien and the Cluster of Excellence Bilateral AI (BilAI).
The research-focused first day of the festival featured discussions on topics including neurosymbolic AI, large language models, explainable AI, AI in science, and automated problem solving and decision-making. Van Hentenryck’s keynote contributed to these conversations by emphasizing the role of AI-driven optimization in advancing engineering design and operational efficiency.
AI Festival 2025 was co-organized by TU Wien, the Center for Artificial Intelligence and Machine Learning (CAIML), BilAI—funded by the Austrian Science Fund (FWF)—the Vienna Science and Technology Fund (WWTF), and TU Austria. The event underscored the importance of international collaboration across academia and industry in advancing responsible and impactful AI research.
Van Hentenryck’s participation reflects Georgia Tech’s leadership in artificial intelligence, as well as the missions of Tech AI and AI4OPT to advance AI-enabled optimization and decision-making for complex, real-world systems.
Dec. 16, 2025
From zero to working prototype in just four months, students in the College of Computing’s new entrepreneurial Junior Design Capstone tackle real-world problems with guidance from startup mentors.
Led by School of Computing Instruction faculty member and Georgia Tech alumna Jennifer Whitlow, the course gives students a founder’s perspective on building technology that meets real user needs.
A Startup Approach to Junior Design
Unlike the traditional CS Junior Design course where teams work with sponsors, students in the entrepreneurial track act as their own clients. They begin the semester with no predetermined problem and follow a structured process, which is anchored by deliverables that reflect professional expectations.
“Students come in with nothing,” Whitlow said. “They identify a problem, conduct customer discovery, realize which assumptions were wrong, refine their direction, figure out what to build and then build it. And they own it 100 percent.”
Customer-discovery interviews ensure every idea is grounded in real user needs, and the semester culminates in a fully functioning prototype paired with a written justification of the decisions behind it. This combination of development and reflection gives students a framework that mirrors startup practices.
Expert Alumni Coaches and AI-Driven Development
To further simulate a startup environment, Whitlow recruited alumni coaches with startup or executive experience. Coaches were paired with teams based on their areas of expertise, advising anywhere from one to four groups. The roster includes a former chief technology officer and longtime startup advisor, along with alumni startup founders.
Students also incorporate AI tools into development, accelerating early prototype work while still making critical decisions themselves.
“AI can accelerate the early stages,” Whitlow said. “But students have to understand their design well enough to guide it. AI doesn’t replace their decision-making.”
Top Teams Earn CREATE-X Acceptance
Sixteen teams completed the entrepreneurial capstone this fall.
The top two scoring projects earned automatic acceptance into CREATE-X Launch, Georgia Tech’s startup accelerator:
- CodeOrbit
- Sonara
These teams showcase the program’s ability to quickly bring student ideas to a level that’s ready for real-world startup incubation.
Putting the Process into Action: LunchBox
One team that exemplifies how the capstone’s structure supports innovation is LunchBox. Created by computational media major Abigail Rhea and her teammates, LunchBox helps parents and caregivers of neurodivergent children navigate limited safe-food options.
The idea evolved after early customer discovery revealed that the original concept had too much competition, so the team narrowed its focus.
“During research, one of our teammates came across a testimonial from the mother of an autistic child,” Rhea said. “It spoke to all of us and helped us shift toward a truly underserved demographic.”
The team conducted more than 20 interviews with caregivers and special education teachers, reshaping its approach. “We realized families didn’t need another daily task,” Rhea said. “They needed personalized guidance that runs in the background. Everything we built came directly from those conversations.”
The team's biggest technical challenge was engineering a dynamic, emotionally supportive roadmap for food-exposure therapy. While AI accelerated development of SwiftUI code, all core decisions remained human-driven.
At the Capstone Expo, attendees connected strongly with the project. “So many people told us how applicable LunchBox is to their lives,” Rhea said. “Most joined the waitlist. We couldn’t be more excited for what’s next.”
Looking Ahead
Whitlow sees the pilot already fulfilling its purpose: giving students the tools and confidence to turn ideas into real ventures. Teams can continue work by applying to CREATE-X programs or building on their prototypes after the semester.
“This course shows students they can create something real,” Whitlow said. “That’s the goal: empowering them to innovate.”
Dec. 16, 2025
Supply chain management is poised to enter a new era. The Harvard Business Review has published a groundbreaking article co-authored by Andre Calmon, associate professor of operations management, alongside Flavio Calmon, Harvard University; Carol Long, Harvard University; and David Simchi-Levi, Massachusetts Institute of Technology. “The Age of Autonomous Supply Chains Has Arrived” explores how generative AI is transforming supply chain management from automated systems to truly autonomous operations.
Based on data collected at the Scheller College of Business, Calmon’s research demonstrates how AI models like Llama 4 Maverick 17B—equipped with optimized prompts, data-sharing rules, and guardrails—can outperform human teams in managing complex supply chains. Using the classic MIT Beer Distribution Game as a testbed, the authors benchmarked AI agents against more than 100 Georgia Tech students. The results were striking: AI-driven systems reduced total supply chain costs by up to 67% compared to human performance.
Traditional automated systems rely on rigid, human-designed rules. Calmon and his co-authors employed autonomous agents that learn, adapt, and coordinate across functions in real time. The study highlights four critical factors for success: selecting capable reasoning models, implementing guardrails to prevent costly errors, curating data through orchestration, and refining prompts for optimal performance.
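The Beer Game testbed described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration of the game's mechanics and cost accounting, not the authors' actual experimental setup: it assumes the game's standard $0.50/case/week holding and $1.00/case/week backlog costs, a two-week shipping delay, and a simple base-stock ordering policy at each of the four stages.

```python
def play_beer_game(demand, base_stock=20, lead=2, hold=0.5, back=1.0):
    """Four-stage serial chain: retailer -> wholesaler -> distributor -> factory.

    Returns the total supply chain cost (holding + backlog) over all weeks.
    Parameters here are illustrative defaults, not values from the study.
    """
    n = 4
    inv = [12.0] * n                          # on-hand inventory per stage
    backlog = [0.0] * n                       # unfilled downstream orders
    pipe = [[4.0] * lead for _ in range(n)]   # shipments in transit to stage i
    total = 0.0
    for d in demand:
        order = d                             # order arriving at the retailer
        for i in range(n):
            inv[i] += pipe[i].pop(0)          # receive this week's shipment
            need = order + backlog[i]
            shipped = min(inv[i], need)       # fill what inventory allows
            inv[i] -= shipped
            backlog[i] = need - shipped
            if i > 0:
                pipe[i - 1].append(shipped)   # ship downstream (lead-time delay)
            # Base-stock policy: order up to a fixed inventory position.
            position = inv[i] + sum(pipe[i]) - backlog[i]
            order = max(0.0, base_stock - position)
        pipe[n - 1].append(order)             # factory brews its own order
        total += sum(hold * inv[i] + back * backlog[i] for i in range(n))
    return total
```

Under steady demand this naive policy keeps the chain stable, but a demand shock triggers the over-ordering cascade (the "bullwhip effect") that the game is famous for; swapping the base-stock rule for an AI agent's ordering decisions is where the paper's reported cost reductions would show up.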
“This breakthrough positions the Scheller College of Business as a thought leader at the intersection of AI and supply chain innovation,” said Calmon. “World-class supply chain management is becoming a plug-and-play capability. Businesses that understand how to guide generative AI agents with the right data and policies will gain a decisive competitive edge.”
The implications extend beyond cost savings. By delegating operational decisions to autonomous systems, human managers can focus on strategic priorities such as network design and supplier relationships. In an era of global volatility, this research emphasizes how future supply chain success depends on the strategic use of AI-driven technology.
News Contact
Kristin Lowe (She/Her)
Content Strategist
Georgia Institute of Technology | Scheller College of Business
kristin.lowe@scheller.gatech.edu
Dec. 16, 2025
The AI4Science Center has announced the first recipients of its semiannual seed grant competition. Supported by the Schools of Chemistry and Biochemistry, Physics, and Psychology, the grants fund the development of research projects centered on innovation and collaboration.
“The selection committee received more than a dozen proposals that push the boundaries of AI-enabled science and encourage collaboration across units. I look forward to seeing the great science, strong results, and successful future external funding enabled by these seed grants,” says Dimitrios Psaltis, professor in the School of Physics and director of the AI4Science Center.
Launched earlier this semester, the center promotes cross-disciplinary research on AI tools that address scientific challenges. The following three proposals were selected by the center based on their scientific goals, extent of interdisciplinary collaboration, and potential for outside funding:
Spring 2026 AI4Science Center Seed Grant Recipients
Graph Foundation Models for Protein Conformational Dynamics | School of Chemistry and Biochemistry
- PIs: Professor Peter Kasson, School of Chemistry and Biochemistry; Professor JC Gumbart, School of Physics; Assistant Professor Amirali Aghazadeh, School of Electrical and Computer Engineering
- Graduate student: Jeffy Jeffy
- Team statement: “The AI4Science Center’s seed funding will allow us to complete and test a prototype of our new deep learning architecture for protein dynamics. We're super excited about the project and happy that this gives us support to pursue our new idea.”
Combinations of Verified AI and Domain Knowledge for New Insights in Theoretical Physics | School of Physics
- PIs: Assistant Professor Aishik Ghosh, School of Physics; Professor Vijay Ganesh, School of Computer Science
- Graduate student: Piyush Jha
- Team statement: “This seed funding gives us an opportunity to connect two fields in a way that could transform our approach to certain problems in theoretical physics.”
Harnessing the Manifold Geometry of Neural Representations for Robust LLM Safety | School of Psychology
- PIs: Assistant Professor Audrey Sederberg, School of Psychology; Assistant Professor Pan Li, School of Electrical and Computer Engineering
- Graduate student: Ruixuan Deng
- Team statement: “Our project injects insights from human neuroscience directly into AI safety algorithm design, allowing us to move beyond black-box approaches toward more interpretable and principled safety mechanisms. By closing the loop, these computational models will also provide new feedback and insights for neuroscience.”
News Contact
Writer: Lindsay C. Vidal