Oct. 06, 2025
Electric vehicles (EVs) can be environmentally friendly and more cost-effective — until drivers plan a road trip. Charging stations aren’t as prevalent as traditional gas stations, and even if they can be found along the route, they may not be functioning or may already be occupied by other cars.
While EV charging locator apps can show drivers where the nearest charger is, they aren’t always accurate enough to show real-time status, such as whether a charger is working and available. How are drivers supposed to hit the road when they aren’t sure where their next charge is coming from? This uncertainty can be enough to deter drivers from purchasing an EV altogether.
New research from Georgia Tech, Harvard University, and the Massachusetts Institute of Technology suggests that state governments should step in to help. The right policy could inspire data transparency by station hosts, ensuring that EV drivers have reliable networks — and thus encourage EV ownership. The researchers presented their findings in the paper “Charger Data Transparency: Curing Range Anxiety, Powering EV Adoption,” published by Brookings in September.
Data Deserts
The researchers conducted a field experiment to discover the extent of the problem. Across six major interstates spanning 40 U.S. states, just 34% of EV charging stations provide real-time status updates. The researchers found 150- to 350-mile stretches without real-time charger availability data, longer than the stated range of many EV models. This leaves thousands of miles of highways in a data desert.
“We just don't have real-time data infrastructure necessary to build confidence in the reliability of charging, especially in communities along transit corridors,” said Omar Asensio, an associate professor in the Jimmy and Rosalynn Carter School of Public Policy. “It's not that the capability isn’t there. It's that there aren't clear incentives to encourage EV charging station operators to do the right thing and share the data.”
Charging Transparency
Government regulation is necessary to improve charging reliability, according to the researchers. State governments could offer funding for charging stations only if the station host agrees to data transparency. A simpler policy proposal would be for all fast chargers on highways to post their real-time status to an application programming interface, where software developers could access it. This approach would provide reliable information on whether a public charger is operational, and it can make government spending more efficient by leveraging network effects. The research team is already collaborating with state governments from Massachusetts to Georgia to discuss how to make this government regulation a reality.
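As a sketch of what such a disclosure requirement could look like in practice, a station host might publish each charger's status as a simple record that any routing app can poll. The field names and helper function below are illustrative inventions, not taken from the researchers' proposal or any existing charging standard:

```python
# Illustrative sketch of a real-time charger status feed.
# Field names and the helper below are hypothetical examples, not from
# the paper or any existing charging-network API.

def usable_chargers(statuses, min_kw=50):
    """Return chargers a trip planner could rely on right now:
    operational, unoccupied, and fast enough for highway charging."""
    return [
        s for s in statuses
        if s["operational"] and not s["occupied"] and s["power_kw"] >= min_kw
    ]

# Example feed a station host might expose via a public API:
feed = [
    {"station_id": "GA-I75-001", "power_kw": 150, "operational": True,  "occupied": False},
    {"station_id": "GA-I75-002", "power_kw": 150, "operational": False, "occupied": False},
    {"station_id": "GA-I75-003", "power_kw": 7,   "operational": True,  "occupied": False},
]

print([s["station_id"] for s in usable_chargers(feed)])  # → ['GA-I75-001']
```

The point of the proposal is that once every fast charger publishes records like these, any developer can build the filtering above into a navigation app, rather than each network keeping its status data siloed.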
State governments will also benefit, as EVs can help them close the gap on decreasing carbon emissions.
“Electric vehicles are a key strategy for decarbonizing the transportation sector and delivering public health co-benefits, but consumers need to trust that public chargers will work when they need them,” Asensio said. “Until real-time data disclosure standards are addressed, reliable, widespread adoption will be hard. A data-centric approach can enhance the efficiency of existing transportation investments.”
Many states, including Georgia, have also supported EV manufacturing. EV brand Rivian recently broke ground on an assembly plant outside Atlanta. More widespread EV adoption is paramount to making these plants economic successes. Data transparency regulations could be a start toward finally making EVs the ideal road trip vehicle.
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu
Sep. 26, 2025
Two Georgia Tech Ph.D. students created a student-run, faculty-graded, fully accredited course that links math, engineering, and machine learning.
Andrew Rosemberg, with assistance from Michael Klamkin, both student researchers with the U.S. National Science Foundation AI Research Institute for Advances in Optimization (AI4OPT), designed the course to bridge gaps they saw in existing classrooms.
“While Georgia Tech offers excellent courses on optimization, control, and learning, we found no single class that connected all these fields in a cohesive way,” Rosemberg said. “In our research, it was clear these topics are deeply interconnected.”
Problem-driven learning
The course starts with fundamental problems and works backward to the methods required to solve them. Rosemberg said this approach was intentional. He said that courses often center around methods in isolation rather than showing how the methods contribute to the larger context. This keeps the course focused on problem-driven discovery.
The class also serves as a way for Rosemberg and Klamkin to strengthen their own teaching and mentoring skills.
Goals and structure
The primary goal of the course is to help students build a clear understanding of how mathematical programming, classical optimal control, and machine learning techniques such as reinforcement learning connect to one another. Students are also working to produce a structured book by the end of the semester.
“The hope is that this resource will not only solidify our own learning but also serve as a guide for other students who want to approach these problems in the future,” Rosemberg said.
Responsibilities are distributed across participants, with each student delivering lectures, reviewing peers’ work, and contributing to collective discussions. Rosemberg and Klamkin provide additional support where needed, while faculty mentor and director of AI4OPT, Pascal Van Hentenryck, ensures the class stays aligned with broader academic objectives.
Student ownership and collaboration
Rosemberg noted that the student-led model gives students a deeper sense of ownership, making them responsible for their own learning and giving their work greater impact. This model allows students to determine what to learn and why, which promotes critical thinking.
The course uses GitHub as its primary workflow platform. Rosemberg said this adds transparency and prepares students for real-world research practices.
“GitHub functions much like university systems such as Canvas or Piazza. It also has the added benefit of making all contributions visible to the world,” Rosemberg explained. “This helps students take pride and ownership of their work, while also introducing them to Git, an essential tool for software development and modern STEM research.”
Emerging insights and challenges
Students have begun aligning their research with course themes, including shaping qualifying exam topics around the intersections of operations research, optimal control, and reinforcement learning. Rosemberg said exploring the comparative strengths of these fields side by side has been one of the most rewarding outcomes.
Balancing independence with guidance has proven to be the greatest challenge. He said they have been evolving alongside the students in real time and have learned to emphasize mutual responsibility to promote the collective progress of the class.
Looking ahead
Rosemberg said future iterations of the course may place more emphasis on setting expectations early, given the effort required to deliver a lecture in this format.
His advice for others who may want to replicate the model is to focus on building a committed core team.
“Start with a small, motivated group,” Rosemberg said. “Like a startup, success depends less on the structure and more on the dedication of the people involved.”
News Contact
Jaci Bjorne
Sep. 03, 2025
Artificial intelligence is growing fast, and so is the number of computers that power it. Behind the scenes, this rapid growth is putting a huge strain on the data centers that run AI models. These facilities are using more energy than ever.
AI models are getting larger and more complex. Today’s most advanced systems have billions of parameters, the numerical values derived from training data, and run across thousands of computer chips. To keep up, companies have responded by adding more hardware: more chips, more memory, and more powerful networks. This brute-force approach has helped AI make big leaps, but it’s also created a new challenge: Data centers are becoming energy-hungry giants.
Some tech companies are responding by looking to power data centers on their own with fossil fuel and nuclear power plants. AI energy demand has also spurred efforts to make more efficient computer chips.
I’m a computer engineer and a professor at Georgia Tech who specializes in high-performance computing. I see another path to curbing AI’s energy appetite: Make data centers more resource aware and efficient.
Energy and Heat
Modern AI data centers can use as much electricity as a small city. And it’s not just the computing that eats up power. Memory and cooling systems are major contributors, too. As AI models grow, they need more storage and faster access to data, which generates more heat. Also, as the chips become more powerful, removing heat becomes a central challenge.
Data centers house thousands of interconnected computers. Alberto Ortega/Europa Press via Getty Images
Cooling isn’t just a technical detail; it’s a major part of the energy bill. Traditional cooling is done with specialized air conditioning systems that remove heat from server racks. New methods like liquid cooling are helping, but they also require careful planning and water management. Without smarter solutions, the energy requirements and costs of AI could become unsustainable.
Even with all this advanced equipment, many data centers aren’t running efficiently. That’s because different parts of the system don’t always talk to each other. For example, scheduling software might not know that a chip is overheating or that a network connection is clogged. As a result, some servers sit idle while others struggle to keep up. This lack of coordination can lead to wasted energy and underused resources.
A Smarter Way Forward
Addressing this challenge requires rethinking how to design and manage the systems that support AI. That means moving away from brute-force scaling and toward smarter, more specialized infrastructure.
Here are three key ideas:
Address variability in hardware. Not all chips are the same. Even within the same generation, chips vary in how fast they operate and how much heat they can tolerate, leading to heterogeneity in both performance and energy efficiency. Computer systems in data centers should recognize differences among chips in performance, heat tolerance and energy use, and adjust accordingly.
Adapt to changing conditions. AI workloads vary over time. For instance, thermal hotspots on chips can trigger the chips to slow down, fluctuating grid supply can cap the peak power that centers can draw, and bursts of data between chips can create congestion in the network that connects them. Systems should be designed to respond in real time to things like temperature, power availability and data traffic.
Break down silos. Engineers who design chips, software and data centers should work together. When these teams collaborate, they can find new ways to save energy and improve performance. To that end, my colleagues, students and I at Georgia Tech’s AI Makerspace, a high-performance AI data center, are exploring these challenges hands-on. We’re working across disciplines, from hardware to software to energy systems, to build and test AI systems that are efficient, scalable and sustainable.
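A minimal sketch of what "resource aware" could mean in code: instead of treating every chip as identical, a scheduler can place each job on the chip with the most thermal headroom and back off when everything is running hot. The chip parameters and thresholds below are invented for illustration, not drawn from any real data center:

```python
# Hypothetical sketch of heterogeneity- and thermal-aware job placement.
# Chip temperatures and limits are invented for illustration only.

def place_job(chips):
    """Pick the chip with the most thermal headroom among those still
    below their temperature limit; return None if all are too hot."""
    candidates = [c for c in chips if c["temp_c"] < c["limit_c"]]
    if not candidates:
        return None  # back off rather than force a hot chip to throttle
    return max(candidates, key=lambda c: c["limit_c"] - c["temp_c"])

chips = [
    {"id": "gpu0", "temp_c": 78, "limit_c": 85},  # 7 C of headroom
    {"id": "gpu1", "temp_c": 60, "limit_c": 85},  # 25 C of headroom
    {"id": "gpu2", "temp_c": 84, "limit_c": 85},  # nearly throttling
]

print(place_job(chips)["id"])  # → gpu1
```

A real scheduler would also weigh network congestion and available grid power, per the second idea above, but even this toy version shows the coordination gap: it only works if the scheduler can actually see per-chip temperatures, which is exactly the cross-silo visibility the article argues for.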
Scaling With Intelligence
AI has the potential to transform science, medicine, education and more, but risks hitting limits on performance, energy and cost. The future of AI depends not only on better models, but also on better infrastructure.
To keep AI growing in a way that benefits society, I believe it’s important to shift from scaling by force to scaling with intelligence.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
News Contact
Author:
Divya Mahajan, assistant professor of Computer Engineering, Georgia Institute of Technology
Media Contact:
Shelley Wunder-Smith
shelley.wunder-smith@research.gatech.edu
Sep. 04, 2025
Electric vehicles. Rooftop solar. Cycling to work. Knowing where to start when reducing your personal carbon footprint can be daunting. But a new tool from Georgia Tech makes it easier for anyone to figure out how they can help address climate change.
The Drawdown Georgia Solutions Tracker is a digital dashboard that enables everyday Georgians to see how effective various technologies could be for each county. The tracker analyzes public data for 16 solutions — from planting trees to public transit — that can lower greenhouse gas emissions. The tracker is equally essential for policymakers and business leaders, enabling them to identify opportunities to propose legislation or adjust operations to reduce carbon emissions.
To use the tracker, viewers click on a solution to see its impact. Then, they specify a particular county, and the data is tailored to the most relevant metric. For example, if someone picks “plant-based diet” as a solution, they can see how many vegan restaurants are already in their county. The tracker also contrasts the climate solution with a relevant area that might benefit if the solution is implemented. For the plant-based example, the tracker compares it to urban density.
This tracker is one of the many initiatives of Drawdown Georgia, one of the Ray C. Anderson Foundation’s key funding initiatives based on research conducted by Georgia Tech, Georgia State University, the University of Georgia, and Emory University. Drawdown Georgia's goal is to reduce Georgia’s carbon impact by 57% by 2030 and to accelerate Georgia’s progress toward net-zero greenhouse emissions.
Drawdown Georgia also developed a carbon emissions tracker that shows carbon emission levels by county. The dashboard was a success, but the Drawdown Georgia team wanted to create a more proactive tool. The Solutions Tracker was designed so that anyone could make small daily changes to improve the climate — not just track it.
“We began the Drawdown Georgia project with the goal of cutting state pollution significantly,” said Marilyn Brown, Regents' Professor and the Brook Byers Professor of Sustainable Systems in the Jimmy and Rosalynn Carter School of Public Policy. "To get Georgians involved, we decided to focus on local and regional opportunities to reduce emissions.”
Drawdown Data
The data combines federal and state sources from the U.S. Energy Information Administration, the National Renewable Energy Laboratory, and the Department of Agriculture. Some solutions may seem obvious, like planting trees, but others are more niche. For example, decomposing trash often produces methane gas, which means that landfills contribute to greenhouse gas emissions — important information for policymakers to consider when developing carbon reduction strategies.
The researchers hope everyone will use the tracker. Politicians and policymakers can find new ideas for legislation or the adoption of these solutions. Business leaders can find opportunities to hit their decarbonization goals. Georgians can use the tracker to figure out which solutions are most sustainable for their lives. Even scientists can learn which methods to home in on for their research. Since the tracker is available via Creative Commons, anyone can use the data to build their own tools or models.
The tracker is already having a real-world impact. Brown and the Drawdown Georgia team have collaborated with the state of Georgia and the 29-county metro Atlanta area on their carbon action plans. They’ve also partnered with 75 businesses on carbon action plans and other solutions through the Drawdown Georgia Business Compact, managed by the Ray C. Anderson Center for Sustainable Business in the Scheller College of Business. As these stakeholders ask questions about different climate solution impacts, the team has expanded the tracker accordingly. They’ve also recently redesigned the user interface to make it even more accessible for everyday users.
From improved public health to new business opportunities, Georgia stands to gain from reduced greenhouse gas emissions, and Georgia Tech is not only tracking emissions but helping to fix the problem, too.
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu
Sep. 02, 2025
A new version of Georgia Tech’s virtual teaching assistant, Jill Watson, has demonstrated that artificial intelligence can significantly improve the online classroom experience. Developed by the Design Intelligence Laboratory (DILab) and the U.S. National Science Foundation AI Institute for Adult Learning and Online Education (AI-ALOE), the latest version of Jill Watson integrates OpenAI’s ChatGPT and is outperforming OpenAI’s own assistant in real-world educational settings.
Jill Watson not only answers student questions with high accuracy; it also improves teaching presence and correlates with better academic performance. Researchers believe this is the first documented instance of a chatbot enhancing teaching presence in online learning for adult students.
How Jill Watson Shaped Intelligent Teaching Assistants
First introduced in 2016 using IBM’s Watson platform, Jill Watson was the first AI-powered teaching assistant deployed in real classes. It began by responding to student questions on discussion forums like Piazza using course syllabi and a curated knowledge base of past Q&As. Widely covered by major media outlets including The Chronicle of Higher Education, The Wall Street Journal, and The New York Times, the original Jill pioneered new territory in AI-supported learning.
Subsequent iterations addressed early biases in the training data and transitioned to more flexible platforms like Google’s BERT in 2019, allowing Jill to work across learning management systems such as EdStem and Canvas. With the rise of generative AI, the latest version now uses ChatGPT to engage in extended, context-rich dialogue with students using information drawn directly from courseware, textbooks, video transcripts, and more.
Future of Personalized, AI-Powered Learning
Designed around the Community of Inquiry (CoI) framework, Jill Watson aims to enhance “teaching presence,” one of three key factors in effective online learning, alongside cognitive and social presence. Teaching presence includes both the design of course materials and facilitation of instruction. Jill supports this by providing accurate, personalized answers while reinforcing the structure and goals of the course.
The system architecture includes a preprocessed knowledge base, a MongoDB-powered memory for storing conversation history, and a pipeline that classifies questions, retrieves contextually relevant content, and moderates responses. Jill is built to avoid generating harmful content and only responds when sufficient verified course material is available.
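The article doesn't publish Jill Watson's code, but the pipeline it describes — classify the question, retrieve relevant course material, and answer only when enough verified content is found — can be sketched roughly as follows. The keyword-overlap scoring and threshold are stand-ins for illustration; the real system uses ChatGPT with a curated knowledge base and entailment checks:

```python
# Rough sketch of the retrieve-then-answer behavior described above.
# The keyword-overlap scoring and threshold are illustrative stand-ins,
# not the actual Jill Watson implementation.

KNOWLEDGE_BASE = {
    "syllabus": "Assignments are due Fridays at 5 pm Eastern.",
    "grading": "The final project counts for 40 percent of the grade.",
}

def retrieve(question, kb, threshold=2):
    """Return the best-matching passage, or None if the word overlap
    between the question and every passage is below the threshold."""
    q_words = set(question.lower().split())
    best_key, best_score = None, 0
    for key, passage in kb.items():
        score = len(q_words & set(passage.lower().split()))
        if score > best_score:
            best_key, best_score = key, score
    return kb[best_key] if best_score >= threshold else None

def answer(question, kb):
    passage = retrieve(question, kb)
    if passage is None:
        # Decline rather than guess -- mirrors responding only when
        # sufficient verified course material is available.
        return "I don't have verified course material to answer that."
    return passage

print(answer("When are assignments due?", KNOWLEDGE_BASE))
```

The declining branch is the safety-relevant design choice: grounding answers in retrieved, verified material is what lets the system avoid the hallucinated or harmful responses discussed below.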
Field-Tested in Georgia and Beyond
Jill Watson was originally developed for Georgia Tech’s Online Master of Science in Computer Science (OMSCS) program. By fall 2023, it was deployed in the OMSCS artificial intelligence course, serving more than 600 students, as well as in an English course at Wiregrass Georgia Technical College, part of the Technical College System of Georgia (TCSG).
A controlled A/B experiment in the OMSCS course allowed researchers to compare outcomes between students with and without access to Jill Watson, even though all students could use ChatGPT. The findings are striking:
- Jill Watson’s accuracy on synthetic test sets ranged from 75% to 97%, depending on the content source. It consistently outperformed OpenAI’s Assistant, which scored around 30%.
- Students with access to Jill Watson showed stronger perceptions of teaching presence, particularly in course design and organization, as well as higher social presence.
- Academic performance also improved slightly: students with Jill saw more A grades (66% vs. 62%) and fewer C grades (3% vs. 7%).
A Smarter, Safer Chatbot
While Jill Watson uses ChatGPT for natural language generation, it restricts outputs to validated course material and verifies each response using textual entailment. According to a study by Taneja et al. (2024), Jill not only delivers more accurate answers than OpenAI’s Assistant but also avoids producing confusing or harmful content at significantly lower rates.
In the same evaluation, Jill Watson answered correctly 78.7% of the time, with only 2.7% of its errors categorized as harmful and 54.0% as confusing. In contrast, OpenAI’s Assistant demonstrated a much lower accuracy of 30.7%, with harmful failures occurring 14.4% of the time and confusing failures rising to 69.2%. Jill Watson also had a lower retrieval failure rate of 43.2%, compared to 68.3% for the OpenAI Assistant.
What’s Next for Jill
The team plans to expand testing across introductory computing courses at Georgia Tech and technical colleges. They also aim to explore Jill Watson’s potential to improve cognitive presence, particularly critical thinking and concept application. Although quantitative results for cognitive presence are still inconclusive, anecdotal feedback from students has been positive. One OMSCS student wrote:
“The Jill Watson upgrade is a leap forward. With persistent prompting I managed to coax it from explicit knowledge to tacit knowledge. Kudos to the team!”
The researchers also expect Jill to reduce instructional workload by handling routine questions and enabling more focus on complex student needs.
Additionally, AI-ALOE is collaborating with the publishing company John Wiley & Sons, Inc., to develop a Jill Watson virtual teaching assistant for one of their courses, with the instructor and university chosen by Wiley. If successful, this initiative could potentially scale to hundreds or even thousands of classes across the country and around the world, transforming the way students interact with course content and receive support.
A Georgia Tech-Led Collaboration
The Jill Watson project is supported by Georgia Tech, the U.S. National Science Foundation’s AI-ALOE Institute (Grants #2112523 and #2247790), and the Bill & Melinda Gates Foundation.
Core team members are Saptrishi Basu, Jihou Chen, Jake Finnegan, Isaac Lo, JunSoo Park, Ahamad Shapiro, and Karan Taneja, under the direction of Professor Ashok Goel and Sandeep Kakar. The team works under Beyond Question LLC, an AI-based educational technology startup.
News Contact
Breon Martin
Aug. 19, 2025
When Hurricane Katrina struck in 2005, it wasn’t just another storm — it was one of the deadliest hurricanes in U.S. history. Entire neighborhoods disappeared, families were scattered, and lives were split into “before” and “after.” Nearly 20 years later, the haunting images of submerged rooftops and boat rescues remain vivid.
The Surge That Shattered New Orleans
On Aug. 29, 2005, early reports claimed New Orleans had “dodged the bullet.” But offshore winds funneled water into the city’s canals, triggering multiple catastrophic levee failures. The Lower Ninth Ward, where most fatalities occurred, was devastated as many residents, misled by comparisons to Hurricane Camille, chose not to evacuate.
“Katrina’s storm surge was exceptional,” says Hermann Fritz, a civil engineering professor at Georgia Tech. “In some areas, we saw water levels over 27 feet — that’s like a three-story building.”
While much attention focused on New Orleans’ levee failures, Fritz points out that the surge’s sheer height and energy would have overwhelmed even more robust defenses in some areas. “Katrina showed us that nature can produce forces beyond our engineering designs,” he says.
A Disaster of Inequality
The storm didn’t strike evenly; it exposed and deepened existing social and economic inequalities. “The disaster hit lower-income Black neighborhoods hardest,” says Allen Hyde, associate professor of history and sociology. He notes how years of segregation, disinvestment, and discriminatory housing policies left these communities uniquely vulnerable. Hyde continues, “Many homes were in low-lying, flood-prone areas, and residents often lacked access to reliable transportation, making evacuation difficult or impossible.”
Georgia’s Changing Landscape: Migration and Impact
Katrina displaced hundreds of thousands and claimed a staggering toll of more than 1,800 lives. Georgia quickly absorbed many evacuees, reshaping its demographics and infrastructure. “Hurricane Katrina led to one of the largest displacements of people due to a natural disaster,” says Shatakshee Dhongde, a professor of economics. “It changed the demographics of Georgia in measurable ways, from school enrollment to the labor market.”
The U.S. Census Bureau tracked this migration, noting spikes in Louisiana-born residents in metro Atlanta. Local school districts enrolled hundreds of new students almost overnight, while housing markets saw increased demand from families looking for permanent homes. The arrival of so many displaced residents didn’t just strain schools and housing — it reshaped the state’s economy. Dhongde notes that evacuees often brought new skills, business ideas, and networks. At the same time, the state and local governments faced the financial burden of expanding social services, healthcare, and housing assistance.
Dhongde adds, “The impact of a disaster doesn’t stop at the water’s edge. It travels with people, and those effects can last for years.” While the influx strained services, it also enriched Georgia’s cultural and economic fabric.
Hyde notes, “Gentrification made many neighborhoods unaffordable for former residents,” and adds that many Black evacuees didn’t return to New Orleans due to economic barriers and post-Katrina gentrification. Cultural communities scattered across cities like Atlanta, Houston, and Baton Rouge.
Lessons the Levees Still Teach
For Fritz, Katrina remains a wake-up call for coastal preparedness. “We can’t stop hurricanes,” he says, “but we can improve how we design and maintain our defenses, and how we evacuate people before it’s too late.” He warns that climate change, with its potential to intensify storms, makes those improvements even more urgent.
Dhongde sees a parallel need for social and economic planning. “Disaster preparedness isn’t just about sandbags and levees,” she says. “It’s also about ensuring the communities receiving evacuees have the resources and support systems to integrate them successfully.”
Finally, Hyde stresses the importance of engaging youth and communities in preparedness efforts. “Youth advocacy programs, like those we’re piloting in Georgia, empower young people in marginalized neighborhoods with knowledge and agency to build long-term resilience. Disaster planning must be a community effort, inclusive and forward-looking.”
Jul. 25, 2025
As Georgia positions itself as a hub for digital infrastructure, communities across the state are facing a growing challenge: how to welcome the economic benefits of data centers while managing their significant environmental and infrastructure impacts. These facilities, essential for powering artificial intelligence, cloud computing, and everyday internet use, are also among the most resource-intensive buildings in the modern economy.
While companies like Microsoft and Google have pledged to reach net-zero emissions, experts say more transparency and smarter policy are needed to ensure that data center development aligns with community and environmental priorities. That means ensuring adequate energy infrastructure, investing in renewables, training local workers, and mitigating water and carbon impacts through innovation.
A New Kind of Energy Crunch
The rapid rise of AI is fueling explosive demand for computing power — and in turn, energy.
“The proliferation of AI workloads has significantly increased data center energy requirements,” says Divya Mahajan, assistant professor in the School of Electrical and Computer Engineering. “Large-scale AI training, especially for language models, leads to elevated and sustained power draw, often nearing the thermal and power envelopes of graphics processing unit (GPU) systems.”
This sustained demand is particularly challenging in hot, humid regions like Georgia, where cooling systems must work harder. “Training these models can cause thermal instability that directly affects cooling efficiency and power provisioning,” Mahajan explains. “This amplifies reliance on external cooling infrastructure, increasing water consumption and grid strain.”
Environmental and Economic Pressure
“Each new data center could lead to greenhouse gas emissions equivalent to a small town,” says Marilyn Brown, Regents’ and Brook Byers Professor of Sustainable Systems in the School of Public Policy. “In Georgia, the growth of data centers has already led to plans for new gas plants and the extension of aging coal plants.”
This growth carries an environmental cost in both electricity and water: a single large data center can consume up to 5 million gallons of water per day.
Rising demand has a price. “It’s simple supply and demand,” says Ahmed Saeed, assistant professor at the School of Computer Science. “As overall power demand increases, if supply doesn’t keep up, costs will rise and the most affected will be lower-income consumers.”
Still, experts are optimistic that policy and technology can help mitigate these impacts.
Innovation May Hold the Key
Despite the challenges, experts see opportunities for innovation. “Technologies like direct-to-chip cooling and liquid cooling are promising,” says Mahajan. “But they’re not yet widespread.”
Saeed notes that some companies are experimenting with radical ideas, like Microsoft’s underwater Project Natick or locating data centers in Nordic countries where ambient air can be used for cooling. These approaches challenge conventional infrastructure norms by placing servers underwater or in remote, cold regions. “These are exciting, but we need scalable solutions that work in places like Georgia,” he emphasizes.
What Communities Should Ask For
As communities compete to attract data centers, experts say they should push for commitments that go beyond job creation.
“Communities should ensure that their power infrastructure can handle the added load without compromising resilience or increasing costs,” Saeed advises. “They should also require that data centers use renewable energy or invest in local clean energy projects.”
Training and hiring local workers is another key benefit communities can demand. “Deployment and maintenance of data centers require skilled workers,” Saeed adds. “Operators should invest in technical training and hire locally.”
Policy Can Make the Difference
Stronger policy frameworks can ensure growth doesn’t come at the expense of Georgia’s most vulnerable communities. “We need more transparency from companies about their energy and water use,” says Brown. “And we need policies that prevent the costs of supporting large consumers from being passed on to residential ratepayers.”
Some states are already taking action. Texas passed a bill to give regulators more control over large power consumers. In Georgia, a bill that would have paused tax breaks for data centers until their community impact was assessed was vetoed — but experts say the conversation is far from over.
“Data centers are here to stay,” says Saeed. “The question is whether we can make them sustainable — before their footprint becomes too large to manage.”
Jul. 16, 2025
The National Science Foundation (NSF) has awarded Georgia Tech and its partners $20 million to build a powerful new supercomputer that will use artificial intelligence (AI) to accelerate scientific breakthroughs.
Called Nexus, the system will be one of the most advanced AI-focused research tools in the U.S. Nexus will help scientists tackle urgent challenges such as developing new medicines, advancing clean energy, understanding how the brain works, and driving manufacturing innovations.
“Georgia Tech is proud to be one of the nation’s leading sources of the AI talent and technologies that are powering a revolution in our economy,” said Ángel Cabrera, president of Georgia Tech. “It’s fitting we’ve been selected to host this new supercomputer, which will support a new wave of AI-centered innovation across the nation. We’re grateful to the NSF, and we are excited to get to work.”
Designed from the ground up for AI, Nexus will give researchers across the country access to advanced computing tools through a simple, user-friendly interface. It will support work in many fields, including climate science, health, aerospace, and robotics.
“The Nexus system's novel approach combining support for persistent scientific services with more traditional high-performance computing will enable new science and AI workflows that will accelerate the time to scientific discovery,” said Katie Antypas, National Science Foundation director of the Office of Advanced Cyberinfrastructure. “We look forward to adding Nexus to NSF's portfolio of advanced computing capabilities for the research community.”
Nexus Supercomputer — In Simple Terms
- Built for the future of science: Nexus is designed to power the most demanding AI research — from curing diseases, to understanding how the brain works, to engineering quantum materials.
- Blazing fast: Nexus can crank out over 400 quadrillion operations per second — the equivalent of everyone in the world continuously performing 50 million calculations every second.
- Massive brain plus memory: Nexus combines the power of AI and high-performance computing with 330 trillion bytes of memory to handle complex problems and giant datasets.
- Storage: Nexus will feature 10 quadrillion bytes of flash storage, equivalent to about 10 billion reams of paper. Stacked, that’s a column reaching 500,000 km high — enough to stretch from Earth to the moon and a third of the way back.
- Supercharged connections: Nexus will have lightning-fast connections to move data almost instantaneously, so researchers do not waste time waiting.
- Open to U.S. researchers: Scientists from any U.S. institution can apply to use Nexus.
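The figures in the list above can be sanity-checked with quick back-of-envelope arithmetic. The sketch below is illustrative only: the world population, bytes-per-page, and ream-thickness values are rough assumptions, not Nexus specifications.

```python
# Back-of-envelope check of the Nexus figures (illustrative assumptions:
# ~8 billion people, ~2,000 bytes of text per page, a 500-sheet ream ~5 cm thick).
ops_per_sec = 400e15          # 400 quadrillion operations per second
world_pop = 8e9               # rough world population
per_person = ops_per_sec / world_pop   # calculations per person per second
print(f"{per_person:.0f} ops per person per second")   # ~50 million

storage_bytes = 10e15         # 10 quadrillion bytes of flash storage
bytes_per_page = 2000         # assumed plain-text content of one page
pages = storage_bytes / bytes_per_page
reams = pages / 500           # 500 sheets per ream
print(f"{reams:.1e} reams")   # ~10 billion

stack_km = reams * 0.05 / 1000   # 5 cm per ream, converted to kilometers
print(f"{stack_km:,.0f} km stacked")   # ~500,000 km
```

Each result lands on the figure quoted in the article, so the comparisons hold up under these assumptions.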
Why Now?
AI is rapidly changing how scientific research is done. Researchers use AI to analyze massive datasets, model complex systems, and test ideas faster than ever before. But these tools require powerful computing resources that — until now — have been inaccessible to many institutions.
This is where Nexus comes in. It will make state-of-the-art AI infrastructure available to scientists all across the country, not just those at top tech hubs.
“This supercomputer will help level the playing field,” said Suresh Marru, principal investigator of the Nexus project and director of Georgia Tech’s new Center for AI in Science and Engineering (ARTISAN). “It’s designed to make powerful AI tools easier to use and available to more researchers in more places.”
Srinivas Aluru, Regents’ Professor and senior associate dean in the College of Computing, said, “With Nexus, Georgia Tech joins the league of academic supercomputing centers. This is the culmination of years of planning, including building the state-of-the-art CODA data center and Nexus’ precursor supercomputer project, HIVE.”
Like Nexus, HIVE was funded by the NSF; both systems are supported by a partnership between Georgia Tech’s research and information technology units.
A National Collaboration
Georgia Tech is building Nexus in partnership with the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign, which runs several of the country’s top academic supercomputers. The two institutions will link their systems through a new high-speed network, creating a national research infrastructure.
“Nexus is more than a supercomputer — it’s a symbol of what’s possible when leading institutions work together to advance science,” said Charles Isbell, chancellor of the University of Illinois and former dean of Georgia Tech’s College of Computing. “I'm proud that my two academic homes have partnered on this project that will move science, and society, forward.”
What’s Next
Georgia Tech will begin building Nexus this year, with its expected completion in spring 2026. Once Nexus is finished, researchers can apply for access through an NSF review process. Georgia Tech will manage the system, provide support, and reserve up to 10% of its capacity for its own campus research.
“This is a big step for Georgia Tech and for the scientific community,” said Vivek Sarkar, the John P. Imlay Dean of Computing. “Nexus will help researchers make faster progress on today’s toughest problems — and open the door to discoveries we haven’t even imagined yet.”
News Contact
Siobhan Rodriguez
Senior Media Relations Representative
Institute Communications
Jul. 10, 2025
Giga, a global initiative focused on expanding internet connectivity to schools, launched its new tech and innovation event series “Giga Talks” on June 19 with a keynote address from Pascal Van Hentenryck, a leading artificial intelligence expert from the Georgia Institute of Technology.
Van Hentenryck serves as the A. Russell Chandler III Chair and Professor in Georgia Tech’s H. Milton Stewart School of Industrial and Systems Engineering. He is also the director of Tech AI, Georgia Tech’s new strategic hub for artificial intelligence, and the U.S. National Science Foundation AI Institute for Advances in Optimization (AI4OPT), which operates under Tech AI’s umbrella.
In his talk, “AI for Social Good,” Van Hentenryck showcased how AI technologies can drive impact across key sectors—including mobility, education, healthcare, disaster response, and e-commerce. Drawing from ongoing research and real-world deployments, he emphasized the critical role of human-centered design and interdisciplinary collaboration in developing AI that benefits society at large.
“AI has tremendous potential to serve the public good when guided by ethics, equity, and purpose-driven innovation,” said Van Hentenryck. “At Georgia Tech, our work aims to harness this potential to create meaningful change in people’s lives.”
The event marked the debut of Giga Talks, a new speaker series designed to convene global thought leaders, engineers, and policymakers around timely issues in technology and innovation. The initiative supports Giga’s broader mission to connect every school in the world to the internet and unlock digital opportunities for children everywhere.
A video recording of Van Hentenryck’s talk is available here.
News Contact
Breon Martin
AI Marketing Communications Manager