Sep. 11, 2025
Graphic Representation of networked system: Adobe Stock


A recently awarded $20 million NSF Nexus Supercomputer grant to Georgia Tech and partner institutions promises to bring incredible computing power to the CODA building. But what makes this supercomputer different, and how will it impact research in labs on campus, across disciplinary units, and across institutions? 

Purpose Built for AI Discovery

Nexus is Georgia Tech’s next-generation supercomputer, replacing the HIVE. Most operational high-performance computing (HPC) systems used for research were designed before the explosion in machine learning and AI. This revolution has already shown successes for scientific research and data analysis in many domains, but the compute power, complex connectivity, and data storage these systems need have limited the academic research community’s access to them. The Nexus design process retained a robust HPC system as a base while integrating artificial intelligence, machine learning, and large-scale data science analysis from the ground up.

Expert Support for Faculty and Researchers 

The Institute for Data Engineering and Science (IDEaS) and the College of Computing house the Center for Artificial Intelligence in Science and Engineering (ARTISAN) group. This team has collective experience working with national computational, cloud, commercial, and institutional resources, and decades of experience with scientific tools that assist both teaching and research faculty. Nexus is the next logical step, bringing together everything they’ve learned to build a national resource optimized for the future of AI-driven science.

Principal Research Scientist for the ARTISAN team, Suresh Marru, highlighted the need for this new resource, “AI is a core part of the Nexus vision. Today, researchers often spend more time setting up experiments, managing data, or figuring out how to run jobs on remote clusters than doing science. With Nexus, we’re flipping that script. By embedding AI into the platform, we help automate routine tasks, suggest optimal ways to run simulations, and even assist in generating input or analyzing results. This means researchers can move faster from question to insight. Instead of wrestling with infrastructure, they can focus on discovery.”

An Accessible AI Resource for GT & US Scientific Research

Ninety percent of Nexus capacity will be made available to the national research community through the NSF Advanced Computing Systems & Services (ACSS) program. Researchers from across the country, at universities, labs, and institutions of all sizes, will have access to this next-generation AI-ready supercomputer. For Georgia Tech research faculty and staff, the new system has multiple benefits:

  • 10% of the time on the machine will be available for use by Georgia Tech researchers
  • Nexus will allow GT researchers a chance to try out the latest hardware for AI computing
  • Thanks to cyberinfrastructure tools from the ARTISAN group, Nexus will be easier to access than previous NSF supercomputers


Interim Executive Director of IDEaS and Regents' Professor David Sherrill notes, "Nexus brings Georgia Tech's leadership in research computing to a whole new level. It will be the first NSF Category I Supercomputer hosted on Georgia Tech's campus. The Nexus hardware and software will boost research in the foundations of AI, and applications of AI in science and engineering."

Sep. 03, 2025

These ‘chillers’ on the roof of a data center in Germany, seen from above, work to cool the equipment inside the building. AP Photo/Michael Probst

Artificial intelligence is growing fast, and so is the number of computers that power it. Behind the scenes, this rapid growth is putting a huge strain on the data centers that run AI models. These facilities are using more energy than ever.

AI models are getting larger and more complex. Today’s most advanced systems have billions of parameters, the numerical values derived from training data, and run across thousands of computer chips. To keep up, companies have responded by adding more hardware, more chips, more memory and more powerful networks. This brute-force approach has helped AI make big leaps, but it’s also created a new challenge: Data centers are becoming energy-hungry giants.

Some tech companies are responding by looking to power data centers on their own with fossil fuel and nuclear power plants. AI energy demand has also spurred efforts to make more efficient computer chips.

I’m a computer engineer and a professor at Georgia Tech who specializes in high-performance computing. I see another path to curbing AI’s energy appetite: Make data centers more resource aware and efficient.

Energy and Heat

Modern AI data centers can use as much electricity as a small city. And it’s not just the computing that eats up power. Memory and cooling systems are major contributors, too. As AI models grow, they need more storage and faster access to data, which generates more heat. Also, as the chips become more powerful, removing heat becomes a central challenge.


Data centers house thousands of interconnected computers. Alberto Ortega/Europa Press via Getty Images

Cooling isn’t just a technical detail; it’s a major part of the energy bill. Traditional cooling is done with specialized air conditioning systems that remove heat from server racks. New methods like liquid cooling are helping, but they also require careful planning and water management. Without smarter solutions, the energy requirements and costs of AI could become unsustainable.

Even with all this advanced equipment, many data centers aren’t running efficiently. That’s because different parts of the system don’t always talk to each other. For example, scheduling software might not know that a chip is overheating or that a network connection is clogged. As a result, some servers sit idle while others struggle to keep up. This lack of coordination can lead to wasted energy and underused resources.

A Smarter Way Forward

Addressing this challenge requires rethinking how to design and manage the systems that support AI. That means moving away from brute-force scaling and toward smarter, more specialized infrastructure.

Here are three key ideas:

Address variability in hardware. Not all chips are the same. Even within the same generation, chips vary in how fast they operate and how much heat they can tolerate, leading to heterogeneity in both performance and energy efficiency. Computer systems in data centers should recognize differences among chips in performance, heat tolerance and energy use, and adjust accordingly.

Adapt to changing conditions. AI workloads vary over time. For instance, thermal hotspots on chips can trigger the chips to slow down, fluctuating grid supply can cap the peak power that centers can draw, and bursts of data between chips can create congestion in the network that connects them. Systems should be designed to respond in real time to things like temperature, power availability and data traffic.

Break down silos. Engineers who design chips, software and data centers should work together. When these teams collaborate, they can find new ways to save energy and improve performance. To that end, my colleagues, students and I at Georgia Tech’s AI Makerspace, a high-performance AI data center, are exploring these challenges hands-on. We’re working across disciplines, from hardware to software to energy systems, to build and test AI systems that are efficient, scalable and sustainable.
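The first two ideas can be made concrete with a toy example: a scheduler that weighs each chip's measured throughput against its thermal headroom instead of treating every chip as identical. Everything here, from the field names to the numbers, is a hypothetical illustration, not any real data center's scheduling code.

```python
from dataclasses import dataclass

@dataclass
class Chip:
    name: str
    throughput: float   # jobs/hour this particular chip actually sustains
    temp_c: float       # current measured temperature
    max_temp_c: float   # temperature at which the chip throttles

def score(chip: Chip) -> float:
    """Prefer fast chips, but discount those near their thermal limit."""
    headroom = max(0.0, chip.max_temp_c - chip.temp_c)
    return chip.throughput * (headroom / chip.max_temp_c)

def pick_chip(chips: list[Chip]) -> Chip:
    """Assign the next job to the chip with the best speed/heat balance."""
    return max(chips, key=score)

chips = [
    Chip("gpu-0", throughput=100.0, temp_c=88.0, max_temp_c=90.0),  # fast but nearly throttling
    Chip("gpu-1", throughput=80.0, temp_c=45.0, max_temp_c=90.0),   # slower but cool
]
print(pick_chip(chips).name)  # gpu-1: the cooler chip wins despite lower raw speed
```

Re-running this selection as sensor readings change is one simple way a system could respond in real time to temperature, as the second idea suggests.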

Scaling With Intelligence

AI has the potential to transform science, medicine, education and more, but risks hitting limits on performance, energy and cost. The future of AI depends not only on better models, but also on better infrastructure.

To keep AI growing in a way that benefits society, I believe it’s important to shift from scaling by force to scaling with intelligence.

 

This article is republished from The Conversation under a Creative Commons license. Read the original article.

News Contact

Author:

, assistant professor of Computer Engineering, Georgia Institute of Technology

Media Contact:

Shelley Wunder-Smith
shelley.wunder-smith@research.gatech.edu

Sep. 02, 2025

An illustration representing a doctor working with an AI-powered health device.

In the morning, before you even open your eyes, your wearable device has already checked your vitals. By the time you brush your teeth, it has scanned your sleep patterns, flagged a slight irregularity, and adjusted your health plan. As you take your first sip of coffee, it’s already predicted your risks for the week ahead.

Georgia Tech researchers warn that this version of AI healthcare imagines a patient who is "affluent, able-bodied, tech-savvy, and always available." Those who don’t fit that mold, they argue, risk becoming invisible in the healthcare system.

The Ideal Future

In their study, published in the Proceedings of the ACM Conference on Human Factors in Computing Systems, the researchers analyzed 21 AI-driven health tools, ranging from fertility apps and wearable devices to diagnostic platforms and chatbots. They used sociological theory to understand the vision of the future these tools promote — and the patients they leave out.

“These systems envision care that is seamless, automatic, and always on,” said Catherine Wieczorek, a Ph.D. student in human-centered computing in the School of Interactive Computing and lead author of the study. “But they also flatten the messy realities of illness, disability, and socioeconomic complexity.”

Four Futures, One Narrow Lens

During their analysis, the researchers discovered four recurring narratives in AI-powered healthcare:

  1. Care that never sleeps. Devices track your heart rate, glucose levels, and fertility signals — all in real time. You are always being watched, because that’s framed as “care.”
  2. Efficiency as empathy. AI is faster, more objective, and more accurate. Unlike humans, it doesn’t get tired or biased. This pitch downplays the value of human judgment and connection.
  3. Prevention as perfection. A world where illness is avoided through early detection if you have the right sensors, the right app, and the right lifestyle.
  4. The optimized body. You’re not just healthy, you’re high-performing. The tech isn’t just treating you; it’s upgrading you.

“It’s like healthcare is becoming a productivity tool,” Wieczorek said. “You’re not just a patient anymore. You’re a project.”

Not Just a Tool, But a Teammate

This study also points to a critical transformation in which AI is no longer just a diagnostic tool; it’s a decision-maker. Described by the researchers as “both an agent and a gatekeeper,” AI now plays an active role in how care is delivered.

In some cases, AI systems are even named and personified, like Chloe, an IVF decision-support tool. “Chloe equips clinicians with the power of AI to work better and faster,” its promotional materials state. By framing AI this way — as a collaborator rather than just software — these systems subtly redefine who, or what, gets to be treated.

“When you give AI names, personalities, or decision-making roles, you’re doing more than programming. You’re shifting accountability and agency. That has consequences,” said Shaowen Bardzell, chair of Georgia Tech’s School of Interactive Computing and co-author of the study.

“It blurs the boundaries,” Wieczorek noted. “When AI takes on these roles, it’s reshaping how decisions are made and who holds authority in care.”

Calculated Care

Many AI tools promise early detection, hyper-efficiency, and optimized outcomes. But the study found that these systems risk sidelining patients with chronic illness, disabilities, or complex medical needs — the very people who rely most on healthcare.

“These technologies are selling worldviews,” Wieczorek explained. “They’re quietly defining who healthcare is for, and who it isn’t.”

By prioritizing predictive algorithms and automation, AI can strip away the context and humanity that real-world care requires. 

“Algorithms don’t see nuance. It’s difficult for a model to understand how a patient might be juggling multiple diagnoses or understand what it means to manage illness, while also navigating other important concerns like financial insecurity or caregiving. They are predetermined inputs and outputs,” Wieczorek said. “While these systems claim to streamline care, they are also encoding assumptions about who matters and how care should work. And when those assumptions go unchallenged, the most vulnerable patients are often the ones left out.” 

AI for ALL

The researchers argue that future AI systems must be developed in collaboration with those who don’t fit in the vision of a “perfect patient.” 

“Innovation without ethics risks reinforcing existing inequalities. It’s about better tech and better outcomes for real people,” Bardzell said. “We’re not anti-innovation. But technological progress isn’t just about what we can do. It’s about what we should do — and for whom.”

Wieczorek and Bardzell aren’t trying to stop AI from entering healthcare. They’re asking AI developers to understand who they’re really serving.

 

Funding:
This work was supported by the National Science Foundation (Grant #2418059). 

 

News Contact

Michelle Azriel, Sr. Writer-Editor

Sep. 02, 2025
Georgia Tech’s Jill Watson Outperforms ChatGPT in Real Classrooms

A new version of Georgia Tech’s virtual teaching assistant, Jill Watson, has demonstrated that artificial intelligence can significantly improve the online classroom experience. Developed by the Design Intelligence Laboratory (DILab) and the U.S. National Science Foundation AI Institute for Adult Learning and Online Education (AI-ALOE), the latest version of Jill Watson integrates OpenAI’s ChatGPT and is outperforming OpenAI’s own assistant in real-world educational settings.

Jill Watson not only answers student questions with high accuracy; it also improves teaching presence and correlates with better academic performance. Researchers believe this is the first documented instance of a chatbot enhancing teaching presence in online learning for adult students.

How Jill Watson Shaped Intelligent Teaching Assistants

First introduced in 2016 using IBM’s Watson platform, Jill Watson was the first AI-powered teaching assistant deployed in real classes. It began by responding to student questions on discussion forums like Piazza using course syllabi and a curated knowledge base of past Q&As. Widely covered by major media outlets including The Chronicle of Higher Education, The Wall Street Journal, and The New York Times, the original Jill pioneered new territory in AI-supported learning.

Subsequent iterations addressed early biases in the training data and transitioned to more flexible platforms like Google’s BERT in 2019, allowing Jill to work across learning management systems such as EdStem and Canvas. With the rise of generative AI, the latest version now uses ChatGPT to engage in extended, context-rich dialogue with students using information drawn directly from courseware, textbooks, video transcripts, and more.

Future of Personalized, AI-Powered Learning

Designed around the Community of Inquiry (CoI) framework, Jill Watson aims to enhance “teaching presence,” one of three key factors in effective online learning, alongside cognitive and social presence. Teaching presence includes both the design of course materials and facilitation of instruction. Jill supports this by providing accurate, personalized answers while reinforcing the structure and goals of the course.

The system architecture includes a preprocessed knowledge base, a MongoDB-powered memory for storing conversation history, and a pipeline that classifies questions, retrieves contextually relevant content, and moderates responses. Jill is built to avoid generating harmful content and only responds when sufficient verified course material is available.

Field-Tested in Georgia and Beyond

The first AI-powered teaching assistant was developed for Georgia Tech’s Online Master of Science in Computer Science (OMSCS) program. By fall 2023, Jill Watson was deployed in Georgia Tech’s OMSCS artificial intelligence course, serving more than 600 students, as well as in an English course at Wiregrass Georgia Technical College, part of the Technical College System of Georgia (TCSG).

A controlled A/B experiment in the OMSCS course allowed researchers to compare outcomes between students with and without access to Jill Watson, even though all students could use ChatGPT. The findings are striking:

  • Jill Watson’s accuracy on synthetic test sets ranged from 75% to 97%, depending on the content source. It consistently outperformed OpenAI’s Assistant, which scored around 30%.
  • Students with access to Jill Watson showed stronger perceptions of teaching presence, particularly in course design and organization, as well as higher social presence.
  • Academic performance also improved slightly: students with Jill saw more A grades (66% vs. 62%) and fewer C grades (3% vs. 7%).

A Smarter, Safer Chatbot

While Jill Watson uses ChatGPT for natural language generation, it restricts outputs to validated course material and verifies each response using textual entailment. According to a study by Taneja et al. (2024), Jill not only delivers more accurate answers than OpenAI’s Assistant but also avoids producing confusing or harmful content at significantly lower rates.

Jill Watson answers correctly 78.7% of the time, with only 2.7% of its failures categorized as harmful and 54.0% as confusing. In contrast, OpenAI’s Assistant demonstrates a much lower accuracy of 30.7%, with harmful failures occurring 14.4% of the time and confusing failures rising to 69.2%. Jill Watson also has a lower retrieval failure rate of 43.2%, compared to 68.3% for the OpenAI Assistant.

What’s Next for Jill

The team plans to expand testing across introductory computing courses at Georgia Tech and technical colleges. They also aim to explore Jill Watson’s potential to improve cognitive presence, particularly critical thinking and concept application. Although quantitative results for cognitive presence are still inconclusive, anecdotal feedback from students has been positive. One OMSCS student wrote:

“The Jill Watson upgrade is a leap forward. With persistent prompting I managed to coax it from explicit knowledge to tacit knowledge. Kudos to the team!”

The researchers also expect Jill to reduce instructional workload by handling routine questions and enabling more focus on complex student needs.

Additionally, AI-ALOE is collaborating with the publishing company John Wiley & Sons, Inc., to develop a Jill Watson virtual teaching assistant for one of their courses, with the instructor and university chosen by Wiley. If successful, this initiative could potentially scale to hundreds or even thousands of classes across the country and around the world, transforming the way students interact with course content and receive support.

A Georgia Tech-Led Collaboration

The Jill Watson project is supported by Georgia Tech, the US National Science Foundation’s AI-ALOE Institute (Grants #2112523 and #2247790), and the Bill & Melinda Gates Foundation.

Core team members are Saptrishi Basu, Jihou Chen, Jake Finnegan, Isaac Lo, JunSoo Park, Ahamad Shapiro and Karan Taneja, under the direction of professor Ashok Goel and Sandeep Kakar. The team works under Beyond Question LLC, an AI-based educational technology startup.

News Contact

Breon Martin

 

Aug. 20, 2025
Daniel Yue, assistant professor of IT Management


Daniel Yue, assistant professor of IT Management at the Scheller College of Business, has been awarded the prestigious Best Dissertation Award by the Technology and Innovation Management Division of the Academy of Management. The recognition celebrates the most impactful doctoral research in the field of business and innovation.

Yue’s dissertation, developed during his Ph.D. at Harvard Business School, explores a paradox at the heart of the AI industry: why do firms openly share their innovations, like scientific knowledge, software, and models, despite the apparent lack of direct financial return? His work sheds light on the strategic and economic mechanisms that drive this openness, offering new frameworks for understanding how firms contribute to and benefit from shared technological progress.

“We typically think of firms as trying to capture value from their innovations,” Yue explained. “But in AI, we see companies freely publishing research and releasing open-source software. My dissertation investigates why this happens and what firms gain from it.”

Read More

News Contact

Kristin Lowe (She/Her)
Content Strategist
Georgia Institute of Technology | Scheller College of Business
kristin.lowe@scheller.gatech.edu

Aug. 25, 2025
Michael Galarnyk pictured next to Veer Kejriwal, Agam Shah, and Sudheer Chava

Michael Galarnyk, Ph.D. Machine Learning ’28; Veer Kejriwal, B.S. Computer Science ’25; Agam Shah, Ph.D. Machine Learning ’26; and Sudheer Chava, Alton M. Costley Chair and professor of Finance at Georgia Tech

Georgia Tech researchers have designed the first benchmark that tests how well existing AI tools can interpret advice from YouTube financial influencers, also known as finfluencers.

Lead author Michael Galarnyk, Ph.D. Machine Learning ’28, was joined by co-lead authors Veer Kejriwal, B.S. Computer Science ’25, and Agam Shah, Ph.D. Machine Learning ’26, along with co-authors Yash Bhardwaj, École Polytechnique, M.S. Trustworthy and Responsible AI ’27; Nicholas Meyer, B.S. Electrical and Computer Engineering ’22 and Quantitative and Computational Finance ’24; Anand Krishnan, Stanford University, B.S. Computer Science ’27; and Sudheer Chava, Alton M. Costley Chair and professor of Finance at Georgia Tech.

Aptly named VideoConviction, the multimodal benchmark included hundreds of video clips. Experts labeled each clip with the influencer’s recommendation (buy, sell, or hold) and how strongly the influencer seemed to believe in their advice, based on tone, delivery, and facial expressions. The goal? To see how accurately AI can pick up on both the message and the conviction behind it.
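A record in a benchmark like this might look something like the following sketch; the field names and the exact-match scoring are assumptions for illustration, not the published VideoConviction schema:

```python
from dataclasses import dataclass

@dataclass
class ClipLabel:
    clip_id: str
    recommendation: str  # expert label: "buy", "sell", or "hold"
    conviction: str      # expert-judged strength of belief, e.g. "low" or "high"

def exact_match_accuracy(predictions: list[ClipLabel], gold: list[ClipLabel]) -> float:
    """Fraction of clips where a model matched both expert labels."""
    hits = sum(p.recommendation == g.recommendation and p.conviction == g.conviction
               for p, g in zip(predictions, gold))
    return hits / len(gold)

gold = [ClipLabel("clip-1", "buy", "high"), ClipLabel("clip-2", "hold", "low")]
preds = [ClipLabel("clip-1", "buy", "high"), ClipLabel("clip-2", "sell", "low")]
print(exact_match_accuracy(preds, gold))  # 0.5: one of two clips fully matched
```

Requiring the model to get both the recommendation and the conviction right is what makes the harder tasks hard: a model can parrot the surface advice while missing how strongly it was delivered.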

“Our work shows that financial reasoning remains a challenge for even the most advanced models,” said Michael Galarnyk, lead author. “Multimodal inputs bring some improvement, but performance often breaks down on harder tasks that require distinguishing between casual discussion and meaningful analysis. Understanding where these models fail is a first step toward building systems that can reason more reliably in high stakes domains.”

Read More

News Contact

Kristin Lowe (She/Her)
Content Strategist
Georgia Institute of Technology | Scheller College of Business
kristin.lowe@scheller.gatech.edu

Aug. 21, 2025
Dean Gaudelli speaks to the College of Lifetime Learning in his first town hall.

In the first town hall with its new Dean, College of Lifetime Learning colleagues came together to explore a central question: what does it mean to learn, and how can that spirit shape the way we work?

Bill Gaudelli, Ed.D., joined the College Aug. 1 as the inaugural dean. He brings more than 35 years of experience as an educator, researcher, and academic administrator to this role. 

Rather than beginning with charts or plans, the Dean opened with two polls. The first asked: What did you learn? What did you notice about your learning? How did you feel before, during, and after? The second posed a broader challenge: What is a learning organization? Colleagues shared learning experiences that ran along a fairly common path: anticipation, uncertainty, frustration, and, ultimately, accomplishment. 

“Not one of you said I had no emotional response to the learning. Not a person. There was joy. There was a lot of laughter. And everyone had something to share because that is how fundamental learning is,” Dean Gaudelli observed. “And so, as a learning organization, we’ve got to think about how we meet the moment and the learner in a context that’s totally new. We’ve got to figure that out in a new space, using new tools, recognizing that the desire to learn is permanent in humans.”

With these shared experiences in mind, Gaudelli introduced the concept of a learning organization, drawing from Peter Senge’s landmark work The Fifth Discipline. He outlined the five disciplines (personal mastery, mental models, shared vision, team learning, and systems thinking) and invited colleagues to see them not as abstract theory, but as a practical framework for how the College might operate.

Becoming a learning organization, Dean Gaudelli said, is not a label but a way of working: embracing curiosity, being adaptable, questioning assumptions, and understanding that the whole is stronger than its individual parts. “If we’re going to promote learning in the world, then we have to be learning ourselves,” he noted. That means committing to continuous improvement, viewing mistakes as opportunities, and aligning every role with a shared purpose.

This vision brings to life the College’s mission to support learning across the lifespan and positions the College to respond to a rapidly changing educational landscape. By building systems and culture that make learning continuous, collaborative, and transformative, Gaudelli sees an opportunity to lead not just in what the College teaches, but in how it works together.

Dr. Roslyn Martin, Director of Professional Education Programs for the College and GTPE, later reflected on the meeting. “It was powerful to reflect on the learning journey and experience the process organically to deepen our understanding,” she shared. “And I’m excited about this pivotal chapter for Georgia Tech, as the College creates more impactful learning experiences and pathways to transformative education for communities around the globe!”

In the months ahead, the College will begin crafting a new strategic plan rooted in these ideas. Gaudelli encouraged everyone to take an active role in shaping the future. His closing challenge: learn something new in the coming month, not for the skill alone but for the insight into how you learn. That awareness, he said, is the foundation for building a true learning organization.

Aug. 11, 2025
Team Atlanta stands on the dark DefCon stage during the convention's closing ceremony.

Team Atlanta, a group of Georgia Tech students, faculty, and alumni, achieved international fame on Friday when they won DARPA’s AI Cyber Challenge (AIxCC) and its $4 million grand prize.

AIxCC was a two-year long competition to create an artificial intelligence (AI) enabled cyber reasoning system capable of autonomously finding and patching vulnerabilities.

“This is a once in a generation competition organized by DARPA about how to utilize recent advancements in AI to use in security related tasks,” said Georgia Tech Professor Taesoo Kim.

“As hackers we started this competition as AI skeptics, but now we truly believe in the potential of adopting large language models (LLM) when solving security problems."

Team Atlanta’s submission was the Atlantis system, a fuzzer (automated software that finds vulnerabilities, or bugs) enhanced with several different types of LLMs.

While developing the system, Team Atlanta reported the heat put out by the GPU rack was hot enough to roast marshmallows.

The team was made up of hackers, engineers, and cybersecurity researchers. The Georgia Tech alumni on the team also represented their employers, which include KAIST, POSTECH, and Samsung Research. Kim is also the vice president of Samsung Research.

News Contact

John Popham

Communications Officer II at the School of Cybersecurity and Privacy

 

Aug. 13, 2025
Juba Ziani

Juba Ziani is on a mission to change how the world thinks about data in artificial intelligence. An assistant professor in Georgia Tech’s H. Milton Stewart School of Industrial and Systems Engineering (ISyE), Ziani has secured a $425,000 National Science Foundation (NSF) grant to explore how smart incentives can lead to higher-quality, more widely shared datasets. His work forms part of a $1 million NSF collaboration with Columbia University computer science professor Augustin Chaintreau and senior personnel Daniel Björkegren, aiming to challenge outdated ideas and shape a more reliable future for AI.

Artificial intelligence (AI) increasingly shapes critical decisions in everyday life, from who sees a job posting or qualifies for a loan, to who is granted bail in the criminal justice system. These systems rely on historical data to learn patterns and make predictions. For example, an applicant might be approved for a loan because an AI system recognizes that previous borrowers with similar credit histories successfully repaid. But when the underlying training data is incomplete or low-quality, the consequences can be serious, disproportionately affecting those from groups historically excluded from such opportunities.

Ziani's research will explore how the economic value of data, combined with the effects of data markets and network dynamics, can lead to incentives that naturally improve dataset robustness. By identifying the conditions under which the supposed efficiency trade-off disappears, Ziani and his collaborators hope to open the door to more reliable and equitable AI systems.

Traditionally, researchers have assumed that making AI-assisted decision-making more robust and representative comes at the expense of efficiency. This assumption treats training data as fixed and unchangeable, which can place limits on the potential of AI systems. But as large-scale data platforms grow and the exchange of data becomes more accessible, the conventional trade-off between robustness and efficiency may no longer apply.

“Our project demonstrates how carefully designing incentives—both for data producers and data buyers—can enhance the quality and robustness of datasets without compromising performance,” said Ziani. “This has the potential to fundamentally reshape the way AI systems are trained and how data is collected, shared, and valued.”

With this work, Ziani aims to advance both the theory and practice of AI and data economics, ensuring that as AI continues to transform society, it does so in a way that is fair, accurate, and trustworthy.

News Contact

Erin Whitlock Brown, Communications Manager II

Aug. 08, 2025
Graphic of a person using an assistive device thinking about how a robot could help them learn to ride a unicycle

Research into tailored assistive and rehabilitative devices has seen recent advancements, but the goal remains out of reach due to the sparsity of data on how humans learn complex balance tasks. To address this gap, an interdisciplinary team of faculty from Florida State University and Georgia Tech has been awarded approximately $798,000 by the NSF to launch a study to better understand human motor learning and to gain greater insight into human-robot interaction dynamics during the learning process.

Led by principal investigator Taylor Higgins, assistant professor in the FAMU-FSU Department of Mechanical Engineering, with co-PIs Shreyas Kousik, assistant professor in Georgia Tech’s George W. Woodruff School of Mechanical Engineering, and Brady DeCouto, assistant professor in the FSU Anne Spencer Daves College of Education, Health, and Human Sciences, the research will track participants as they learn to ride a unicycle to gain a better grasp on human motor learning in tasks requiring balance and complex movement in space. Although it might sound a bit odd, the fact that most people don’t know how to ride a unicycle, combined with the balance the task requires, means the data will cover the learning process from novice to skilled across the participant pool.

Using data acquired from human participants, the team will develop a robotic assistive unicycle that will be used to train the next pool of novice riders, gauging if, and how rapidly, human motor learning outcomes improve with assistance. Participants who engage with the robotic unicycle will also give valuable insight into developing effective human-robot collaboration strategies.

The fact that deciding to get on a unicycle requires a bit of bravery might not be great for the participants, but it’s great for the research team. The project will also allow exploration into the interconnection between anxiety and human motor learning to discover possible alleviation strategies, thus increasing the likelihood of positive outcomes for future patients and consumers of these devices.

 

Author
-Christa M. Ernst

This Article Refers to NSF Award # 2449160

News Contact

Christa M. Ernst
Research Communications Program Manager
Klaus Advance Computing Building 1120E | 266 Ferst Drive | Atlanta GA | 30332
Topic Expertise: Robotics | Data Sciences | Semiconductor Design & Fab
christa.ernst@research.gatech.edu