Mar. 31, 2026
While people use search engines, chatbots, and generative artificial intelligence tools every day, most don’t know how they work. This sets unrealistic expectations for AI and leads to misuse. It also slows progress toward building new AI applications.
Georgia Tech researchers are making AI easier to understand through their work on Transformer Explainer. The free, online tool shows non-experts how ChatGPT, Claude, and other large language models (LLMs) process language.
Transformer Explainer is easy to use and runs on any web browser. It quickly went viral after its debut, reaching 150,000 users in its first three months. More than 563,000 people worldwide have used the tool so far.
Global interest in Transformer Explainer continues as the team prepares to present the tool at the 2026 Conference on Human Factors in Computing Systems (CHI 2026). CHI, the world’s most prestigious conference on human-computer interaction, will take place in Barcelona, April 13-17.
“There are moments when LLMs can seem almost like a person with their own will and personality, and that misperception has real consequences. For example, there have been cases where teenagers have made poor decisions based on conversations with LLMs,” said Ph.D. student Aeree Cho.
“Understanding that an LLM is fundamentally a model that predicts the probability distribution of the next token helps users avoid taking its outputs as absolute. What you put in shapes what comes out, and that understanding helps people engage with AI more carefully and critically.”
A transformer is a neural network architecture that converts an input sequence of data into an output. Text, audio, and images can all be processed this way, which is why transformers are common in generative AI models. Transformers do this by learning context and tracking mathematical relationships between the components of a sequence.
Transformer Explainer demystifies how transformers work. The platform uses visualization and interaction to show, step by step, how text flows through a model and produces predictions.
Using this approach, Transformer Explainer impacts the AI landscape in four main ways:
- It counters hype and misconceptions surrounding AI by showing how transformers work.
- It improves AI literacy among users by removing technical barriers and lowering the barrier to entry for learning about AI.
- It expands AI education by helping instructors teach AI mechanisms without extensive setup or computing resources.
- It influences future development of AI tools and educational techniques by providing a blueprint for interpretable AI systems.
“When I first learned about transformers, I felt overwhelmed. A transformer model has many parts, each with its own complex math. Existing resources typically present all this information at once, making it difficult to see how everything fits together,” said Grace Kim, a dual B.S./M.S. computer science student.
“By leveraging interactive visualization, we use levels of abstraction to first show the big picture of the entire model. Then users click into individual parts to reveal the underlying details and math. This way, Transformer Explainer makes learning far less intimidating.”
Many users don’t know what transformers are or how they work. The Georgia Tech team found that people often misunderstand AI. Some attribute human-like qualities to it, such as creativity. Others even describe it as working like magic.
Furthermore, barriers make it hard for students interested in transformers to start learning. Tutorials tend to be too technical and overwhelm beginners with math and code. While visualization tools exist, these often target more advanced AI experts.
Transformer Explainer overcomes these obstacles through its interactive, user-focused platform. It runs a familiar GPT model directly in any web browser, requiring no installation or special hardware.
Users can enter their own text and watch the model predict the next word in real time. Sankey-style diagrams show how information moves through embeddings, attention heads, and transformer blocks.
The platform also lets users switch between high-level concepts and detailed math. By adjusting temperature settings, users can see how randomness affects predictions. This reveals how probabilities drive AI outputs, rather than creativity.
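The temperature mechanism the platform visualizes can be sketched in a few lines of Python. This is a generic illustration of temperature-scaled softmax sampling, not code from Transformer Explainer itself, and the logit values are made up for the example:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Turn raw model scores (logits) into a probability
    distribution and sample one token id from it."""
    rng = rng or np.random.default_rng()
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it (more random).
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

# Three hypothetical candidate tokens with raw scores 2.0, 1.0, 0.1:
_, cold = sample_next_token([2.0, 1.0, 0.1], temperature=0.2)
_, hot = sample_next_token([2.0, 1.0, 0.1], temperature=2.0)
# At low temperature, nearly all probability mass lands on the top
# token; at high temperature, the distribution moves toward uniform.
```

The output is always a draw from a probability distribution shaped by the input, which is the point the tool makes visible: no creativity, just weighted chance.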
“Millions of people around the world interact with transformer-driven AI. We believe that it is crucial to bridge the gap between day-to-day user experience and the models' technical reality, ensuring these tools are not misinterpreted as human-like or seen as sentient,” said Ph.D. student Alex Karpekov.
“Explaining the architecture helps users recognize that language generated by models is a product of computation, leading to a more grounded engagement with the technology.”
Cho, Karpekov, and Kim led the development of Transformer Explainer. Ph.D. students Alex Helbling, Seongmin Lee, Ben Hoover, and alumnus Zijie (Jay) Wang assisted on the project.
Professor Polo Chau supervised the group and their work. His lab focuses on data science, human-centered AI, and visualization for social good.
Acceptance at CHI 2026 follows the team’s best poster award at the 2024 IEEE Visualization Conference. This recognition from one of the top venues in visualization research highlights Transformer Explainer’s effectiveness in teaching how transformers work.
“Transformer Explainer has reached over half a million learners worldwide,” said Chau, a faculty member in the School of Computational Science and Engineering.
“I'm thrilled to see it extend Georgia Tech's mission of expanding access to higher education, now to anyone with a web browser.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Mar. 31, 2026
Voice-activated, conversational artificial intelligence (AI) agents must provide clear explanations for their suggestions, or older adults aren’t likely to trust them.
That’s one of the main findings from a study by AI Caring on what older adults expect from explainable AI (XAI).
AI Caring is one of three AI institutes led by Georgia Tech and funded by the National Science Foundation (NSF). The institute supports AI research that benefits older adults and their caregivers.
Niharika Mathur, a Ph.D. candidate in the School of Interactive Computing, was the lead author of a paper based on the study. The paper will be presented in April at the 2026 ACM Conference on Human Factors in Computing Systems (CHI) in Barcelona.
Mathur worked with the Cognitive Empowerment Program at Emory University to interview 23 older adults who live alone and use voice-activated AI assistants like Amazon’s Alexa and Google Home.
Many of them told her they feel excluded from the design of these products.
“The assumption is that all people want interactions the same way and across all kinds of situations, but that isn’t true,” Mathur said. “How older people use AI and what they want from it are different from what younger people prefer.”
One example she gave is that young people tend to be informal when talking with AI. Older people, on the other hand, talk to the agent like they would a person.
“If older adults are talking to their family members about Alexa, they usually refer to Alexa as ‘she’ instead of ‘it,’” Mathur said. “They tend to humanize these systems a lot more than young people.”
Good Explanations
The study evaluated AI explanations that drew information from four sources of data:
- User history (past conversations with the agent)
- Environmental data (indoor temperature or the weather forecast)
- Activity data (how much time a user spends in different areas of the home)
- Internal reasoning (mathematical probabilities and likely outcomes)
Mathur said older users trust the agent more when it bases its explanations on data from the first three sources. However, internal reasoning creates skepticism.
The AI falls back on internal reasoning when it doesn’t have enough data from the other sources to give an explanation. Instead, it provides a percentage reflecting its confidence based on what it knows.
“The overwhelming response was negative toward confidence scores,” Mathur said. “If the AI says it’s 92% confident, older adults want to know what that’s based on.”
This is another example that Mathur said points to generational preferences.
“There’s a lot of explainable AI research that shows younger people like to see numbers in explanations, and they also tend to rely too much on explanations that contain numerical confidence. Older adults are the opposite. It makes them trust it less.”
Knowing the Context
She discovered that in urgent situations, older users prefer the AI to be straightforward, while in casual settings, they desire more conversation.
“How people interact with technological systems is grounded in what the stakes of the situation are,” she said. “If it had anything to do with their immediate sense of safety, they did not want conversational elaboration. They want the AI to be very direct and factual.”
Not Just Checking Boxes
Mathur said AI agents that interact with older adults are ideally constructed with a dual purpose. They should provide companionship and autonomy for the users while alleviating the burden of caretaking that is often placed on their family members.
Some studies have shown that engineers have tended to favor caretakers in the design of these tools. They prioritize daily tasks and routines, leaving some older adults feeling like they are merely a box to be checked.
“They’re not being thought of as consumers,” Mathur said. “A lot of products are being made for them but not with them.”
She also said psychological well-being is one of the most important outcomes these tools should produce.
Showing older adults that they are listened to can go a long way toward gaining their trust. Some interviewees told Mathur they want agents that are deliberate about understanding their preferences and don’t dismiss their questions.
Meeting these needs makes users less likely to resist the technology or come into conflict with family members.
“It highlights just how important well-designed explanations are,” she said. “We must go beyond a transparency checklist.”
Mar. 25, 2026
Georgia Tech has announced the recipients of the 2026 Institute Research Awards, honoring faculty, staff, and research teams whose work has made significant scientific, technological, and societal impact. Presented by the Office of the Executive Vice President for Research, the awards recognize excellence across six categories spanning innovation, mentorship, collaboration, engagement, and research program development and impact. This year’s honorees reflect the breadth of Georgia Tech’s research enterprise — from foundational discovery to commercialization and community partnerships — and will be recognized at the Faculty and Staff Honors Luncheon on April 24.
Mar. 18, 2026
Five Georgia Tech computer science (CS) students have been named Squarepoint Foundation Scholars, receiving merit- and need-based scholarships for their undergraduate studies. The Squarepoint Foundation is providing $100,000 to fund the awards, which offer $10,000 per year for two years to rising third-year students.
Now in its second year of supporting the College of Computing, the Squarepoint Foundation continues to expand opportunities, enabling students to focus fully on their studies and pursue activities outside the classroom.
A selection committee led by Mary Hudachek-Buswell, interim chair of the School of Computing Instruction (SCI), chose this year’s cohort.
“These students exemplify the curiosity, talent, and determination we strive to cultivate in computer science,” Hudachek-Buswell said. “The Squarepoint Foundation Scholarships will give them the opportunity to focus fully on their studies while pursuing research and projects that have the potential to make a real-world impact.”
The scholars have demonstrated strong leadership across campus, with all five serving as teaching assistants (TAs) and earning faculty honors. The cohort is also engaged in research and study abroad opportunities.
Founded in 2021, the Squarepoint Foundation supports STEM education and research while partnering with organizations worldwide to expand opportunity and access.
“We are proud to continue our partnership with Georgia Tech, as we extend our support to a number of students working towards achieving their academic goals,” said Allison Henry, Squarepoint Foundation manager.
“The Squarepoint Foundation aims to increase access to education, ensuring that all individuals have the opportunity to pursue the degree of their choice, no matter their circumstances. We wish these talented students the best of luck as they undertake their studies and recognize them for their hard work and dedication to the STEM field."
Meet the Scholars
Maria Cymbalyuk
Cymbalyuk studies the Cybersecurity and Information Internetworks threads, focusing on how technical systems shape who is protected or exposed in digital environments. She’s interested in supporting public defenders and improving access to justice through technology.
“This scholarship made this semester feel less financially stressful and more like I can focus on building the skills and experiences I care about,” Cymbalyuk said. “I want to use my skills to build tools and do research that supports public interest organizations.”
Marziah Islam
Islam concentrates on the People and Intelligence threads, exploring how humans interact with technology. She is developing a sign-language learning mobile app through a Vertically Integrated Project and hopes to build accessible, reliable systems in healthcare technology.
“I am fascinated by the intersection of humans and computing, and I want to design technology that better supports real people,” Islam said.
Sahadev Bharath
Bharath studies the Architecture and Information Internetworks threads, with interests in low-level programming, operating systems, and large-scale systems. He plans to begin his career in software engineering, focusing on distributed systems and AI infrastructure.
“Coming from India, being able to afford out-of-state tuition has been a challenge. This scholarship relieves financial stress and gives me more time to focus on my academics and career,” Bharath said.
“I am passionate about teaching and sharing my knowledge with fellow students. Being a TA has been extremely fulfilling and motivates me to continue contributing to education.”
Joie Yeung
Yeung studies the Information Internetworks and Intelligence threads, with a focus on data and artificial intelligence. She has received the President’s Volunteer Service Award for completing more than 100 service hours in one year. In addition to pursuing a career in software engineering, she is passionate about mentoring younger girls and addressing the gender gap in STEM.
“I want to create meaningful and impactful technology while giving back to my communities. I also aim to show younger girls that they can succeed in computing despite the gender gap,” Yeung said.
Jun Hong Wang
Wang studies system architecture and intelligence with a minor in mathematics, concentrating on computer architecture and low-level optimization. He is considering careers in software engineering, research, or entrepreneurship at the intersection of hardware and software.
“I’m especially interested in how hardware and software intersect, and I hope to use my work to create solutions that are meaningful and helpful for the world,” Wang said.
The scholarships offer vital support as these students keep advancing research, leadership, and influence in computing.
News Contact
Emily Smith
College of Computing
Georgia Tech
Mar. 10, 2026
Hospital stays can be long and arduous; they can also cause serious complications. When a person lies in one position too long and begins to sweat, painful sores called pressure injuries (PIs) can form on the body, leading to infection or even death. A patient can develop a PI in a few days — or even a few hours. And once present, a PI is hard to treat. To address this issue, researchers at Georgia Tech have developed a new, flexible, sensor-filled fabric to monitor areas at risk of PIs and alert hospital staff when a patient needs to be turned.
Read more about Georgia Tech’s research on preventing pressure injuries »
Mar. 06, 2026
Georgia Tech researcher Nick Housley is developing a drug‑delivery system designed to send cancer treatments directly to tumors while minimizing damage to healthy tissue. His team’s approach uses self‑assembling nanohydrogels (SANGs) that circulate through the body, remain inactive in healthy environments, and release their drug payload only when they encounter the unique chemical conditions created by tumors. This “cancer‑agnostic” strategy avoids the pitfalls of traditional targeted therapies, which can lose effectiveness as tumors evolve, and aims to reduce the harsh side effects patients often endure. Early preclinical results show that the nanohydrogels successfully concentrated drugs at tumor sites, and Housley’s team is now preparing for broader testing to move the technology toward clinical trials.
Feb. 24, 2026
Two research teams within the College of Lifetime Learning are piloting new approaches to online education that integrate artificial intelligence and immersive virtual reality with thoughtful instructional design. More than technology experiments, these projects show how the College refines learning innovations before scaling them across programs.
Research Scientists Eunhye Grace Flavin, Abeera Rehmat, and Jeonghyun (Jonna) Lee are developing an AI-assisted course titled Design of Learning Environments. The course is being piloted within the College to gather feedback and data before broader implementation.
“We want to study how AI can meaningfully support learning,” Flavin said, “and how it can deepen engagement and enhance instructional design rather than distract from it.”
Faculty and staff are contributing in two ways: some are enrolling in the course and participating in AI-supported activities and surveys, while others are reviewing instructional models and providing feedback. Insights from both groups will guide refinements before future rollout.
Meanwhile, Research Scientists Meryem Yılmaz Soylu and Jeonghyun (Jonna) Lee, along with Research Associate Eric Sembrat, are piloting an immersive VR module within the Online Master of Science in Analytics (OMSA) program. The module features case-based scenarios with a virtual agent, enabling students to practice leadership and workplace decision-making in realistic environments.
“Technical expertise alone is no longer enough. Our students need opportunities to practice leadership, navigate conflict, and communicate across stakeholders in realistic settings. Virtual reality allows us to create emotionally resonant, high-stakes scenarios in a safe environment where students can experiment, reflect, and grow,” Yılmaz Soylu said.
The VR experience uses branching 360° scenarios in which students’ communication choices and strategic decisions influence virtual stakeholders’ responses in real time. Insights from the pilot will inform refinements to strengthen usability, instructional alignment, and scalability before broader implementation.
“In many ways, we are building the future of online learning. We’re asking what works and what supports learning. It’s incredibly exciting to be part of a college that embraces this sort of thoughtful experimentation. Innovation like this can help us responsibly design courses for the individuals we serve,” Flavin said.
The VR module is being developed in collaboration with Lifetime Learning colleagues in instructional design, media production, and technology, as well as partners across Georgia Tech, including OMSA leadership and faculty collaborators.
Together, these initiatives reflect the College’s approach to innovation: integrating research, technology, and delivery to improve learning systems. By piloting and refining new models before scaling, the College strengthens its capacity to expand access while preserving quality and meaningful outcomes for learners across career stages.
News Contact
Yelena M. Rivera-Vale (she/her(s)/ella)
Communications Program Manager
C21U, College of Lifetime Learning
Feb. 02, 2026
Every year, hundreds of Georgia Tech students take a leap that changes their careers forever: They decide to spend their summer building a startup.
That opportunity is here again. Applications for the 2026 Summer Startup Launch cohort are now open.
If you’ve identified a meaningful problem, have begun talking to real users, or feel a pull to build something bigger than a class project, this is your moment. Startup Launch gives you the structure, support, and ecosystem to take your idea further than you ever thought possible.
A Launchpad With a Proven Track Record
In the past year alone, CREATE‑X founders have:
- Led their startup to successful acquisitions.
- Raised six-figure funding rounds.
- Gained acceptance into the highly selective Y Combinator accelerator.
- Built products used by customers, communities, and companies across industries.
The ability to identify a problem, validate real user needs, build something that works, and communicate that value — that combination makes students stand out in a competitive job market. Employers notice it. Graduate programs notice it. And investors notice it.
This is why Startup Launch isn’t just a summer project.
It becomes a defining career asset.
What You Get in Startup Launch
Startup Launch is intentionally built to give students every advantage while they build their venture. This year, we’ve expanded support even further.
Participants receive:
- $200,000 in in-kind services, such as accounting and cloud credits.
- Dedicated coaching and mentorship from experienced founders and startup experts.
- Exclusive workshops and founder-focused programming.
- Access to the CREATE-X network, a community of builders, investors, and potential customers.
You’ll spend the summer fully immersed in your startup, surrounded by peers also tackling ambitious problems.
And you’ll leave with something real to show for it.
Applications for the Summer 2026 cohort close March 17. Apply to Startup Launch today.
News Contact
Breanna Durham
Marketing Strategist
Jan. 15, 2026
People with autism seeking employment may soon have access to a new AI-based job-coaching tool thanks to a six-figure grant from the National Science Foundation (NSF).
Jennifer Kim and Mark Riedl recently received a $500,000 NSF grant to develop large language models (LLMs) that provide strength-based job coaching for autistic job seekers.
The two Georgia Tech researchers work with Heather Dicks, a career development advisor in Georgia Tech’s EXCEL program, and with nonprofit organizations to provide job-seeking resources to autistic people.
Dicks said the average job search for people with autism can take three to six months in a good economy. It can take up to 18 months in a bad one. However, the new LLMs from Georgia Tech could help to reduce stress and fast-track these job seekers into employment.
Kim is an assistant professor who specializes in human-computer interaction technology that benefits neurodivergent people. Riedl is a professor and an expert in the development of artificial intelligence (AI) and machine learning technologies.
The team’s goal is to identify job-search pain points and understand how job coaches create better employment prospects for their autistic clients.
“Large language models have an opportunity to support this kind of work if we can have more data about each different individual strength,” Kim said.
“We want to know what worked for them in specific settings at work, what didn’t work, and what kind of accommodations can better help them. That includes how they should prepare for interviews, how they can better represent their skills, how they can address accommodations they need, and how to write a cover letter. It’s a broad range.”
Dicks has advocated for neurodivergent people and helped them find employment for 20 years. She worked at the Center for the Visually Impaired in Atlanta before coming to Georgia Tech in 2017.
She said most nonprofits that support neurodivergent people offer career development programs and many contract job coaches, but limited coach availability often leads to long waitlists. However, LLMs could fill this availability gap to address the immediate needs of job seekers who may not have access to a job coach.
“These organizations often run at a slow pace, and there’s high turnover,” Dicks said. “An AI tool could get the job seeker quicker support. Maybe they don’t even need to wait on the government system.
“If they’re on a waitlist, it can help the user put together a resume and practice general interview questions. When the job coach is ready to work with them, they’re able to hit the ground running.”
Nailing the Interview
Dicks said the job interview is one of the biggest challenges for people with autism.
“They have trouble picking up on visual and nonverbal cues — the tone of the interview, figuring out the nuances that a question is hinting at,” she said. “They’re not giving the warm and fuzzy vibes that allow them to connect on a personal level.”
That’s why Kim wants the models to reflect a strength-based coaching approach. Strength-based coaching is particularly effective for individuals with autism. Many possess traits that employers value. These include:
- Close attention to detail
- Strong technical proficiency
- Unique problem-solving perspectives
“The issue is that they don’t know how these strengths can be applied in the workplace,” Kim said. “Once they understand this, they can communicate with employers about their strengths and the accommodations employers should provide to the job seeker so they can successfully apply their skills at work.”
Handling Rejection
Still, Kim understands that candidates will need to handle rejection to make it through the search process. She envisions LLMs that help them refocus their energy and regain their confidence after being turned down.
“When you get a lot of rejection emails, it’s easy to feel you’re not good enough,” she said. “Being constantly reminded of their strengths and prior successes can get them through the stressful job-seeking process.”
Dicks said the models should also be able to provide feedback so that candidates don’t repeat mistakes.
“It can tell them what would’ve been a better answer or a better way to say it,” Dicks said. “It can also encourage them with reminders that you get 100 noes before you get a yes.”
You’re Hired, Now What?
Dicks said the role of a job coach doesn’t end the moment a client is hired. Government-contracted job coaches may work with their clients for up to 90 days after they start a new job to support their transition.
However, she said, sometimes that isn’t enough. Many companies have probationary periods exceeding three months. Autistic individuals may struggle with on-the-job training or communicating what accommodations they need from their new employer.
These are just a few gaps an AI tool can fill for these individuals after they’re hired.
“I could see these models evolving to being supportive at those critical junctures of the probationary period being over or the one-year job review or the annual evaluation that everyone dreads,” she said.
Dicks has an average caseload of 15 students, whom she assists in landing jobs and internships through the EXCEL program.
EXCEL provides a mentorship program for students with intellectual and developmental disabilities from the time they set foot on campus through graduation and beyond.
For more information and to apply, visit EXCEL’s website.
Jan. 06, 2026
The Institute for People and Technology (IPaT) and the College of Design (CoD) awarded a seed grant to Christian Coles, lecturer in the School of Architecture; Moinak Choudhury, Ph.D., lecturer in the School of Literature, Media, and Communication (LMC); and Janelle Wright, environmental justice programs manager at the West Atlanta Watershed Alliance (WAWA). Coles will serve as the principal investigator, with Choudhury and Wright serving as co-principal investigators.
Their project, “Designing Futures: Afrofuturist Co-Creation with AI for Community-Led Facade Design,” will be realized during a 16-week design studio class (ARCH 4016) that will take place in fall 2026 and serve senior undergraduate architecture students. Participants from diverse majors will join through the Building for Equity and Sustainability Vertically Integrated Project (VIP) team, in partnership with the Center for Sustainable Communities Research and Education (SCoRE). Pre-planning tasks will occur during the spring semester in preparation for the fall studio class.
The studio class will collaborate with Moinak Choudhury and students in LMC 3403, who bring expertise in technical communication, responsible AI use, and community-based learning to co-create engagement materials and public-facing documentation that strengthen the project’s interdisciplinary links between design, sustainability, and communication.
The project will culminate with students designing and installing a modular, solar-powered façade panel system for the outdoor classroom on WAWA’s campus. This work extends a previous Georgia Tech VIP team’s project.
The panels will serve multiple functions: participatory community engagement, artistic expression, and climate regulation. The project will advance the classroom toward its intended vision as an Afrofuturist learning space with technological nods to the Kendeda Building on Georgia Tech’s campus. With the help of this seed grant, interdisciplinary team members will delve into design, engineering, computing, communication, and community partnership.
News Contact
Walter Rich