May. 05, 2025
Imagine a future where robotic guide dogs lead the visually impaired, flying cars navigate the skies, and electric self-driving vehicles communicate effortlessly with pedestrians.
That future is being shaped today at Georgia Tech’s Center for Human-AI-Robot Teaming (CHART). Led by Bruce Walker, a professor in the School of Psychology and the School of Interactive Computing, the newly launched Center aims to transform how humans, artificial intelligence, and robots work together. By focusing on the dynamic partnership between humans and intelligent systems, CHART will explore how humans can collaborate more effectively with artificial intelligence systems and robots to solve critical scientific and societal challenges.
“There are wonderful Georgia Tech units like the Institute for People and Technology and the Institute for Robotics and Machines that do an incredible job focusing on using and creating intelligent systems and technology,” says Walker. “CHART adds value to this ecosystem with our emphasis on the interactive partnership between humans, AI technology, and robots and machines with agency.”
Based in the School of Psychology, CHART has built an international and interdisciplinary consortium of researchers and innovators from academia and industry. Its impressive membership includes researchers from five Georgia Tech colleges, 18 universities worldwide, industry, public policy organizations, cities, and NASA.
“With expertise encompassing psychology, design, interactive computing, robotics, aerospace engineering, mechanical engineering, public policy, and business, CHART leverages a wealth of knowledge to help us tackle multifaceted challenges — and we’re adding new members every week,” says Walker.
To help shepherd this growth, CHART’s Steering Committee includes School of Psychology Professor Christopher Wiese, School of Psychology Assistant Professor Mengyao Li, and School of Mechanical Engineering Assistant Professor Ye Zhao.
Tomorrow’s technology
Several research programs already underway at CHART showcase its vision of deeply transformative, human-centered research:
Robotic guide dogs
Walker co-leads this research with Sehoon Ha, an assistant professor in the School of Interactive Computing. The project explores the partnership between a robotic guide dog and a human as they navigate the physical and social environment. Key concerns include trust, communication, sharing of responsibilities, and how the human-robot team integrates into social settings. The project also addresses practical design issues, like ensuring the robot operates quietly to avoid interfering with auditory cues critical for blind users.
Flying cars
This project investigates how humans will interact with emerging flying vehicle technologies. It explores user interfaces, control systems, and human-machine interaction design, including whether traditional steering controls might evolve into joystick-like mechanisms. Broader issues include how flying cars will fit into current infrastructure, impacts on pilot licensing policy and regulation, and the psychology of adopting futuristic technologies.
Pedestrians and self-driving cars
Researchers are exploring how driverless electric vehicles and pedestrians can communicate to keep our future streets safe, including how vehicles signal their intentions to pedestrians. Teams are also examining implications for safety and public policy, including accident liability and the quiet nature of electric vehicles.
Generative AI in education
This project examines how students use generative AI like ChatGPT as collaborators in learning. The research explores its effects on outcomes, education policy, and curriculum development.
Meet CHART Founding Director Bruce Walker
Walker is excited about CHART’s future and its role in improving the world.
“We’ve got an ambitious plan and with the caliber of researchers we have assembled from around the world, the possibilities are limitless,” says Walker. “I see Georgia Tech leading the way as a center of gravity in this space.”
His background makes him well suited to the interdisciplinary nature of the Center. Walker brings a wealth of experience in psychology, human-computer interaction, and related fields, with research interests spanning sonification and auditory displays, trust in automation, technology adoption, human-AI-robot teaming, and assistive technologies. In addition to CHART, he is the director of the Georgia Tech Sonification Lab.
Walker’s academic research has resulted in more than 250 journal articles and proceedings, and he has consulted for NASA, state and federal governments, private companies, and the military. He is also an active entrepreneur, founding startups and working on projects related to COVID diagnosis, skin cancer detection, mental health monitoring, gun safety, and digital scent technology.
Reflecting on the journey ahead, Walker says, “We’ve come out of the gate strong. I look forward to the innovations ahead and continuing to cultivate a community of future leaders in this field.”
News Contact
Laura S. Smith, writer
Apr. 23, 2025
Georgia Tech professors Michelle LaPlaca and W. Hong Yeo have been selected as recipients of Peterson Professorships with the Children’s Healthcare of Atlanta Pediatric Technology Center (PTC) at Georgia Tech. The professorships, supported by the G.P. “Bud” Peterson and Valerie H. Peterson Faculty Endowment Fund, are meant to further energize the Georgia Tech and Children’s partnership by engaging and empowering researchers involved in pediatrics.
In a joint statement, PTC co-directors Wilbur Lam and Stanislav Emelianov said, “The appointment of Dr. LaPlaca and Dr. Yeo as Peterson Professors exemplifies the vision of Bud and Valerie Peterson — advancing innovation and collaboration through the Pediatric Technology Center to bring breakthrough ideas from the lab to the bedside, improving the lives of children and transforming healthcare.”
LaPlaca is a professor and associate chair for Faculty Development in the Department of Biomedical Engineering, a joint department between Georgia Tech and Emory University. Her research is focused on traumatic brain injury and concussion, concentrating on sources of heterogeneity and clinical translation. Specifically, she is working on biomarker discovery, the role of the glymphatic system, and novel virtual reality neurological assessments.
“I am thrilled to be chosen as one of the Peterson Professors and appreciate Bud and Valerie Peterson’s dedication to pediatric research,” she said. “The professorship will allow me to broaden research in pediatric concussion assessment and college student concussion awareness, as well as to identify biomarkers in experimental models of brain injury.”
In addition to the research lab, LaPlaca will work with an undergraduate research class called Concussion Connect, which is part of the Vertically Integrated Projects program at Georgia Tech.
“Through the PTC, Georgia Tech and Children’s will positively impact brain health in Georgia’s pediatric population,” said LaPlaca.
Yeo is the Harris Saunders, Jr. Professor in the George W. Woodruff School of Mechanical Engineering and the director of the Wearable Intelligent Systems and Healthcare Center at Georgia Tech. His research focuses on nanomanufacturing and membrane electronics to develop soft biomedical devices aimed at improving disease diagnostics, therapeutics, and rehabilitation.
“I am truly honored to be awarded the Peterson Professorship from the Children’s PTC at Georgia Tech,” he said. “This recognition will greatly enhance my research efforts in developing soft bioelectronics aimed at advancing pediatric healthcare, as well as expand education opportunities for the next generation of undergraduate and graduate students interested in creating innovative medical devices that align seamlessly with the recent NSF Research Traineeship grant I received. I am eager to contribute to the dynamic partnership between Georgia Tech and Children’s Healthcare of Atlanta and to empower innovative solutions that will improve the lives of children.”
The Peterson Professorships honor the former Georgia Tech President and First Lady, whose vision for the importance of research in improving pediatric healthcare has had an enormous positive impact on the care of pediatric patients in our state and region.
The Children’s PTC at Georgia Tech brings clinical experts from Children’s together with Georgia Tech scientists and engineers to develop technological solutions to problems in the health and care of children. Children’s PTC provides extraordinary opportunities for interdisciplinary collaboration in pediatrics, creating breakthrough discoveries that often can only be found at the intersection of multiple disciplines. These collaborations also make it possible to bring discoveries to the clinic and the bedside, thereby enhancing the lives of children and young adults. The mission of the PTC is to establish the world’s leading program in the development of technological solutions for children’s health, focused on three strategic areas that will have a lasting impact on Georgia’s kids and beyond.
Feb. 17, 2025
Every year, men and women in California put their lives on the line battling wildfires, but there is a future where machines powered by artificial intelligence are on the front lines instead of firefighters.
However, this new generation of self-thinking robots would need security protocols to ensure they aren’t susceptible to hackers. To integrate such robots into society, they must come with assurances that they will behave safely around humans.
It raises the question: Can you guarantee the safety of something that doesn’t exist yet? That is what Assistant Professor Glen Chou hopes to accomplish by developing algorithms that will enable autonomous systems to learn and adapt while acting with safety and security assurances.
He plans to launch research initiatives, in collaboration with the School of Cybersecurity and Privacy and the Daniel Guggenheim School of Aerospace Engineering, to secure this new technological frontier as it develops.
“To operate in uncertain real-world environments, robots and other autonomous systems need to leverage and adapt a complex network of perception and control algorithms to turn sensor data into actions,” he said. “To obtain realistic assurances, we must do a joint safety and security analysis on these sensors and algorithms simultaneously, rather than one at a time.”
This end-to-end method would proactively look for flaws in the robot’s systems rather than wait for them to be exploited. This would lead to intrinsically robust robotic systems that can recover from failures.
Chou said this research will be useful in other domains, including advanced space exploration. If a space rover is sent to one of Saturn’s moons, for example, it needs to be able to act and think independently of scientists on Earth.
Aside from fighting fires and exploring space, this technology could perform maintenance in nuclear reactors, automatically maintain the power grid, and make autonomous surgery safer. It could also bring assistive robots into the home, enabling higher standards of care.
This is a challenging domain where safety, security, and privacy concerns are paramount due to frequent, close contact with humans.
This will start in the newly established Trustworthy Robotics Lab at Georgia Tech, which Chou directs. He and his Ph.D. students will design principled algorithms that enable general-purpose robots and autonomous systems to operate capably, safely, and securely with humans while remaining resilient to real-world failures and uncertainty.
Chou earned dual bachelor’s degrees in electrical engineering and computer sciences and in mechanical engineering from the University of California, Berkeley, in 2017, followed by a master’s and a Ph.D. in electrical and computer engineering from the University of Michigan in 2019 and 2022, respectively. He was a postdoc at the MIT Computer Science & Artificial Intelligence Laboratory before joining Georgia Tech in November 2024. He is a recipient of the National Defense Science and Engineering Graduate (NDSEG) Fellowship and the NSF Graduate Research Fellowship, and was named a Robotics: Science and Systems Pioneer in 2022.
News Contact
John (JP) Popham
Communications Officer II
College of Computing | School of Cybersecurity and Privacy
Dec. 18, 2024
As we go through our daily routines of work, chores, errands, and leisure pursuits, most of us take our mobility for granted. However, many people suffer from permanent or temporary mobility issues due to neurological disorders, stroke, injury, and age-related causes. Research in the field of robotic exoskeletons has shown significant potential to provide assistive support for patients with permanent mobility constraints, as well as an effective additional tool for rehabilitation and recovery after injury.
Though the field has made great progress in the hardware and devices for these assistive technologies, limitations remain in ease of use and in the ability to move from walking to running, from flat ground to slopes and stairs, and across different terrains. Exoskeleton controllers that respond to the user’s environment via user-based variables, such as gait and slope estimates, provide rapid yet imprecise outputs. More recent inquiry into data-driven improvements, such as vision-based labeling and classification, is extremely promising in the goal of developing a truly synchronous interface between user and device. A major hindrance to this data-driven approach is the need for burdensome mounted cameras and onboard computing to allow real-time, in-use adjustments to the terrain encountered.
To address these barriers, Aaron Young, Associate Professor in the Woodruff School of Mechanical Engineering and Director of the Exoskeleton and Prosthetic Intelligent Controls (EPIC) Lab, and Dawit Lee, Postdoctoral Scholar at Stanford, have created an artificial intelligence (AI)-based universal exoskeleton controller that uses information from onboard mechanical sensors, without the added weight and complexity of mounted vision-based systems. The new work, published in Science Advances (Link to Be Added), presents a controller that holistically captures, in real time, the major variations encountered during community walking. The team combined data from the Americans with Disabilities Act (ADA) building guidelines, which characterize ambulatory terrains by slope in degrees, with a gait phase estimator to dynamically switch assistance types across multiple terrains and slopes and deliver assistance to the user with little to no delay.
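The core idea, replacing discrete activity-mode classification with a single continuous terrain variable, can be illustrated with a small sketch. The hypothetical controller below interpolates an assistance torque from an estimated slope (rise over run) and a gait phase estimate; all names, torque values, and the interpolation scheme are invented for illustration and are not the published controller.

```python
import math

# Hypothetical assistance profiles: peak knee torque (Nm) at a few slope
# knots (rise over run). These numbers are made up for illustration only.
SLOPE_KNOTS = [-0.5, 0.0, 0.5]   # steep downhill, level ground, steep uphill
PEAK_TORQUE = [8.0, 5.0, 12.0]   # assumed peak assistance at each knot

def interp(x, xs, ys):
    """Piecewise-linear interpolation of y(x), clamped at both ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for x0, y0, x1, y1 in zip(xs, ys, xs[1:], ys[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

def assistance_torque(slope, gait_phase):
    """Continuous control law: torque varies smoothly with terrain slope
    and gait phase (0..1), so no discrete switching between 'walking',
    'ramp', and 'stair' modes is needed."""
    peak = interp(slope, SLOPE_KNOTS, PEAK_TORQUE)
    # Shape assistance over the gait cycle; the negative half-cycle is
    # clamped to zero (assist only, never resist).
    return peak * max(0.0, math.sin(2.0 * math.pi * gait_phase))
```

Because slope enters as a continuous number rather than a mode label, moving from level ground onto a ramp simply slides the torque profile between settings: in this toy version, `assistance_torque(0.25, 0.25)` lands midway between the level-ground (5.0 Nm) and uphill (12.0 Nm) peaks.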
In this work, we have created a new, open-source knee exoskeleton design that is intended to support community mobility. Knee assist devices have tremendous value in activities such as sit-to-stand, stairs, and ramps where we use our biological knees substantially to accomplish these tasks. The neat accomplishment in this work is that by leveraging AI, we avoid the need to classify these different modes discretely but rather have a single continuous variable (in this case rise over run of the surface) to enable continuous and unified control over common ambulatory tasks such as walking, stairs, and ramps. We demonstrate that on novel users of the device, we can track both the environment and the user’s gait state with very high accuracy out of the lab in community settings. It is an exciting time in the field as we see more studies, such as this one, showing promise in tackling real-world mobility challenges.
The assistance approach using our intelligent controller, presented in this work, provides users with support at the right timing and with a magnitude that closely matches the varying biomechanical effort they produce as they move through the community. Our assistance approach was preferred for community navigation and was more effective in reducing the user’s energy consumption compared to conventional methods. We also open-sourced the design of the robotic knee exoskeleton hardware and the dataset used to train the models with this publication which allows other researchers to build upon our developments and further advance the field. This work demonstrates an exciting example of AI integration into a wearable robotic system, showcasing its successful outcomes and significant potential.
- Dawit Lee; Postdoctoral Scholar, Stanford
Using this combination of a universal slope estimator and a gait phase estimator, the team achieved dynamic modulation of exoskeleton assistance beyond what previous approaches had achieved, moving the field closer to an adaptive and effective assistive technology that seamlessly integrates into the daily lives of individuals, promoting enhanced mobility and overall well-being. This work also has the potential to enable a mode-specific assistance approach tailored to the user’s specific biomechanical needs.
- Christa M. Ernst; Research Communications Program Manager
Original Publication
Dawit Lee, Sanghyub Lee, and Aaron J. Young, “AI-Driven Universal Lower-Limb Exoskeleton System for Community Ambulation,” Science Advances
Prior Related Work
D. Lee, I. Kang, D. D. Molinaro, A. Yu, A. J. Young, Real-time user-independent slope prediction using deep learning for modulation of robotic knee exoskeleton assistance. IEEE Robot. Autom. Lett. 6, 3995–4000 (2021).
Funding Provided by
NIH Director’s New Innovator Award DP2-HD111709
Oct. 21, 2024
If you’ve ever watched a large flock of birds on the wing, moving across the sky like a cloud with various shapes and directional changes appearing from seeming chaos, or the maneuvers of an ant colony forming bridges and rafts to escape floods, you’ve been observing what scientists call self-organization. What may not be as obvious is that self-organization occurs throughout the natural world, including bacterial colonies, protein complexes, and hybrid materials. Understanding and predicting self-organization, especially in systems that are out of equilibrium, like living things, is an enduring goal of statistical physics.
This goal is the motivation behind a recently introduced principle of physics called rattling, which posits that systems with sufficiently “messy” dynamics organize into what researchers refer to as low rattling states. Although the principle has proved accurate for systems of robot swarms, it has been too vague to be more broadly tested, and it has been unclear exactly why it works and to what other systems it should apply.
Dana Randall, a professor in the School of Computer Science, and Jacob Calvert, a postdoctoral fellow at the Institute for Data Engineering and Science, have formulated a theory of rattling that answers these fundamental questions. Their paper, “A Local-Global Principle for Nonequilibrium Steady States,” published last week in Proceedings of the National Academy of Sciences, characterizes how rattling is related to the amount of time that a system spends in a state. Their theory further identifies the classes of systems for which rattling explains self-organization.
When we first heard about rattling from physicists, it was very hard to believe it could be true. Our work grew out of a desire to understand it ourselves. We found that the idea at its core is surprisingly simple and holds even more broadly than the physicists guessed.
Dana Randall Professor, School of Computer Science & Adjunct Professor, School of Mathematics
Georgia Institute of Technology
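A loose toy analogy for the local-global connection between a state’s dynamics and the time spent there (this is an illustration, not the paper’s formalism): in a continuous-time jump process where each state i has its own exit rate r_i and jumps land uniformly on the other states, the fraction of time spent in state i works out to be proportional to 1/r_i, so states with the “calmest” local dynamics accumulate the most occupancy. The simulation below, with arbitrary made-up rates, checks that relationship.

```python
import random

def steady_state_occupancy(exit_rates, n_jumps=200_000, seed=0):
    """Simulate a continuous-time jump process: from state i, dwell for an
    exponential time with rate exit_rates[i], then jump uniformly to one
    of the other states. Returns the fraction of *time* spent in each
    state (not the fraction of visits)."""
    rng = random.Random(seed)
    n = len(exit_rates)
    time_in = [0.0] * n
    state = 0
    for _ in range(n_jumps):
        time_in[state] += rng.expovariate(exit_rates[state])
        # Jump uniformly to a different state.
        nxt = rng.randrange(n - 1)
        state = nxt if nxt < state else nxt + 1
    total = sum(time_in)
    return [t / total for t in time_in]

# States with higher exit rates (crudely: "noisier" local dynamics)
# accumulate less occupancy; here pi_i is proportional to 1/r_i.
pi = steady_state_occupancy([1.0, 2.0, 4.0])
```

With rates [1.0, 2.0, 4.0], the occupancies converge near [4/7, 2/7, 1/7], i.e., the steadiest state is dwelt in longest even though all states are visited equally often.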
Beyond its basic scientific importance, the work can be put to immediate use to analyze models of phenomena across scientific domains. Additionally, experimentalists seeking organization within a nonequilibrium system may be able to induce low rattling states to achieve their desired goal. The duo thinks the work will be valuable in designing microparticles, robotic swarms, and new materials. It may also provide new ways to analyze and predict collective behaviors in biological systems at the micro and nanoscale.
The preceding material is based on work supported by the Army Research Office under ARO MURI Award W911NF-19-1-0233 and by the National Science Foundation under grant CCF-2106687. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsoring agencies.
Jacob Calvert and Dana Randall. A local-global principle for nonequilibrium steady states. Proceedings of the National Academy of Sciences, 121(42):e2411731121, 2024.
Oct. 01, 2024
The Institute for Robotics and Intelligent Machines (IRIM) has launched a new initiatives program, starting with several winning proposals whose initiative leads will broaden the scope of IRIM’s research beyond its traditional core strengths. A major goal is to stimulate collaboration across areas not typically considered part of technical robotics, such as policy, education, and the humanities, and to open new inter-university and inter-agency collaboration routes. In addition to guiding their specific initiatives, these leads will serve as an informal internal advisory body for IRIM. Initiative leads will be announced annually, with existing leads considered for renewal based on their progress toward community-building and research goals. We hope that initiative leads will act as the “faculty face” of IRIM and communicate IRIM’s vision and activities to audiences both within and outside of Georgia Tech.
Meet 2024 IRIM Initiative Leads
Stephen Balakirsky; Regents' Researcher, Georgia Tech Research Institute & Panagiotis Tsiotras; David & Andrew Lewis Endowed Chair, Daniel Guggenheim School of Aerospace Engineering | Proximity Operations for Autonomous Servicing
Why It Matters: Proximity operations in space refer to the intricate and precise maneuvers and activities that spacecraft or satellites perform when they are in close proximity to each other, such as docking, rendezvous, or station-keeping. These operations are essential for a variety of space missions, including crewed spaceflights, satellite servicing, space exploration, and maintaining satellite constellations. While this is a very broad field, this initiative will concentrate on robotic servicing and associated challenges. In this context, robotic servicing is composed of proximity operations that are used for servicing and repairing satellites in space. In robotic servicing, robotic arms and tools perform maintenance tasks such as refueling, replacing components, or providing operation enhancements to extend a satellite's operational life or increase a satellite’s capabilities.
Our Approach: By forming an initiative in this important area, IRIM will open opportunities within the rapidly evolving space community. This will allow us to create proposals for organizations ranging from NASA and the Defense Advanced Research Projects Agency to the U.S. Air Force and U.S. Space Force. This will also position us to become national leaders in this area. While several universities have a robust robotics program and quite a few have a strong space engineering program, there are only a handful of academic units with the breadth of expertise to tackle this problem. Even fewer universities have the benefit of an experienced applied research partner, such as the Georgia Tech Research Institute (GTRI), to undertake large-scale demonstrations. Georgia Tech, with world-renowned programs in aerospace engineering and robotics, is uniquely positioned to be a leader in this field. In addition, creating a workshop on proximity operations for autonomous servicing will allow the GTRI and Georgia Tech space robotics communities to come together and better understand our strengths and opportunities for improvement.
Matthew Gombolay; Assistant Professor, Interactive Computing | Human-Robot Society in 2125: IRIM Leading the Way
Why It Matters: The coming robot “apocalypse” and foundation models captured the zeitgeist in 2023, with ChatGPT becoming a topic at the dinner table and the likelihood of various AI-driven technological-doom scenarios hotly debated on social media. Futuristic visions of ubiquitous embodied artificial intelligence (AI) and robotics have become tangible. The proliferation and effectiveness of first-person-view drones in the Russo-Ukrainian War, autonomous taxi services along with their failures, and inexpensive robots (e.g., Tesla’s Optimus and Unitree’s G1) have made it seem that children alive today may have robots embedded in their everyday lives. Yet there is a lack of public trust in the leadership bringing us into this future to ensure that robots are developed and deployed with beneficence.
Our Approach: This proposal seeks to assemble a team of bright, savvy operators across academia, government, media, nonprofits, industry, and community stakeholders so that IRIM can be the most trusted voice guiding the public through the next 100 years of innovation in robotics. We propose to carry out the activities necessary to develop a roadmap for Robots in 2125: Altruistic and Integrated Human-Robot Society. We also aim to build partnerships to promulgate these outcomes across Georgia Tech’s campus and internationally.
Gregory Sawicki; Joseph Anderer Faculty Fellow, School of Mechanical Engineering & Aaron Young; Associate Professor, Mechanical Engineering | Wearable Robotic Augmentation for Human Resilience
Why It Matters: The field of robotics continues to evolve beyond rigid, precision-controlled machines for amplifying production on manufacturing assembly lines toward soft, wearable systems that can mediate the interface between human users and their natural and built environments. Recent advances in materials science have made it possible to construct flexible garments with embedded sensors and actuators (e.g., exosuits). In parallel, computers continue to get smaller and more powerful, and state-of-the-art machine learning algorithms can extract useful information from more extensive volumes of input data in real time. Now is the time to embed lean, powerful, sensorimotor elements alongside high-speed and efficient data processing systems in a continuous wearable device.
Our Approach: The mission of the Wearable Robotic Augmentation for Human Resilience (WeRoAHR) initiative is to merge modern advances in sensing, actuation, and computing technology to imagine and create adaptive, wearable augmentation technology that can improve human resilience and longevity across the physiological spectrum — from behavioral to cellular scales. The near-term effort (~2-3 years) will draw on Georgia Tech’s existing ecosystem of basic scientists and engineers to develop WeRoAHR systems that will focus on key targets of opportunity to increase human resilience (e.g., improved balance, dexterity, and stamina). These initial efforts will establish seeds for growth intended to help launch larger-scale, center-level efforts (>5 years).
Panagiotis Tsiotras; David & Andrew Lewis Endowed Chair, Daniel Guggenheim School of Aerospace Engineering & Sam Coogan; Demetrius T. Paris Junior Professor, School of Electrical and Computer Engineering | Initiative on Reliable, Safe, and Secure Autonomous Robotics
Why It Matters: The design and operation of reliable systems is primarily an integration issue that involves not only each component (software, hardware) being safe and reliable but also the whole system being reliable (including the human operator). The necessity for reliable autonomous systems (including AI agents) is more pronounced for “safety-critical” applications, where the result of a wrong decision can be catastrophic. This is quite a different landscape from many other autonomous decision systems (e.g., recommender systems) where a wrong or imprecise decision is inconsequential.
Our Approach: This new initiative will investigate the development of protocols, techniques, methodologies, theories, and practices for designing, building, and operating safe and reliable AI and autonomous engineering systems. It will also contribute toward promoting a culture of safety and accountability, grounded in rigorous objective metrics and methodologies, among designers and operators of AI, autonomous, and intelligent machines, allowing the widespread adoption of such systems in safety-critical areas with confidence. The initiative aims to establish Tech as the leader in the design of autonomous, reliable engineering robotic systems and to investigate the opportunity for a federally funded or industry-funded research center (e.g., National Science Foundation (NSF) Science and Technology Centers or Engineering Research Centers) in this area.
Colin Usher; Robotics Systems and Technology Branch Head, GTRI | Opportunities for Agricultural Robotics and New Collaborations
Why It Matters: The concepts for how robotics might be incorporated more broadly in agriculture vary widely, ranging from large-scale systems to teams of small systems operating in farms, enabling new possibilities. In addition, there are several application areas in agriculture, ranging from planting, weeding, crop scouting, and general growing through harvesting. Georgia Tech is not a land-grant university, which makes capturing some of the opportunities in agricultural research more challenging. By partnering with a land-grant university such as the University of Georgia (UGA), we can leverage that relationship to pursue opportunities that historically were not available.
Our Approach: We plan to build collaborations first by leveraging relationships we have already formed within GTRI, Georgia Tech, and UGA. We will achieve this through a significant level of networking, supported by workshops and seminars to recruit faculty and form a roadmap for research within the respective universities. Our goal is to identify and pursue multiple opportunities for robotics-related research in both row-crop and animal-based agriculture. We believe that we have a strong opportunity, starting with formalizing a program with the partners we have worked with before, with the potential to improve and grow the research area by incorporating new faculty and staff with a unified vision of ubiquitous robotics systems in agriculture. We plan to achieve this through scheduled visits with interested faculty, attendance at relevant conferences, and ultimately hosting a workshop to formalize and define a research roadmap.
Ye Zhao; Assistant Professor, School of Mechanical Engineering | Safe, Social, & Scalable Human-Robot Teaming: Interaction, Synergy, & Augmentation
Why It Matters: Collaborative robots in unstructured environments such as construction and warehouse sites show great promise in working with humans on repetitive and dangerous tasks to improve efficiency and productivity. However, pre-programmed and nonflexible interaction behaviors of existing robots lower the naturalness and flexibility of the collaboration process. Therefore, it is crucial to improve physical interaction behaviors of the collaborative human-robot teaming.
Our Approach: This proposal will advance the understanding of the bi-directional influence and interaction of human-robot teaming for complex physical activities in dynamic environments by developing new methods to predict worker intention via multi-modal wearable sensing, reason about complex human-robot-workspace interaction, and adaptively plan the robot’s motion considering both human teaming dynamics and physiological and cognitive states. More importantly, our team plans to prioritize efforts to (i) broaden the scope of IRIM’s autonomy research by incorporating psychology, cognitive, and manufacturing research not typically considered part of technical robotics; (ii) initiate new IRIM education, training, and outreach programs through collaboration with team members from various Georgia Tech educational and outreach programs (including Project ENGAGES, VIP, and CEISMC) as well as the AUCC (the world’s largest consortium of African American private institutions of higher education), which comprises Clark Atlanta University, Morehouse College, and Spelman College; and (iii) aim for large governmental grants such as DOD MURI, NSF NRT, and NSF Future of Work programs.
-Christa M. Ernst
Jul. 15, 2024
Hepatic, or liver, disease affects more than 100 million people in the U.S. About 4.5 million adults (1.8%) have been diagnosed with liver disease, but it is estimated that between 80 and 100 million adults in the U.S. have undiagnosed fatty liver disease in varying stages. Over time, undiagnosed and untreated hepatic diseases can lead to cirrhosis, a severe scarring of the liver that cannot be reversed.
Most hepatic diseases are chronic conditions that will be present over the life of the patient, but early detection improves overall health and the ability to manage specific conditions over time. Additionally, assessing patients over time allows for effective treatments to be adjusted as necessary. The standard protocol for diagnosis, as well as follow-up tissue assessment, is a biopsy after the return of an abnormal blood test, but biopsies are time-consuming and pose risks for the patient. Several non-invasive imaging techniques have been developed to assess the stiffness of liver tissue, an indication of scarring, including magnetic resonance elastography (MRE).
MRE combines elements of ultrasound and MRI imaging to create a visual map showing gradients of stiffness throughout the liver and is increasingly used to diagnose hepatic issues. MRE exams, however, can fail for many reasons, including patient motion, patient physiology, imaging issues, and mechanical issues such as improper wave generation or propagation in the liver. Determining the success of MRE exams depends on visual inspection by technologists and radiologists. With increasing work demands and workforce shortages, providing an accurate, automated way to classify image quality will create a streamlined approach and reduce the need for repeat scans.
Professor Jun Ueda in the George W. Woodruff School of Mechanical Engineering and robotics Ph.D. student Heriberto Nieves, working with a team from the Icahn School of Medicine at Mount Sinai, have successfully applied deep learning techniques for accurate, automated quality control image assessment. The research, “Deep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results,” was published in the Journal of Magnetic Resonance Imaging.
Using five deep learning models, the best-performing ensemble achieved 92% accuracy on retrospective MRE images of patients with varied liver stiffness. The models also returned results within seconds. This rapid feedback allows the technologist to adjust hardware or patient positioning and re-scan in a single session, rather than requiring patients to return for costly, time-consuming re-scans because the initial images were of low quality.
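The ensemble idea can be illustrated with a minimal sketch: several independent classifiers each judge an image as usable or failed, and a majority vote decides the final quality label. The function, model names, and threshold below are illustrative assumptions for exposition, not the paper's actual architecture.

```python
# Hypothetical sketch of ensemble-style quality control for MRE images.
# Model names and votes below are invented for illustration.

def ensemble_quality_vote(predictions):
    """Combine per-model pass/fail predictions by majority vote.

    predictions: dict mapping model name -> 1 (adequate quality) or 0 (failed).
    Returns 1 if a strict majority of models judge the image adequate, else 0.
    """
    votes = sum(predictions.values())
    return 1 if votes > len(predictions) / 2 else 0

# Example: five illustrative classifiers scoring one MRE slice
preds = {"model_a": 1, "model_b": 1, "model_c": 0, "model_d": 1, "model_e": 0}
label = ensemble_quality_vote(preds)  # 3 of 5 votes: the slice is usable
```

Because each vote is computed from an already-trained model's output, the aggregation step itself is nearly instantaneous, which is consistent with the seconds-scale turnaround the team reports.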
This new research is a step toward streamlining the review pipeline for MRE using deep learning techniques, which have remained relatively unexplored for MRE compared with other medical imaging modalities. The research also provides a helpful baseline for future avenues of inquiry, such as assessing the health of the spleen or kidneys. It may also be applied to automated image quality control for monitoring non-hepatic conditions, such as breast cancer or muscular dystrophy, in which tissue stiffness is an indicator of initial health and disease progression. Ueda, Nieves, and their team hope to test these models on Siemens Healthineers magnetic resonance scanners within the next year.
Publication
Nieves-Vazquez, H.A., Ozkaya, E., Meinhold, W., Geahchan, A., Bane, O., Ueda, J. and Taouli, B. (2024), Deep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results. J Magn Reson Imaging. https://doi.org/10.1002/jmri.29490
Prior Work
Robotically Precise Diagnostics and Therapeutics for Degenerative Disc Disorder
Related Material
Editorial for “Deep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results”
News Contact
Christa M. Ernst
Research Communications Program Manager
Topic Expertise: Robotics, Data Sciences, Semiconductor Design & Fab
Jun. 06, 2024
Ask a person to find a frying pan, and they will most likely go to the kitchen. Ask a robot to do the same, and you may get numerous responses, depending on how the robot is trained.
Since humans often associate objects in a home with the room they are in, Naoki Yokoyama thinks robots that navigate human environments to perform assistive tasks should mimic that reasoning.
Roboticists have employed natural language models to help robots mimic human reasoning over the past few years. However, Yokoyama, a Ph.D. student in robotics, said these models create a “bottleneck” that prevents agents from picking up on visual cues such as room type, size, décor, and lighting.
Yokoyama presented a new framework for semantic reasoning at the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Robotics and Automation (ICRA) last month in Yokohama, Japan. ICRA is the world’s largest robotics conference.
Yokoyama earned a best paper award in the Cognitive Robotics category with his Vision-Language Frontier Maps (VLFM) proposal.
Assistant Professor Sehoon Ha and Associate Professor Dhruv Batra from the School of Interactive Computing advised Yokoyama on the paper. Yokoyama authored the paper while interning at the Boston Dynamics AI Institute.
“I think the Cognitive Robotics category represents a significant portion of submissions to ICRA nowadays,” said Yokoyama, whose family is from Japan. “I’m grateful that our work is being recognized among the best in this field.”
Instead of natural language models, Yokoyama used a renowned vision-language model called BLIP-2 and tested it on a Boston Dynamics “Spot” robot in home and office environments.
“We rely on models that have been trained on vast amounts of data collected from the web,” Yokoyama said. “That allows us to use models with common sense reasoning and world knowledge. It’s not limited to a typical robot learning environment.”
What is BLIP-2?
BLIP-2 matches images to text by assigning a score that evaluates how well the user input text describes the content of an image. The model removes the need for the robot to use object detectors and language models.
Instead, the robot uses BLIP-2 to extract semantic values from RGB images with a text prompt that includes the target object.
BLIP-2 then teaches the robot to recognize the room type, distinguishing the living room from the bathroom and the kitchen. The robot learns to associate certain objects with specific rooms where it will likely find them.
From here, the robot creates a value map to determine the most likely locations for a target object, Yokoyama said.
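The value-map step described above can be sketched in a few lines: each exploration frontier gets a vision-language score for the text prompt, and the robot heads toward the highest-scoring one. The frontier names and scores below are invented for illustration, and `best_frontier` is a hypothetical helper standing in for VLFM's actual planning machinery; in the real system the scores would come from BLIP-2's image-text matching.

```python
# Illustrative sketch of value-map-based object search, in the spirit of
# Vision-Language Frontier Maps. Scores and frontier names are invented;
# a real system would obtain scores from a VLM such as BLIP-2.

def best_frontier(frontier_scores):
    """Pick the exploration frontier whose camera view best matches the
    text prompt (e.g. 'a frying pan'), according to the VLM score."""
    return max(frontier_scores, key=frontier_scores.get)

# Hypothetical scores for the prompt "a frying pan" at three frontiers
scores = {"hallway": 0.12, "kitchen_doorway": 0.83, "bedroom": 0.07}
target = best_frontier(scores)  # the robot heads toward "kitchen_doorway"
```

If the object is not found at the top-scoring frontier, the same ranking naturally yields the next-most-probable location, matching the most-likely-to-least-likely search behavior Yokoyama describes.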
Yokoyama said this is a step forward for intelligent home assistive robots, enabling users to find objects — like missing keys — in their homes without knowing an item’s location.
“If you’re looking for a pair of scissors, the robot can automatically figure out it should head to the kitchen or the office,” he said. “Even if the scissors are in an unusual place, it uses semantic reasoning to work through each room from most probable location to least likely.”
He added that the benefit of using a VLM instead of an object detector is that the robot will include visual cues in its reasoning.
“You can look at a room in an apartment, and there are so many things an object detector wouldn’t tell you about that room that would be informative,” he said. “You don’t want to limit yourself to a textual description or a list of object classes because you’re missing many semantic visual cues.”
While other VLMs exist, Yokoyama chose BLIP-2 because the model:
- Accepts any text length and isn’t limited to a small set of objects or categories.
- Allows the robot to be pre-trained on vast amounts of data collected from the internet.
- Has proven results that enable accurate image-to-text matching.
Home, Office, and Beyond
Yokoyama also tested the Spot robot in a more challenging office environment. Office spaces tend to be more homogeneous and harder to distinguish from one another than rooms in a home.
“We showed a few cases in which the robot will still work,” Yokoyama said. “We tell it to find a microwave, and it searches for the kitchen. We tell it to find a potted plant, and it moves toward an area with windows because, based on what it knows from BLIP-2, that’s the most likely place to find the plant.”
Yokoyama said that as VLMs continue to improve, so will robot navigation. The growing number of VLMs has steered robot navigation research away from traditional physical simulations.
“It shows how important it is to keep an eye on the work being done in computer vision and natural language processing for getting robots to perform tasks more efficiently,” he said. “The current research direction in robot learning is moving toward more intelligent and higher-level reasoning. These foundation models are going to play a key role in that.”
Top photo by Kevin Beasley/College of Computing.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
May. 10, 2024
Faculty from the George W. Woodruff School of Mechanical Engineering, including Associate Professors Gregory Sawicki and Aaron Young, have been awarded a five-year, $2.6 million Research Project Grant (R01) from the National Institutes of Health (NIH).
“We are grateful to our NIH sponsor for this award to improve treatment of post-stroke individuals using advanced robotic solutions,” said Young, who is also affiliated with Georgia Tech's Neuro Next Initiative.
The R01 will support a project focused on using optimization and artificial intelligence to personalize exoskeleton assistance for individuals with symptoms resulting from stroke. Sawicki and Young will collaborate with researchers from Emory Rehabilitation Hospital, including Associate Professor Trisha Kesar.
“As a stroke researcher, I am eagerly looking forward to making progress on this project, and paving the way for leading-edge technologies and technology-driven treatment strategies that maximize functional independence and quality of life of people with neuro-pathologies,” said Kesar.
The intervention for study participants will include a training therapy program that will use biofeedback to increase the efficiency of exosuits for wearers.
Kinsey Herrin, senior research scientist in the Woodruff School and Neuro Next Initiative affiliate, explained the extended benefits of the study, including being able to increase safety for stroke patients who are moving outdoors. “One aspect of this project is testing our technologies on stroke survivors as they're walking outside. Being outside is a small thing that many of us take for granted, but a devastating loss for many following a stroke.”
Sawicki, who is also an associate professor in the School of Biological Sciences and core faculty in Georgia Tech's Institute for Robotics and Intelligent Machines, is also looking forward to the project. "This new project is truly a tour de force that leverages a highly talented interdisciplinary team of engineers, clinical scientists, and prosthetics/orthotics experts who all bring key elements needed to build assistive technology that can work in real-world scenarios."
News Contact
Chloe Arrington
Communications Officer II
George W. Woodruff School of Mechanical Engineering
Mar. 04, 2024
Hyundai Motor Group Innovation Center Singapore hosted the Meta-Factory Conference Jan. 23 – 24. It brought together academic leaders, industry experts, and manufacturing companies to discuss technology and the next generation of integrated manufacturing facilities.
Seth Hutchinson, executive director of the Institute for Robotics and Intelligent Machines at Georgia Tech, delivered a keynote lecture on “The Impacts of Today’s Robotics Innovation on the Relationship Between Robots and Their Human Co-Workers in Manufacturing Applications” — an overview of current state-of-the-art robotic technologies and future research trends for developing robotics aimed at interactions with human workers in manufacturing.
In addition to the keynote, Hutchinson also participated in the Hyundai Motor Group's Smart Factory Executive Technology Advisory Committee (E-TAC) panel on comprehensive future manufacturing directions and toured the new Hyundai Meta-Factory to observe how digital-twin technology is being applied in their human-robot collaborative manufacturing environment.
Hutchinson is a professor in the School of Interactive Computing. He received his Ph.D. from Purdue University in 1988, and in 1990 joined the University of Illinois Urbana-Champaign, where he was professor of electrical and computer engineering until 2017 and is currently professor emeritus. He has served on the Hyundai Motor Group's Smart Factory E-TAC since 2022.
Hyundai Motor Group Innovation Center Singapore is Hyundai Motor Group’s open innovation hub to support research and development of human-centered smart manufacturing processes using advanced technologies such as artificial intelligence, the Internet of Things, and robotics.
- Christa M. Ernst
Related Links
- Hyundai Newsroom Article: Link
- Event Link: https://mfc2024.com/
- Keynote Speakers: https://mfc2024.com/keynotes/
News Contact
Christa M. Ernst - Research Communications Program Manager
christa.ernst@research.gatech.edu