Naiya Salinas is one of a half-dozen students enrolled in the new AI Enhanced Robotic Manufacturing program at the Georgia Veterans Education Career Transition Resource (VECTR) Center, which is setting a new standard for technology-focused careers.

Naiya Salinas and her instructor, Deryk Stoops, looked back and forth between the large screen on the wall and a hand-held monitor.

Tracing between the lines of code, Salinas made a discovery: A character was missing.

The lesson was an important, real-world example of the problem-solving skills required when working in robotics. Salinas is one of a half-dozen students enrolled in the new AI Enhanced Robotic Manufacturing program at the Georgia Veterans Education Career Transition Resource (VECTR) Center, which is setting a new standard for technology-focused careers.

The set-up of the lab was intentional, said Stoops, who designed the course modules and worked with local industry to determine their manufacturing needs. Then, with funding from the Georgia Tech Manufacturing Institute's (GTMI) Georgia Artificial Intelligence in Manufacturing (Georgia AIM) project, Stoops worked with administrators at Central Georgia Technical College to purchase robotics and other cutting-edge manufacturing tools.

As a result, the VECTR Center’s AI-Enhanced Robotic Manufacturing Studio trains veterans in industry-standard robotics, manufacturing modules, cameras, and network systems. This equipment gives students experience in a variety of robotics-based manufacturing applications. Graduates can also finish the 17-credit course with two industry-recognized certifications.

“After getting the Georgia AIM grant, we pulled together a roundtable with industry. And then we did site visits to see how they pulled AI and robotics into the space,” said Stoops. “All the equipment in here is the direct result of industry feedback.”

Statewide Strategic Effort

Funded by a $65 million grant from the federal Economic Development Administration, Georgia AIM is a network of projects across the state born out of GTMI and led by Georgia Tech’s Enterprise Innovation Institute. These projects work to connect the manufacturing community with smart technologies and a ready workforce. Central Georgia received around $4 million as part of the initiative to advance innovation, workforce development, and STEM education in support of local manufacturing and Robins Air Force Base.

Georgia AIM pulls together a host of regional partners working toward a common goal: increasing STEM education and access to technology, and advancing AI adoption among local manufacturers. The partnership includes Fort Valley State University; the Middle Georgia Innovation Project, led by the Development Authority of Houston County; Central Georgia Technical College, which administers the VECTR Center; and the 21st Century Partnership.

“This grant will help us turn our vision for both the Middle Georgia Innovation Project and the Middle Georgia STEM Alliance, along with our partners, into reality, advancing this region and supporting the future of Robins AFB,” said Brig. Gen. John Kubinec, USAF (ret.), president and chief executive officer of the 21st Century Partnership.

Georgia AIM funding for Central Georgia Technical College and Fort Valley State focused on enhancing technology and purchasing new components to assist in education. At Fort Valley State, a mobile lab will launch later this year to take AI-enhanced technologies to underserved parts of the state, while Central Georgia Tech invested in an AI-enhanced robotics manufacturing lab at the VECTR Center.

“This funding will help bring emerging technology throughout our service area and beyond, to our students, economy, and Robins Air Force Base,” said Dr. Ivan Allen, president of Central Georgia Technical College. “Thanks to the power of this partnership, our faculty and students will have the opportunity to work directly with modern manufacturing technology, giving our students the experience and education needed to transition from the classroom to the workforce in an in-demand industry.”

New Gateway for Vets

The VECTR Center’s AI-Enhanced Robotics Manufacturing Studio includes FANUC robotic systems, Rockwell Automation programmable logic controllers, Cognex AI-enabled machine vision systems, smart sensor networks, and a MiR autonomous mobile robot.

The studio graduated its first cohort of students in February and celebrated its ribbon-cutting ceremony on April 17 with a host of local officials and dignitaries. It was also an opportunity to celebrate the students, who are transitioning from a military career to civilian life.

The new technologies at the VECTR Center lab are opening new doors to a growing, cutting-edge field.

“From being in this class, you really start to see how the world is going toward AI. Not just ChatGPT, but everything — the world is going toward AI for sure now,” said Jordan Leonard, who worked in logistics and as a vehicle mechanic in the U.S. Army. Now, he’s upskilling into robotics and looking forward to using his new skills in maintenance. “What I want to do is go to school for instrumentation and electrical technician. But since a lot of industrial plants are trying to get more robots, for me this will be a step up from my coworkers by knowing these things.”

News Contact

Kristen Morales
Marketing Strategist
Georgia AIM (Artificial Intelligence in Manufacturing)

Using what she learned from her PIN fellowship, Iesha Baldwin now serves as the inaugural sustainability coordinator for Spelman College.

Whether it’s typing an email or guiding travel from one destination to the next, artificial intelligence (AI) already plays a role in simplifying daily tasks.

But what if it could also help people live more efficiently — that is, more sustainably, with less waste?

It’s a concept that often runs through the mind of Iesha Baldwin, the inaugural Georgia AIM Fellow with the Partnership for Inclusive Innovation (PIN) at the Georgia Institute of Technology’s Enterprise Innovation Institute. Born out of the Georgia Tech Manufacturing Institute, the Georgia AIM (Artificial Intelligence in Manufacturing) project works with PIN fellows to advance the project's mission of equitably developing and deploying talent and innovation in AI for manufacturing throughout the state of Georgia.

When she accepted the PIN Fellowship for 2023, she saw an opportunity to learn more about the nexus of artificial intelligence, manufacturing, waste, and education. With a background in environmental studies and science, Baldwin studied methods for waste reduction, environmental protection, and science education.

“I took an interest in AI technology because I wanted to learn how it can be harnessed to solve the waste problem and create better science education opportunities for K-12 and higher education students,” said Baldwin.

This type of unique problem-solving is what defines the PIN Fellowship programs. Every year, a cohort of recent college graduates is selected, and each is paired with an industry that aligns with their expertise and career goals — specifically, cleantech, AI manufacturing, supply chain and logistics, and cybersecurity/information technology. Fellowships are one year, with fellows spending six months with a private company and then six months with a public organization.

Through the experience, fellows expand their professional network and drive connections between the public and private sectors. They also use the opportunity to work on special projects that involve using new technologies in their area of interest.

With a focus on artificial intelligence in manufacturing, Baldwin led an inventory management project at the Georgia manufacturer Freudenberg-NOK, where the objective was to create an inventory management system that reduced manufacturing downtime and, as a result, increased efficiency and reduced waste.

She also worked in several capacities at Georgia Tech: supporting K-12 outreach programs at the Advanced Manufacturing Pilot Facility, assisting with energy research at the Marcus Nanotechnology Research Center, and auditing the well-known mechanical engineering course ME2110 to improve her design thinking and engineering skills.

“Learning about artificial intelligence is a process, and the knowledge gained was worth the academic adventure,” she said. “Because of the wonderful support at Georgia Tech, Freudenberg NOK, PIN, and Georgia AIM, I feel confident about connecting environmental sustainability and technology in a way that makes communities more resilient and sustainable.”

Since leaving the PIN Fellowship, Baldwin has connected her love for education, science, and environmental sustainability through her new role as the inaugural sustainability coordinator for Spelman College, her alma mater. In this role, she is responsible for supporting campus sustainability initiatives.

News Contact

Kristen Morales
Marketing Strategist
Georgia Artificial Intelligence in Manufacturing

Ankur Singh

Ankur Singh has developed a new way of programming T cells that retains their naïve state, making them better fighters. — Photo by Jerry Grillo



Nanowires and cell

This is an image of a T cell on a nanowire array. The arrow indicates where a nanowire has penetrated the cell, delivering therapeutic miRNA.

Adoptive T-cell therapy has revolutionized medicine. A patient’s T-cells — a type of white blood cell that is part of the body’s immune system — are extracted and modified in a lab and then infused back into the body, to seek and destroy infection, or cancer cells. 

Now Georgia Tech bioengineer Ankur Singh and his research team have developed a method to improve this pioneering immunotherapy. 

Their solution involves using nanowires to deliver therapeutic miRNA to T-cells. This new modification process retains the cells’ naïve state, which means they’ll be even better disease fighters when they’re infused back into a patient.

“By delivering miRNA in naïve T cells, we have basically prepared an infantry, ready to deploy,” Singh said. “And when these naïve cells are stimulated and activated in the presence of disease, it’s like they’ve been converted into samurais.”

Lean and Mean

Currently in adoptive T-cell therapy, the cells become stimulated and preactivated in the lab when they are modified, losing their naïve state. Singh’s new technique overcomes this limitation. The approach is described in a new study published in the journal Nature Nanotechnology.

“Naïve T-cells are more useful for immunotherapy because they have not yet been preactivated, which means they can be more easily manipulated to adopt desired therapeutic functions,” said Singh, the Carl Ring Family Professor in the Woodruff School of Mechanical Engineering and the Wallace H. Coulter Department of Biomedical Engineering.

The raw recruits of the immune system, naïve T-cells are white blood cells that haven’t been tested in battle yet. But these cellular recruits are robust, impressionable, and adaptable — ready and eager for programming.

“This process creates a well-programmed naïve T-cell ideal for enhancing immune responses against specific targets, such as tumors or pathogens,” said Singh.

The precise programming naïve T-cells receive lays the foundation for more successful disease fighting than preactivated cells can offer.

Giving Fighter Cells a Boost

Within the body, naïve T-cells become activated when they receive a danger signal from antigens, the components of disease-causing pathogens that signal T-cells to mobilize the immune system.

Adoptive T-cell therapy is used against aggressive diseases that overwhelm the body’s defense system. Scientists give the patient’s T-cells a therapeutic boost in the lab, loading them up with additional medicine and chemically preactivating them. 

That’s when the cells lose their naïve state. When infused back into the patient, these modified T-cells are an effective infantry against disease — but they are prone to becoming exhausted. They aren’t samurai. Naïve T-cells, though, being the young, programmable recruits that they are, could be.

The question for Singh and his team was: How do we give cells that therapeutic boost without preactivating them, thereby losing that pristine, highly suggestible naïve state? Their answer: Nanowires.

NanoPrecision: The Pointed Solution

Singh wanted to enhance naïve T-cells with a dose of miRNA. miRNA is a molecule that, when used as a therapeutic, works as a kind of volume knob for genes, turning their activity up or down to keep infection and cancer in check. The miRNA for this study was developed in part by the study’s co-author, Andrew Grimson of Cornell University.

“If we could find a way to forcibly enter the cells without damaging them, we could achieve our goal to deliver the miRNA into naïve T cells without preactivating them,” Singh explained.

Traditional modification in the lab involves binding immune receptors to T-cells, enabling the uptake of miRNA or any genetic material (which results in loss of the naïve state). “But nanowires do not engage receptors and thus do not activate cells, so they retain their naïve state,” Singh said.

The nanowires, silicon wafers made with specialized tools at Georgia Tech’s Institute for Electronics and Nanotechnology, form a fine needle bed. Cells are placed on the nanowires, which easily penetrate the cells and deliver their miRNA over several hours. Then the cells with miRNA are flushed out from the tops of the nanowires, activated, and eventually infused back into the patient. These programmed cells can kill enemies efficiently over an extended period.

“We believe this approach will be a real game changer for adoptive immunotherapies, because we now have the ability to produce T-cells with predictable fates,” said Brian Rudd, a professor of immunology at Cornell University and co-senior author of the study with Singh.

The researchers tested their work in two separate infectious disease animal models at Cornell for this study, and Singh described the results as “a robust performance in infection control.”

In the next phase of the study, the researchers will up the ante, moving from infectious disease to testing their cellular super soldiers against cancer and working toward translation to the clinical setting. New funding from the Georgia Clinical & Translational Science Alliance is supporting Singh’s research.

CITATION: Kristel J. Yee Mon, Sungwoong Kim, Zhonghao Dai, Jessica D. West, Hongya Zhu, Ritika Jain, Andrew Grimson, Brian D. Rudd, Ankur Singh. “Functionalized nanowires for miRNA-mediated therapeutic programming of naïve T cells,” Nature Nanotechnology.

FUNDING: Curci Foundation, NSF (EEC-1648035, ECCS-2025462, ECCS-1542081), NIH (5R01AI132738-06, 1R01CA266052-01, 1R01CA238745-01A1, U01CA280984-01, R01AI110613 and U01AI131348).

News Contact

Jerry Grillo

Three students kneeling around a spot robot

Ask a person to find a frying pan, and they will most likely go to the kitchen. Ask a robot to do the same, and you may get numerous responses, depending on how the robot is trained.

Since humans often associate objects in a home with the room they are in, Naoki Yokoyama thinks robots that navigate human environments to perform assistive tasks should mimic that reasoning.

Roboticists have employed natural language models to help robots mimic human reasoning over the past few years. However, Yokoyama, a Ph.D. student in robotics, said these models create a “bottleneck” that prevents agents from picking up on visual cues such as room type, size, décor, and lighting. 

Yokoyama presented a new framework for semantic reasoning at the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Robotics and Automation (ICRA) last month in Yokohama, Japan. ICRA is the world’s largest robotics conference.

Yokoyama earned a best paper award in the Cognitive Robotics category with his Vision-Language Frontier Maps (VLFM) proposal.

Assistant Professor Sehoon Ha and Associate Professor Dhruv Batra from the School of Interactive Computing advised Yokoyama on the paper. Yokoyama authored the paper while interning at the Boston Dynamics AI Institute.

“I think the cognitive robotic category represents a significant portion of submissions to ICRA nowadays,” said Yokoyama, whose family is from Japan. “I’m grateful that our work is being recognized among the best in this field.”

Instead of natural language models, Yokoyama used a renowned vision-language model called BLIP-2 and tested it on a Boston Dynamics “Spot” robot in home and office environments.

“We rely on models that have been trained on vast amounts of data collected from the web,” Yokoyama said. “That allows us to use models with common sense reasoning and world knowledge. It’s not limited to a typical robot learning environment.”

What Is BLIP-2?

BLIP-2 matches images to text by assigning a score that evaluates how well the user input text describes the content of an image. The model removes the need for the robot to use object detectors and language models. 

Instead, the robot uses BLIP-2 to extract semantic values from RGB images with a text prompt that includes the target object. 

BLIP-2 then teaches the robot to recognize the room type, distinguishing the living room from the bathroom and the kitchen. The robot learns to associate certain objects with specific rooms where it will likely find them.

From here, the robot creates a value map to determine the most likely locations for a target object, Yokoyama said.
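The value-map idea can be sketched in a few lines of Python. This is an illustrative toy, not the authors' code: `vlm_score` is a hypothetical stand-in for BLIP-2's image-text matching score, and the frontier data is invented for the example.

```python
# Toy sketch of a VLFM-style value map for object-goal navigation.
# A VLM such as BLIP-2 scores how well each frontier's camera view matches
# a text prompt naming the target object; the robot explores the
# highest-value frontier first.

TARGET = "frying pan"

def vlm_score(view: str, target: str) -> float:
    """Hypothetical stand-in for a BLIP-2 image-text matching score."""
    # Toy semantic prior: room types associated with the target score higher.
    priors = {"kitchen": 0.9, "dining room": 0.6, "hallway": 0.2, "bathroom": 0.1}
    return priors.get(view, 0.0)

# Frontier cell -> the room type visible from that frontier
# (stands in for the RGB frame the real system would score).
frontiers = {(3, 4): "hallway", (7, 1): "kitchen", (0, 5): "bathroom"}

# Build the value map: one semantic score per frontier.
value_map = {cell: vlm_score(view, TARGET) for cell, view in frontiers.items()}

# Navigate toward the most promising frontier first.
best = max(value_map, key=value_map.get)
print(best, value_map[best])  # the kitchen-facing frontier wins: (7, 1) 0.9
```

In the real system the scores come from BLIP-2 evaluated on live camera frames, and the map is updated continuously as the robot moves; the selection logic, however, is essentially this argmax over frontier values.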

Yokoyama said this is a step forward for intelligent home assistive robots, enabling users to find objects — like missing keys — in their homes without knowing an item’s location. 

“If you’re looking for a pair of scissors, the robot can automatically figure out it should head to the kitchen or the office,” he said. “Even if the scissors are in an unusual place, it uses semantic reasoning to work through each room from most probable location to least likely.”

He added that the benefit of using a VLM instead of an object detector is that the robot will include visual cues in its reasoning.

“You can look at a room in an apartment, and there are so many things an object detector wouldn’t tell you about that room that would be informative,” he said. “You don’t want to limit yourself to a textual description or a list of object classes because you’re missing many semantic visual cues.”

While other VLMs exist, Yokoyama chose BLIP-2 because the model:

  • Accepts any text length and isn’t limited to a small set of objects or categories.
  • Allows the robot to be pre-trained on vast amounts of data collected from the internet.
  • Has proven results that enable accurate image-to-text matching.

Home, Office, and Beyond

Yokoyama also tested the Spot robot to navigate a more challenging office environment. Office spaces tend to be more homogenous and harder to distinguish from one another than rooms in a home. 

“We showed a few cases in which the robot will still work,” Yokoyama said. “We tell it to find a microwave, and it searches for the kitchen. We tell it to find a potted plant, and it moves toward an area with windows because, based on what it knows from BLIP-2, that’s the most likely place to find the plant.”

Yokoyama said that as VLMs continue to improve, so will robot navigation. The growing number of VLMs has steered robot navigation research away from traditional physical simulations.

“It shows how important it is to keep an eye on the work being done in computer vision and natural language processing for getting robots to perform tasks more efficiently,” he said. “The current research direction in robot learning is moving toward more intelligent and higher-level reasoning. These foundation models are going to play a key role in that.”

Top photo by Kevin Beasley/College of Computing.

News Contact

Nathan Deen

Communications Officer

School of Interactive Computing

Mechanical Engineering Professors Shreyes Melkote (left) and Jerry Qi.

Two faculty members in the George W. Woodruff School of Mechanical Engineering will receive achievement awards from the American Society of Mechanical Engineers (ASME). Shreyes Melkote, who holds the Morris M. Bryan, Jr. Professorship in Mechanical Engineering, will receive the 2024 Milton C. Shaw Manufacturing Research Medal, and Professor Jerry Qi will receive the 2024 Warner T. Koiter Medal.

The Milton C. Shaw Manufacturing Research Medal, established in 2009, recognizes significant fundamental contributions to the science and technology of manufacturing processes.

"I am honored to receive this prestigious award. Milton C. Shaw was a giant in the manufacturing field, and to be recognized by an award named after him is very humbling," said Melkote, who also serves as the associate director for the Georgia Tech Manufacturing Institute.

The Warner T. Koiter Medal was established in 1996 and recognizes distinguished contributions to the field of solid mechanics with special emphasis on the effective blending of theoretical and applied elements of the discipline, as well as leadership in the international solid mechanics community.

Qi expressed his appreciation for his team upon learning of the award. “This award is really for my current and former students and postdoctoral scholars. It recognizes their work and innovations in a very special way," he said.

Qi's research is focused on the mechanics and 3D printing of soft active materials to enable 4D printing methods and the recycling of thermosetting polymers. He has developed several material models to describe the multiphysics and chemomechanical behaviors of soft active materials. He also pioneered several multimaterial 3D printing approaches that allow the integration of different polymers and functional materials into one system.

Melkote's primary research area is manufacturing, with a secondary focus on tribology. His work spans the science of precision material removal processes, new manufacturing process development including novel surface modification methods, the application of artificial intelligence and machine learning to complex manufacturing problems, and advanced industrial robotics for precision manufacturing.

Melkote also credited the efforts and support of his students and colleagues. "This recognition would not have been possible without the high level of creativity and outstanding efforts of my graduate students and postdoctoral scholars, the support of my colleagues and mentors at Georgia Tech and beyond, and the opportunities and resources provided to me by the Woodruff School. I am truly grateful to all of them."

Both will be presented with their awards at upcoming ASME events. Melkote will receive his award at the ASME Manufacturing Science and Engineering Conference, June 17-21, in Knoxville, TN, and Qi will receive his at the ASME International Mechanical Engineering Congress and Exposition, November 17-21, in Portland, OR.

News Contact

Chloe Arrington
Communications Officer II
George W. Woodruff School of Mechanical Engineering

Postdoctoral scholar Anuja Tripathi examines a small sample of stainless steel after an electrochemical etching process she designed to create nano-scale needle-like structures on its surface. A second process deposits copper ions on the surface to create a dual antibacterial material. (Photo: Candler Hobbs)

An electrochemical process developed at Georgia Tech could offer new protection against bacterial infections without contributing to growing antibiotic resistance.

The approach capitalizes on the natural antibacterial properties of copper and creates nanoscale needle-like structures on the surface of stainless steel to kill harmful bacteria like E. coli and Staphylococcus. It’s convenient and inexpensive, and it could reduce the need for chemicals and antibiotics in hospitals, kitchens, and other settings where surface contamination can lead to serious illness.

It also could save lives: A global study of drug-resistant infections found they directly killed 1.27 million people in 2019 and contributed to nearly 5 million other deaths — making these infections one of the leading causes of death for every age group.

Researchers described the copper-stainless steel and its effectiveness May 20 in the journal Small.

Read the full story on the College of Engineering website.

News Contact

Joshua Stewart
College of Engineering

The Web Conference 2024
Mohit Chandra and Yiqiao (Ahren) Jin

Georgia Tech researchers say non-English speakers shouldn’t rely on chatbots like ChatGPT for trustworthy healthcare advice.

A team of researchers from the College of Computing at Georgia Tech has developed a framework for assessing the capabilities of large language models (LLMs).

Ph.D. students Mohit Chandra and Yiqiao (Ahren) Jin are the co-lead authors of the paper Better to Ask in English: Cross-Lingual Evaluation of Large Language Models for Healthcare Queries. 

Their paper’s findings reveal a gap in LLMs’ ability to answer health-related questions across languages. Chandra and Jin point out the limitations of LLMs for users and developers but also highlight their potential.

Their XLingEval framework cautions non-English speakers against using chatbots as alternatives to doctors for medical advice. However, models can improve by deepening the data pool with multilingual source material, such as their proposed XLingHealth benchmark.

“For users, our research supports what ChatGPT’s website already states: chatbots make a lot of mistakes, so we should not rely on them for critical decision-making or for information that requires high accuracy,” Jin said.   

“Since we observed this language disparity in their performance, LLM developers should focus on improving accuracy, correctness, consistency, and reliability in other languages,” Jin said. 

Using XLingEval, the researchers found chatbots are less accurate in Spanish, Chinese, and Hindi compared to English. By focusing on correctness, consistency, and verifiability, they discovered: 

  • Correctness decreased by 18% when the same questions were asked in Spanish, Chinese, and Hindi. 
  • Answers in non-English were 29% less consistent than their English counterparts. 
  • Non-English responses were 13% less verifiable overall. 
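As a rough illustration of how such language-disparity percentages are computed, the sketch below compares per-language metric scores against an English baseline. The numbers are toy values chosen to reproduce the reported drops; the real framework first scores each LLM answer for correctness, consistency, and verifiability.

```python
# Illustrative XLingEval-style cross-lingual comparison (toy numbers).

def pct_drop(english: float, other: float) -> float:
    """Relative drop of a metric versus its English baseline, in percent."""
    return round(100 * (english - other) / english, 1)

# Hypothetical average scores per metric (0 to 1 scale).
metrics_en = {"correctness": 0.80, "consistency": 0.70, "verifiability": 0.75}
metrics_other = {"correctness": 0.656, "consistency": 0.497, "verifiability": 0.6525}

drops = {m: pct_drop(metrics_en[m], metrics_other[m]) for m in metrics_en}
print(drops)  # {'correctness': 18.0, 'consistency': 29.0, 'verifiability': 13.0}
```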

XLingHealth contains question-answer pairs that chatbots can reference, which the group hopes will spark improvement within LLMs.  

The HealthQA dataset uses specialized healthcare articles from the popular healthcare website Patient. It includes 1,134 health-related question-answer pairs as excerpts from original articles.  

LiveQA is a second dataset containing 246 question-answer pairs constructed from frequently asked questions (FAQ) platforms associated with the U.S. National Institutes of Health (NIH).

For drug-related questions, the group built a MedicationQA component. This dataset contains 690 questions extracted from anonymous consumer queries submitted to MedlinePlus. The answers are sourced from medical references, such as MedlinePlus and DailyMed.   

In their tests, the researchers posed over 2,000 medical questions to ChatGPT-3.5 and MedAlpaca. MedAlpaca is a healthcare question-answering chatbot trained on medical literature. Yet more than 67% of its responses to non-English questions were irrelevant or contradictory.

“We see far worse performance in the case of MedAlpaca than ChatGPT,” Chandra said. 

“The majority of the data for MedAlpaca is in English, so it struggled to answer queries in non-English languages. GPT also struggled, but it performed much better than MedAlpaca because it had some sort of training data in other languages.” 

Ph.D. student Gaurav Verma and postdoctoral researcher Yibo Hu co-authored the paper. 

Jin and Verma study under Srijan Kumar, an assistant professor in the School of Computational Science and Engineering, and Hu is a postdoc in Kumar’s lab. Chandra is advised by Munmun De Choudhury, an associate professor in the School of Interactive Computing. 
 
The team will present their paper at The Web Conference, occurring May 13-17 in Singapore. The annual conference focuses on the future direction of the internet. The conference’s location makes it a fitting venue for the group’s presentation.

English and Chinese are the most common languages in Singapore. The group tested Spanish, Chinese, and Hindi because they are the world’s most spoken languages after English. Personal curiosity and background played a part in inspiring the study. 

“ChatGPT was very popular when it launched in 2022, especially for us computer science students who are always exploring new technology,” said Jin. “Non-native English speakers, like Mohit and I, noticed early on that chatbots underperformed in our native languages.” 

School of Interactive Computing communications officer Nathan Deen and School of Computational Science and Engineering communications officer Bryant Wine contributed to this report.

News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu

Nathan Deen, Communications Officer
ndeen6@cc.gatech.edu

Mechanical Engineering and Biological Sciences Associate Professor Gregory Sawicki (left) and Mechanical Engineering Associate Professor Aaron Young.

Faculty from the George W. Woodruff School of Mechanical Engineering, including Associate Professors Gregory Sawicki and Aaron Young, have been awarded a five-year, $2.6 million Research Project Grant (R01) from the National Institutes of Health (NIH). 

“We are grateful to our NIH sponsor for this award to improve treatment of post-stroke individuals using advanced robotic solutions,” said Young, who is also affiliated with Georgia Tech's Neuro Next Initiative.

The R01 will support a project focused on using optimization and artificial intelligence to personalize exoskeleton assistance for individuals with symptoms resulting from stroke. Sawicki and Young will collaborate with researchers from Emory Rehabilitation Hospital, including Associate Professor Trisha Kesar.

“As a stroke researcher, I am eagerly looking forward to making progress on this project, and paving the way for leading-edge technologies and technology-driven treatment strategies that maximize functional independence and quality of life of people with neuro-pathologies," said Kesar.

The intervention for study participants will include a training therapy program that will use biofeedback to increase the efficiency of exosuits for wearers.   

Kinsey Herrin, senior research scientist in the Woodruff School and Neuro Next Initiative affiliate, explained the extended benefits of the study, including being able to increase safety for stroke patients who are moving outdoors. “One aspect of this project is testing our technologies on stroke survivors as they're walking outside. Being outside is a small thing that many of us take for granted, but a devastating loss for many following a stroke.”  

Sawicki, who is also an associate professor in the School of Biological Sciences and core faculty in Georgia Tech's Institute for Robotics and Intelligent Machines, is also looking forward to the project. "This new project is truly a tour de force that leverages a highly talented interdisciplinary team of engineers, clinical scientists, and prosthetics/orthotics experts who all bring key elements needed to build assistive technology that can work in real-world scenarios."

News Contact

Chloe Arrington
Communications Officer II
George W. Woodruff School of Mechanical Engineering

A pediatrician listens to a young patient's heartbeat with a stethoscope.


CHI 2024 ARCollab

Cardiologists and surgeons could soon have a new mobile augmented reality (AR) tool to improve collaboration in surgical planning.

ARCollab is an iOS AR application designed for doctors to interact with patient-specific 3D heart models in a shared environment. It is the first surgical planning tool that uses multi-user mobile AR in iOS.

The application’s collaborative feature overcomes limitations of traditional surgical modeling and planning methods, enabling doctors who plan with the tool to offer patients better, more personalized care.

Georgia Tech researchers partnered with Children’s Healthcare of Atlanta (CHOA) in ARCollab’s development. Pratham Mehta, a computer science major, led the group’s research.

“We have conducted two trips to CHOA for usability evaluations with cardiologists and surgeons. The overall feedback from ARCollab users has been positive,” Mehta said. 

“They all enjoyed experimenting with it and collaborating with other users. They also felt like it had the potential to be useful in surgical planning.”

ARCollab’s collaborative environment is the tool’s most novel feature. It allows surgical teams to study and plan together in a virtual workspace, regardless of location.

ARCollab supports a toolbox of features for doctors to inspect and interact with their patients' AR heart models. With a few finger gestures, users can scale and rotate, “slice” into the model, and modify a slicing plane to view omnidirectional cross-sections of the heart.
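The slicing interaction above boils down to classifying model geometry against a user-adjustable plane. The sketch below is illustrative only (ARCollab itself is an iOS app, not Python): it keeps the vertices on one side of a slicing plane defined by a point and a normal vector.

```python
# Illustrative sketch (not ARCollab's actual code): a cross-section
# view keeps only the geometry on one side of a slicing plane.

def signed_distance(point, plane_point, plane_normal):
    """Signed distance from `point` to the plane defined by a
    point on the plane and its unit normal vector."""
    return sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))

def clip_vertices(vertices, plane_point, plane_normal):
    """Return the vertices on the positive side of the slicing plane."""
    return [v for v in vertices
            if signed_distance(v, plane_point, plane_normal) >= 0.0]

# A horizontal plane through the origin, normal pointing up:
verts = [(0.0, 1.0, 0.0), (0.0, -2.0, 0.0), (1.0, 0.5, 0.0)]
above = clip_vertices(verts, (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Rotating the plane's normal or moving its anchor point yields the "omnidirectional cross-sections" the article describes.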

Developing ARCollab on iOS serves two purposes. It streamlines deployment and accessibility by making the tool available through the iOS App Store on Apple devices. Building on Apple’s peer-to-peer networking framework also ensures the functionality of the AR components and lessens the learning curve, especially for experienced AR users.

ARCollab improves on the traditional practice of planning surgeries with physical heart models. Physical models are time-consuming and resource-intensive to produce, and unlike digital models, they cannot be modified once made. Planning together is also difficult when a surgical team must share a single physical model.

Digital and AR modeling is growing as an alternative to physical models. CardiacAR is one such tool the group has already created. 

However, digital platforms have lacked the multi-user features surgical teams need to collaborate during planning. ARCollab’s multi-user workspace advances the technology’s potential as a replacement for physical modeling.

“Over the past year and a half, we have been working on incorporating collaboration into our prior work with CardiacAR,” Mehta said. 

“This involved completely changing the codebase, rebuilding the entire app and its features from the ground up in a newer AR framework that was better suited for collaboration and future development.”

ARCollab’s interactive visualization features and novel design led the Conference on Human Factors in Computing Systems (CHI 2024) to accept it for presentation. The conference takes place May 11-16 in Honolulu.

CHI is considered the most prestigious conference for human-computer interaction and one of the top-ranked conferences in computer science.

M.S. student Harsha Karanth and alumnus Alex Yang (CS 2022, M.S. CS 2023) co-authored the paper with Mehta. They study under Polo Chau, an associate professor in the School of Computational Science and Engineering.

The Georgia Tech group partnered with Timothy Slesnick and Fawwaz Shaw from CHOA on ARCollab’s development.

“Working with the doctors and having them test out versions of our application and give us feedback has been the most important part of the collaboration with CHOA,” Mehta said. 

“These medical professionals are experts in their field. We want to make sure to have features that they want and need, and that would make their job easier.”

News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu

CHI 2024 Farsight

Thanks to a Georgia Tech researcher's new tool, application developers can now see the potential harms lurking in their prototypes.

Farsight is a tool designed for developers who use large language models (LLMs) to create applications powered by artificial intelligence (AI). Farsight alerts prototypers when they write LLM prompts that could be harmful and misused.

Downstream users can expect to benefit from better quality and safer products made with Farsight’s assistance. The tool’s lasting impact, though, is that it fosters responsible AI awareness by coaching developers on the proper use of LLMs.

Machine Learning Ph.D. candidate Zijie (Jay) Wang is Farsight’s lead architect. He will present the paper at the upcoming Conference on Human Factors in Computing Systems (CHI 2024). Farsight ranked in the top 5% of papers accepted to CHI 2024, earning it an honorable mention for the conference’s best paper award.

“LLMs have empowered millions of people with diverse backgrounds, including writers, doctors, and educators, to build and prototype powerful AI apps through prompting. However, many of these AI prototypers don’t have training in computer science, let alone responsible AI practices,” said Wang.

“With a growing number of AI incidents related to LLMs, it is critical to make developers aware of the potential harms associated with their AI applications.”

Wang cited an example in which two lawyers used ChatGPT to write a legal brief. A U.S. judge sanctioned the lawyers because the brief they submitted contained six fictitious case citations fabricated by the LLM.

With Farsight, the group aims to improve developers’ awareness of responsible AI use. It achieves this by highlighting potential use cases, affected stakeholders, and possible harm associated with an application in the early prototyping stage. 

A user study involving 42 prototypers showed that developers could better identify potential harms associated with their prompts after using Farsight. The users also found the tool more helpful and usable than existing resources. 

Feedback from the study showed Farsight encouraged developers to focus on end-users and think beyond immediate harmful outcomes.

“While resources, like workshops and online videos, exist to help AI prototypers, they are often seen as tedious, and most people lack the incentive and time to use them,” said Wang.

“Our approach was to consolidate and display responsible AI resources in the same space where AI prototypers write prompts. In addition, we leverage AI to highlight relevant real-life incidents and guide users to potential harms based on their prompts.”

Farsight employs an in-situ user interface to show developers the potential negative consequences of their applications during prototyping. 

Alert symbols for “neutral,” “caution,” and “warning” notify users when prompts require more attention. When a user clicks the alert symbol, an awareness sidebar expands from one side of the screen. 
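The three alert levels amount to mapping an estimated harm likelihood onto a small set of labels. The thresholds and scoring below are assumptions for illustration, not Farsight's actual values or logic:

```python
# Hedged sketch: mapping a harm-likelihood score (e.g., from an
# LLM-based classifier of the user's prompt) to Farsight-style
# alert levels. Thresholds here are illustrative assumptions.

def alert_level(harm_score: float) -> str:
    """Return an alert label for a score in [0, 1]."""
    if harm_score < 0.3:
        return "neutral"
    if harm_score < 0.7:
        return "caution"
    return "warning"
```

Whatever the real scoring mechanism, the design point is the same: the label is computed in situ, as the prototyper types, rather than in a separate review step.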

The sidebar shows an incident panel with actual news headlines from incidents relevant to the harmful prompt. The sidebar also has a use-case panel that helps developers imagine how different groups of people can use their applications in varying contexts.

Another key feature is the harm envisioner. This functionality takes a user’s prompt as input and assists them in envisioning potential harmful outcomes. The prompt branches into an interactive node tree that lists use cases, stakeholders, and harms, like “societal harm,” “allocative harm,” “interpersonal harm,” and more.
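The node tree the harm envisioner presents can be modeled as a simple recursive structure: a prompt branches into use cases, each with stakeholders, each with potential harms. The field names and example content below are hypothetical, not Farsight's API:

```python
# Hypothetical sketch of the harm envisioner's node tree.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str            # "prompt", "use_case", "stakeholder", or "harm"
    children: list = field(default_factory=list)

    def add(self, label: str, kind: str) -> "Node":
        child = Node(label, kind)
        self.children.append(child)
        return child

# Build one branch: prompt -> use case -> stakeholder -> harm
tree = Node("Summarize a legal brief", "prompt")
use = tree.add("Drafting court filings", "use_case")
who = use.add("Clients of the filing lawyer", "stakeholder")
who.add("Allocative harm: fabricated citations mislead the court", "harm")

def count_harms(node: Node) -> int:
    """Count harm nodes anywhere in the subtree."""
    return (node.kind == "harm") + sum(count_harms(c) for c in node.children)
```

An interactive tree like this lets a prototyper expand one branch at a time, which matches the article's description of guiding users from use cases to stakeholders to harms.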

The novel design and insightful findings from the user study resulted in Farsight’s acceptance for presentation at CHI 2024.

CHI is considered the most prestigious conference for human-computer interaction and one of the top-ranked conferences in computer science.

CHI is affiliated with the Association for Computing Machinery. The conference takes place May 11-16 in Honolulu.

Wang worked on Farsight in Summer 2023 while interning with Google's People + AI Research (PAIR) group.

Farsight’s co-authors from Google PAIR include Chinmay Kulkarni, Lauren Wilcox, Michael Terry, and Michael Madaio. The group has closer ties to Georgia Tech than just through Wang.

Terry, the current co-leader of Google PAIR, earned his Ph.D. in human-computer interaction from Georgia Tech in 2005. Madaio graduated from Tech in 2015 with a M.S. in digital media. Wilcox was a full-time faculty member in the School of Interactive Computing from 2013 to 2021 and serves in an adjunct capacity today.

Though not an author on the paper, Wang’s advisor, Polo Chau, is one of his influences. Chau is an associate professor in the School of Computational Science and Engineering. His group specializes in data science, human-centered AI, and visualization research for social good.

“I think what makes Farsight interesting is its unique in-workflow and human-AI collaborative approach,” said Wang. 

“Furthermore, Farsight leverages LLMs to expand prototypers’ creativity and brainstorm a wide range of use cases, stakeholders, and potential harms.”

News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu