Mar. 05, 2024
Georgia Tech is developing a new artificial intelligence (AI)-based method to automatically find and stop threats to renewable energy resources and local generators serving energy customers across the nation’s power grid.
The research will concentrate on protecting distributed energy resources (DER), which are most often used on low-voltage portions of the power grid. They can include rooftop solar panels, controllable electric vehicle chargers, and battery storage systems.
The cybersecurity concern is that an attacker could compromise these systems and use them to cause problems across the electrical grid, such as overloading components and creating voltage fluctuations. These issues are a national security risk and could cause massive customer disruptions through blackouts and equipment damage.
“Cyber-physical critical infrastructures provide us with core societal functionalities and services such as electricity,” said Saman Zonouz, Georgia Tech associate professor and lead researcher for the project.
“Our multi-disciplinary solution, DerGuard, will leverage device-level cybersecurity, system-wide analysis, and AI techniques for automated vulnerability assessment, discovery, and mitigation in power grids with emerging renewable energy resources.”
The project’s long-term outcome will be a secure, AI-enabled power grid solution that can search for and protect the DERs on its network from cyberattacks.
“First, we will identify sets of critical DERs that, if compromised, would allow the attacker to cause the most trouble for the power grid,” said Daniel Molzahn, assistant professor at Georgia Tech.
“These DERs would then be prioritized for analysis and patching of any identified cyber problems. Identifying the critical sets of DERs would require information about the DERs themselves, such as size or location, as well as about the power grid. This way, the utility company or other aggregator would be in the best position to use this tool.”
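The prioritization Molzahn describes can be sketched as a simple ranking problem. The field names and the scoring heuristic below are illustrative assumptions, not the project’s actual algorithm:

```python
# Hypothetical sketch: rank DERs by potential grid impact if compromised.
# Capacity and a grid-sensitivity weight stand in for the richer device
# and grid information the real analysis would use.

def impact_score(der):
    # Larger capacity and higher grid sensitivity -> higher priority.
    return der["capacity_kw"] * der["grid_sensitivity"]

def prioritize_ders(ders, top_n=2):
    """Return the top_n DERs to analyze and patch first."""
    return sorted(ders, key=impact_score, reverse=True)[:top_n]

ders = [
    {"id": "rooftop-solar-17", "capacity_kw": 8,   "grid_sensitivity": 0.2},
    {"id": "ev-charger-hub-3", "capacity_kw": 350, "grid_sensitivity": 0.9},
    {"id": "battery-bank-5",   "capacity_kw": 500, "grid_sensitivity": 0.7},
]

for der in prioritize_ders(ders):
    print(der["id"])
```

A utility or aggregator holding real device and topology data could then direct vulnerability scanning toward the highest-ranked devices first.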
Additionally, the team will establish a testbed with industry partners. They will then develop and evaluate technology applications to better understand the interactions among people, devices, and network performance.
Along with Zonouz and Molzahn, Georgia Tech faculty member Wenke Lee, professor and John P. Imlay Jr. Chair in Software, will also lead the team of researchers from across the country.
The researchers are collaborating with the University of Illinois Urbana-Champaign, the Department of Energy’s National Renewable Energy Laboratory, Idaho National Laboratory, the National Rural Electric Cooperative Association, and Fortiphyd Logic. Industry partners Network Perception, Siemens, and PSE&G will advise the researchers.
The work will be carried out at Georgia Tech’s Cyber-Physical Security Lab (CPSec) within the School of Cybersecurity and Privacy (SCP) and the School of Electrical and Computer Engineering (ECE).
The U.S. Department of Energy (DOE) announced a $45 million investment at the end of February for 16 cybersecurity initiatives. The projects will develop new cybersecurity tools and technologies designed to reduce cyber risks for energy infrastructure, followed by technology-transfer initiatives. The DOE’s Office of Cybersecurity, Energy Security, and Emergency Response (CESER) awarded $4.2 million for the Institute’s DerGuard project.
News Contact
JP Popham, Communications Officer II
Georgia Tech School of Cybersecurity & Privacy
john.popham@cc.gatech.edu
Mar. 04, 2024
Hyundai Motor Group Innovation Center Singapore hosted the Meta-Factory Conference Jan. 23 – 24. It brought together academic leaders, industry experts, and manufacturing companies to discuss technology and the next generation of integrated manufacturing facilities.
Seth Hutchinson, executive director of the Institute for Robotics and Intelligent Machines at Georgia Tech, delivered a keynote lecture on “The Impacts of Today’s Robotics Innovation on the Relationship Between Robots and Their Human Co-Workers in Manufacturing Applications” — an overview of current state-of-the-art robotic technologies and future research trends for developing robotics aimed at interactions with human workers in manufacturing.
In addition to the keynote, Hutchinson also participated in the Hyundai Motor Group's Smart Factory Executive Technology Advisory Committee (E-TAC) panel on comprehensive future manufacturing directions and toured the new Hyundai Meta-Factory to observe how digital-twin technology is being applied in their human-robot collaborative manufacturing environment.
Hutchinson is a professor in the School of Interactive Computing. He received his Ph.D. from Purdue University in 1988, and in 1990 joined the University of Illinois Urbana-Champaign, where he was professor of electrical and computer engineering until 2017 and is currently professor emeritus. He has served on the Hyundai Motor Group's Smart Factory E-TAC since 2022.
Hyundai Motor Group Innovation Center Singapore is Hyundai Motor Group’s open innovation hub to support research and development of human-centered smart manufacturing processes using advanced technologies such as artificial intelligence, the Internet of Things, and robotics.
- Christa M. Ernst
Related Links
- Hyundai Newsroom Article: Link
- Event Link: https://mfc2024.com/
- Keynote Speakers: https://mfc2024.com/keynotes/
News Contact
Christa M. Ernst - Research Communications Program Manager
christa.ernst@research.gatech.edu
Feb. 29, 2024
One of the hallmarks of humanity is language, but now, powerful new artificial intelligence tools also compose poetry, write songs, and have extensive conversations with human users. Tools like ChatGPT and Gemini are widely available at the tap of a button — but just how smart are these AIs?
A new multidisciplinary research effort co-led by Anna (Anya) Ivanova, assistant professor in the School of Psychology at Georgia Tech, alongside Kyle Mahowald, an assistant professor in the Department of Linguistics at the University of Texas at Austin, is working to uncover just that.
Their results could lead to innovative AIs that are more similar to the human brain than ever before — and also help neuroscientists and psychologists who are unearthing the secrets of our own minds.
The study, “Dissociating Language and Thought in Large Language Models,” is published this week in the scientific journal Trends in Cognitive Sciences. The work is already making waves in the scientific community: an earlier preprint of the paper, released in January 2023, has been cited more than 150 times by fellow researchers. The team has continued to refine the work for this final journal publication.
“ChatGPT became available while we were finalizing the preprint,” Ivanova explains. “Over the past year, we've had an opportunity to update our arguments in light of this newer generation of models, now including ChatGPT.”
Form versus function
The study focuses on large language models (LLMs), which include AIs like ChatGPT. LLMs are text prediction models: they create writing by predicting which word comes next in a sentence, much as a cell phone keyboard or an email service like Gmail suggests the next word you might want to write. However, while this type of language learning is extremely effective at creating coherent sentences, that doesn’t necessarily signify intelligence.
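The next-word-prediction mechanism can be illustrated with a toy bigram model. Real LLMs use neural networks trained on vast corpora, not the simple word counts shown here:

```python
# Toy illustration of next-word prediction, the basic mechanism behind
# text prediction models: count which word follows each word, then
# suggest the most frequent follower.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("on"))   # "the" always follows "on" in this corpus
```

Even this tiny model produces locally plausible continuations, which hints at why fluent output alone is not evidence of reasoning.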
Ivanova’s team argues that formal competence — creating a well-structured, grammatically correct sentence — should be differentiated from functional competence — answering the right question, communicating the correct information, or appropriately communicating. They also found that while LLMs trained on text prediction are often very good at formal skills, they still struggle with functional skills.
“We humans have the tendency to conflate language and thought,” Ivanova says. “I think that’s an important thing to keep in mind as we're trying to figure out what these models are capable of, because using that ability to be good at language, to be good at formal competence, leads many people to assume that AIs are also good at thinking — even when that's not the case.
“It's a heuristic that we developed when interacting with other humans over thousands of years of evolution, but now in some respects, that heuristic is broken,” Ivanova explains.
The distinction between formal and functional competence is also vital in rigorously testing an AI’s capabilities, Ivanova adds. Evaluations often don’t distinguish formal and functional competence, making it difficult to assess what factors are determining a model’s success or failure. The need to develop distinct tests is one of the team’s more widely accepted findings, and one that some researchers in the field have already begun to implement.
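The idea of testing the two competences separately can be sketched as follows. The test items and the stand-in “model” below are invented for illustration; they are not the benchmarks or models the paper evaluates:

```python
# Illustrative sketch: score a model separately on formal (grammar)
# and functional (reasoning) test items, as the formal/functional
# distinction suggests evaluations should. The toy "model" below is
# a hard-coded stand-in, not a real LLM.

formal_items = [
    ("The cats ___ sleeping.", "are"),                # verb agreement
    ("She ___ to the store yesterday.", "went"),      # tense
]
functional_items = [
    ("What is 17 + 25?", "42"),                       # needs arithmetic
    ("All fish swim. A trout is a fish. Do trout swim?", "yes"),
]

def toy_model(prompt):
    # Stand-in: fluent on grammar, unreliable on reasoning.
    answers = {
        "The cats ___ sleeping.": "are",
        "She ___ to the store yesterday.": "went",
        "What is 17 + 25?": "43",
        "All fish swim. A trout is a fish. Do trout swim?": "no",
    }
    return answers[prompt]

def accuracy(items):
    return sum(toy_model(q) == a for q, a in items) / len(items)

print(f"formal: {accuracy(formal_items):.0%}, "
      f"functional: {accuracy(functional_items):.0%}")
```

Reporting the two scores separately, rather than one blended number, makes it visible when a model’s fluency is masking weak reasoning.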
Creating a modular system
While the human tendency to conflate functional and formal competence may have hindered understanding of LLMs in the past, our human brains could also be the key to unlocking more powerful AIs.
Leveraging the tools of cognitive neuroscience while a postdoctoral associate at Massachusetts Institute of Technology (MIT), Ivanova and her team studied brain activity in neurotypical individuals via fMRI, and used behavioral assessments of individuals with brain damage to test the causal role of brain regions in language and cognition — both conducting new research and drawing on previous studies. The team’s results showed that human brains use different regions for functional and formal competence, further supporting this distinction in AIs.
“Our research shows that in the brain, there is a language processing module and separate modules for reasoning,” Ivanova says. This modularity could also serve as a blueprint for how to develop future AIs.
“Building on insights from human brains — where the language processing system is sharply distinct from the systems that support our ability to think — we argue that the language-thought distinction is conceptually important for thinking about, evaluating, and improving large language models, especially given recent efforts to imbue these models with human-like intelligence,” says Ivanova’s former advisor and study co-author Evelina Fedorenko, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.
Developing AIs in the pattern of the human brain could help create more powerful systems — while also helping them dovetail more naturally with human users. “Generally, differences in a mechanism’s internal structure affect behavior,” Ivanova says. “Building a system that has a broad macroscopic organization similar to that of the human brain could help ensure that it might be more aligned with humans down the road.”
In the rapidly developing world of AI, these systems are ripe for experimentation. After the team’s preprint was published, OpenAI announced their intention to add plug-ins to their GPT models.
“That plug-in system is actually very similar to what we suggest,” Ivanova adds. “It takes a modularity approach where the language model can be an interface to another specialized module within a system.”
While the OpenAI plug-in system will include features like booking flights and ordering food, rather than cognitively inspired features, it demonstrates that “the approach has a lot of potential,” Ivanova says.
The future of AI — and what it can tell us about ourselves
While our own brains might be the key to unlocking better, more powerful AIs, these AIs might also help us better understand ourselves. “When researchers try to study the brain and cognition, it's often useful to have some smaller system where you can actually go in and poke around and see what's going on before you get to the immense complexity,” Ivanova explains.
However, since human language is unique, comparable animal or model systems are harder to come by. That's where LLMs come in.
“There are lots of surprising similarities between how one would approach the study of the brain and the study of an artificial neural network” like a large language model, she adds. “They are both information processing systems that have biological or artificial neurons to perform computations.”
In many ways, the human brain is still a black box, but openly available AIs offer a unique opportunity to see a synthetic system's inner workings, modify variables, and explore these corresponding systems like never before.
“It's a really wonderful model that we have a lot of control over,” Ivanova says. “Neural networks — they are amazing.”
Along with Anna (Anya) Ivanova, Kyle Mahowald, and Evelina Fedorenko, the research team also includes Idan Blank (University of California, Los Angeles), as well as Nancy Kanwisher and Joshua Tenenbaum (Massachusetts Institute of Technology).
DOI: https://doi.org/10.1016/j.tics.2024.01.011
Researcher Acknowledgements
For helpful conversations, we thank Jacob Andreas, Alex Warstadt, Dan Roberts, Kanishka Misra, students in the 2023 UT Austin Linguistics 393 seminar, the attendees of the Harvard LangCog journal club, the attendees of the UT Austin Department of Linguistics SynSem seminar, Gary Lupyan, John Krakauer, members of the Intel Deep Learning group, Yejin Choi and her group members, Allyson Ettinger, Nathan Schneider and his group members, the UT NLL Group, attendees of the KUIS AI Talk Series at Koç University in Istanbul, Tom McCoy, attendees of the NYU Philosophy of Deep Learning conference and his group members, Sydney Levine, organizers and attendees of the ILFC seminar, and others who have engaged with our ideas. We also thank Aalok Sathe for help with document formatting and references.
Funding sources
Anna (Anya) Ivanova was supported by funds from the Quest Initiative for Intelligence. Kyle Mahowald acknowledges funding from NSF Grant 2104995. Evelina Fedorenko was supported by NIH awards R01-DC016607, R01-DC016950, and U01-NS121471 and by research funds from the Brain and Cognitive Sciences Department, McGovern Institute for Brain Research, and the Simons Foundation through the Simons Center for the Social Brain.
News Contact
Written by Selena Langner
Editor and Press Contact:
Jess Hunt-Ralston
Director of Communications
College of Sciences
Georgia Tech
Feb. 21, 2024
Energy is everywhere, affecting everything, all the time. And it can be manipulated and converted into the kind of energy that we depend on as a civilization. But transforming this ambient energy (the result of gyrating atoms and molecules) into something we can plug into and use when we need it requires specific materials.
These energy materials — some natural, some manufactured, some a combination — facilitate the conversion or transmission of energy. They also play an essential role in how we store energy, how we reduce power consumption, and how we develop cleaner, efficient energy solutions.
“Advanced materials and clean energy technologies are tightly connected, and at Georgia Tech we’ve been making major investments in people and facilities in batteries, solar energy, and hydrogen, for several decades,” said Tim Lieuwen, the David S. Lewis Jr. Chair and professor of aerospace engineering, and executive director of Georgia Tech’s Strategic Energy Institute (SEI).
That research synergy is the underpinning of Georgia Tech Energy Materials Day (March 27), a gathering of people from academia, government, and industry, co-hosted by SEI, the Institute for Materials (IMat), and the Georgia Tech Advanced Battery Center. This event aims to build on the momentum created by Georgia Tech Battery Day, held in March 2023, which drew more than 230 energy researchers and industry representatives.
“We thought it would be a good idea to expand on the Battery Day idea and showcase a wide range of research and expertise in other areas, such as solar energy and clean fuels, in addition to what we’re doing in batteries and energy storage,” said Matt McDowell, associate professor in the George W. Woodruff School of Mechanical Engineering and the School of Materials Science and Engineering (MSE), and co-director, with Gleb Yushin, of the Advanced Battery Center.
Energy Materials Day will bring together experts from academia, government, and industry to discuss and accelerate research in three key areas: battery materials and technologies, photovoltaics and the grid, and materials for carbon-neutral fuel production, “all of which are crucial for driving the clean energy transition,” noted Eric Vogel, executive director of IMat and the Hightower Professor of Materials Science and Engineering.
“Georgia Tech is leading the charge in research in these three areas,” he said. “And we’re excited to unite so many experts to spark the important discussions that will help us advance our nation’s path to net-zero emissions.”
Building an Energy Hub
Energy Materials Day is part of an ongoing, long-range effort to position Georgia Tech, and Georgia, as a go-to location for modern energy companies. So far, the message seems to be landing. Georgia has had more than $28 billion invested or announced in electric vehicle-related projects since 2020. And Georgia Tech was recently ranked by U.S. News & World Report as the top public university for energy research.
Georgia has also become a major player in solar energy, with the announcement last year of a $2.5 billion plant being developed by Korean solar company Hanwha Qcells, taking advantage of President Biden’s climate policies. Qcells’ global chief technology officer, Danielle Merfeld, a member of SEI’s External Advisory Board, will be the keynote speaker for Energy Materials Day.
“Growing these industry relationships, building trust through collaborations with industry — these have been strong motivations in our efforts to create a hub here in Atlanta,” said Yushin, professor in MSE and co-founder of Sila Nanotechnologies, a battery materials startup valued at more than $3 billion.
McDowell and Yushin are leading the battery initiative for Energy Materials Day, and they’ll be among 12 experts making presentations on battery materials and technologies, including six from Georgia Tech and four from industry. In addition to the formal sessions and presentations, there will also be an opportunity for networking.
“I think Georgia Tech has a responsibility to help grow a manufacturing ecosystem,” McDowell said. “We have the research and educational experience and expertise that companies need, and we’re working to coordinate our efforts with industry.”
Marta Hatzell, associate professor of mechanical engineering and chemical and biomolecular engineering, is leading the carbon-neutral fuel production portion of the event, while Juan-Pablo Correa-Baena, assistant professor in MSE, is leading the photovoltaics initiative.
They’ll be joined by a host of experts from Georgia Tech and institutes across the country, “some of the top thought leaders in their fields,” said Correa-Baena, whose lab has spent years optimizing a semiconductor material for solar energy conversion.
“Over the past decade, we have been working to achieve high efficiencies in solar panels based on a new, low-cost material called halide perovskites,” he said. His lab recently discovered how to prevent the chemical interactions that can degrade it. “It’s kind of a miracle material, and we want to increase its lifespan, make it more robust and commercially relevant.”
While Correa-Baena is working to revolutionize solar energy, Hatzell’s lab is designing materials to clean up the manufacturing of clean fuels.
“We’re interested in decarbonizing the industrial sector, through the production of carbon-neutral fuels,” said Hatzell, whose lab is designing new materials to make clean ammonia and hydrogen, both of which have the potential to play a major role in a carbon-free fuel system, without using fossil fuels as the feedstock. “We’re also working on a collaborative project focusing on assessing the economics of clean ammonia on a larger, global scale.”
The hope for Energy Materials Day is that other collaborations will be fostered as industry’s needs and the research enterprise collide in one place — Georgia Tech’s Exhibition Hall — over one day. The event is part of what Yushin called “the snowball effect.”
“You attract a new company to the region, and then another,” he said. “If we want to boost domestic production and supply chains, we must roll like a snowball gathering momentum. Education is a significant part of that effect. To build this new technology and new facilities for a new industry, you need trained, talented engineers. And we’ve got plenty of those. Georgia Tech can become the single point of contact, helping companies solve the technical challenges in a new age of clean energy.”
Feb. 15, 2024
The Georgia Institute of Technology today announced the signing of a master research agreement with Micron Technology, a global leader in memory and storage solutions. Under the new agreement, the two organizations will expand their collaborative efforts in providing students with experiential research opportunities and expanding access to engineering education.
“We are proud to join forces with Georgia Tech, home to some of the nation’s top programs, to expand students’ opportunities in STEM education,” said Scott DeBoer, executive vice president of Technology and Products at Micron. “This collaboration will help push the boundaries in memory technology innovation and ensure we prepare the workforce of the future.”
“We believe that when academia and industry converge, the best ideas flourish into game-changing innovations,” said Chaouki T. Abdallah, executive vice president for Research at Georgia Tech. “The synergy between Micron and Georgia Tech has already been tremendously fruitful, and we are so excited for the boundless opportunities on our shared horizon.”
“The signing of the master research agreement represents a significant step toward opening additional collaboration pathways between Micron and Georgia Tech, including the joint pursuit of major federal funding activities, technology transfer, and student internships,” said George White, senior director of Strategic Partnerships at Georgia Tech.
The first project under the agreement is already underway. Saibal Mukhopadhyay, professor in the School of Electrical and Computer Engineering, is leading the research efforts titled “Configurable Processing-In-Memory.” This cutting-edge research will enable memory devices to work faster and more efficiently.
News Contact
Amelia Neumeister
Research Communications Program Manager
Feb. 05, 2024
Scientists are always looking for better computer models that simulate the complex systems that define our world. To meet this need, a Georgia Tech workshop held Jan. 16 illustrated how new artificial intelligence (AI) research could usher in the next generation of scientific computing.
The workshop focused AI technology on the optimization of complex systems. Presentations of climatological and electromagnetic simulations showed these techniques yielding more efficient and accurate computer modeling. The workshop also advanced AI research itself, since AI models typically are not well suited for optimization tasks.
The School of Computational Science and Engineering (CSE) and Institute for Data Engineering and Science jointly sponsored the workshop.
School of CSE Assistant Professors Peng Chen and Raphaël Pestourie led the workshop’s organizing committee and moderated the workshop’s two panel discussions. The duo also pitched their own research, highlighting the potential of scientific AI.
Chen shared his work on derivative-informed neural operators (DINOs). DINOs are a class of neural networks that use derivative information to approximate solutions of partial differential equations. The derivative enhancement results in neural operators that are more accurate and efficient.
During his talk, Chen showed how DINOs make better predictions with reliable derivatives. These have the potential to solve data assimilation problems in weather and flooding prediction. Other applications include allocating sensors for early tsunami warnings and designing new self-assembly materials.
All these models contain elements of uncertainty, where data is unknown, noisy, or changes over time. Not only are DINOs a powerful tool to quantify uncertainty, they also require little training data to become functional.
“Recent advances in AI tools have become critical in enhancing societal resilience and quality, particularly through their scientific uses in environmental, climatic, material, and energy domains,” Chen said.
“These tools are instrumental in driving innovation and efficiency in these and many other vital sectors.”
[Related: Machine Learning Key to Proposed App that Could Help Flood-prone Communities]
One challenge in studying complex systems is that it requires many simulations to generate enough data to learn from and make better predictions. But with limited data on hand, it is costly to run enough simulations to produce new data.
At the workshop, Pestourie presented his physics-enhanced deep surrogates (PEDS) as a solution to this optimization problem.
PEDS employs scientific AI to make efficient use of available data while demanding fewer computational resources. In tests, PEDS proved up to three times more accurate than models using neural networks alone while needing at least 100 times less training data.
PEDS yielded these results in tests on diffusion, reaction-diffusion, and electromagnetic scattering models. PEDS performed well in these experiments geared toward physics-based applications because it combines a physics simulator with a neural network generator.
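The benefit of pairing a cheap physics model with a small learned component can be sketched in miniature. Real PEDS trains a neural network generator that feeds a low-fidelity solver; in this simplified stand-in the learned part is just a one-parameter linear correction, fitted to a handful of samples:

```python
import math

# Simplified sketch in the spirit of physics-plus-learning surrogates:
# the physics simulator captures most of the behavior, so only a small
# residual must be learned, and tiny training data suffices.

def ground_truth(x):          # expensive "fine" model we want to match
    return math.sin(x) + 0.1 * x

def coarse_simulator(x):      # cheap physics model, roughly right
    return math.sin(x)

# Fit a correction c(x) = a*x to the residual by least squares,
# using only three training points.
xs = [0.5, 1.0, 2.0]
residuals = [ground_truth(x) - coarse_simulator(x) for x in xs]
a = sum(r * x for r, x in zip(residuals, xs)) / sum(x * x for x in xs)

def surrogate(x):
    return coarse_simulator(x) + a * x

err = abs(surrogate(3.0) - ground_truth(3.0))
print(f"learned slope a = {a:.3f}, error at x=3: {err:.2e}")
```

Because the simulator already encodes the physics, the learned piece generalizes well outside the training points, which is the intuition behind the large reduction in required training data.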
“Scientific AI makes it possible to systematically leverage models and data simultaneously,” Pestourie said. “The more adoption of scientific AI there will be by domain scientists, the more knowledge will be created for society.”
[Related: Technique Could Efficiently Solve Partial Differential Equations for Numerous Applications]
Study and development of AI applications at these scales require use of the most powerful computers available. The workshop invited speakers from national laboratories who showcased supercomputing capabilities available at their facilities. These included Oak Ridge National Laboratory, Sandia National Laboratories, and Pacific Northwest National Laboratory.
The workshop hosted Georgia Tech faculty who represented the Colleges of Computing, Design, Engineering, and Sciences. Among these were workshop co-organizers Yan Wang and Ebeneser Fanijo. Wang is a professor in the George W. Woodruff School of Mechanical Engineering and Fanijo is an assistant professor in the School of Building Construction.
The workshop welcomed academics outside of Georgia Tech to share research occurring at their institutions. These speakers hailed from Emory University, Clemson University, and the University of California, Berkeley.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Jan. 29, 2024
For over three decades, a highly accurate early diagnostic test for ovarian cancer has eluded physicians. Now, scientists in the Georgia Tech Integrated Cancer Research Center (ICRC) have combined machine learning with information on blood metabolites to develop a new test able to detect ovarian cancer with 93 percent accuracy among samples from the team’s study group.
John McDonald, professor emeritus in the School of Biological Sciences, founding director of the ICRC, and the study’s corresponding author, explains that the new test detects ovarian cancer more accurately than existing tests for women clinically classified as normal, with a particular improvement in detecting early-stage ovarian disease in that cohort.
The team’s results and methodologies are detailed in a new paper, “A Personalized Probabilistic Approach to Ovarian Cancer Diagnostics,” published in the March 2024 online issue of the medical journal Gynecologic Oncology. Based on their computer models, the researchers have developed what they believe will be a more clinically useful approach to ovarian cancer diagnosis — whereby a patient’s individual metabolic profile can be used to assign a more accurate probability of the presence or absence of the disease.
“This personalized, probabilistic approach to cancer diagnostics is more clinically informative and accurate than traditional binary (yes/no) tests,” McDonald says. “It represents a promising new direction in the early detection of ovarian cancer, and perhaps other cancers as well.”
The study co-authors also include Dongjo Ban, a Bioinformatics Ph.D. student in McDonald’s lab; Research Scientists Stephen N. Housley, Lilya V. Matyunina, and L.DeEtte (Walker) McDonald; Regents’ Professor Jeffrey Skolnick, who also serves as Mary and Maisie Gibson Chair in the School of Biological Sciences and Georgia Research Alliance Eminent Scholar in Computational Systems Biology; and two collaborating physicians: University of North Carolina Professor Victoria L. Bae-Jump and Ovarian Cancer Institute of Atlanta Founder and Chief Executive Officer Benedict B. Benigno. Members of the research team are forming a startup to transfer and commercialize the technology, and plan to seek requisite trials and FDA approval for the test.
Silent killer
Ovarian cancer is often referred to as the silent killer because the disease is typically asymptomatic when it first arises — and is usually not detected until later stages of development, when it is difficult to treat.
McDonald explains that the average five-year survival rate for late-stage ovarian cancer patients, even after treatment, is around 31 percent — but if ovarian cancer is detected and treated early, the average five-year survival rate is more than 90 percent.
“Clearly, there is a tremendous need for an accurate early diagnostic test for this insidious disease,” McDonald says.
And although an accurate early detection test for ovarian cancer has been vigorously pursued for more than three decades, one has proven elusive. Because cancer begins on the molecular level, McDonald explains, there are multiple possible pathways capable of leading to even the same cancer type.
“Because of this high-level molecular heterogeneity among patients, the identification of a single universal diagnostic biomarker of ovarian cancer has not been possible,” McDonald says. “For this reason, we opted to use a branch of artificial intelligence — machine learning — to develop an alternative probabilistic approach to the challenge of ovarian cancer diagnostics.”
Metabolic profiles
Georgia Tech co-author Dongjo Ban, whose thesis research contributed to the study, explains that “because end-point changes on the metabolic level are known to be reflective of underlying changes operating collectively on multiple molecular levels, we chose metabolic profiles as the backbone of our analysis.”
“The set of human metabolites is a collective measure of the health of cells,” adds coauthor Jeffrey Skolnick, “and by not arbitrarily choosing any subset in advance, one lets the artificial intelligence figure out which are the key players for a given individual.”
Mass spectrometry can identify the presence of metabolites in the blood by detecting their mass and charge signatures. However, Ban says, the precise chemical makeup of a metabolite requires much more extensive characterization.
Ban explains that because less than seven percent of the metabolites circulating in human blood have thus far been chemically characterized, it is currently impossible to accurately pinpoint the specific molecular processes contributing to an individual's metabolic profile.
However, the research team recognized that, even without knowing the precise chemical make-up of each individual metabolite, the mere presence of different metabolites in the blood of different individuals, as detected by mass spectrometry, can be incorporated as features in the building of accurate machine learning-based predictive models (similar to the use of individual facial features in the building of facial pattern recognition algorithms).
“Thousands of metabolites are known to be circulating in the human bloodstream, and they can be readily and accurately detected by mass spectrometry and combined with machine learning to establish an accurate ovarian cancer diagnostic,” Ban says.
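The pipeline Ban describes, in which anonymous mass-spectrometry peaks serve as features for a machine learning classifier, can be sketched in a few lines. Everything below (the synthetic intensity values, the nearest-centroid model, the sample counts) is illustrative only, not the study's actual data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mass-spectrometry feature matrix: each row is one blood
# sample, each column the detected intensity of one (possibly
# uncharacterized) metabolite peak. Labels: 1 = cancer, 0 = control.
n_features = 50
cancer = rng.normal(loc=1.0, scale=0.5, size=(60, n_features))
control = rng.normal(loc=0.0, scale=0.5, size=(60, n_features))
X = np.vstack([cancer, control])
y = np.array([1] * 60 + [0] * 60)

# Shuffle and split into training and held-out test sets.
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
X_train, X_test = X[:90], X[90:]
y_train, y_test = y[:90], y[90:]

# Nearest-centroid classifier: no metabolite is singled out in advance;
# the model weighs all features collectively, as the article describes.
centroid_pos = X_train[y_train == 1].mean(axis=0)
centroid_neg = X_train[y_train == 0].mean(axis=0)

def predict(samples):
    d_pos = np.linalg.norm(samples - centroid_pos, axis=1)
    d_neg = np.linalg.norm(samples - centroid_neg, axis=1)
    return (d_pos < d_neg).astype(int)

accuracy = (predict(X_test) == y_test).mean()
print(f"test accuracy: {accuracy:.2f}")
```

On real patient data the classifier, features, and validation scheme would all be far more involved; the point is only that raw peak intensities can drive a predictive model without first identifying each metabolite.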
A new probabilistic approach
The researchers developed their integrative approach by combining metabolomic profiles and machine learning-based classifiers to establish a diagnostic test with 93 percent accuracy when tested on 564 women from Georgia, North Carolina, Philadelphia, and Western Canada. Of the study participants, 431 were active ovarian cancer patients, while the remaining 133 women did not have ovarian cancer.
Further studies are underway to determine whether the test can detect very early-stage disease in women displaying no clinical symptoms, McDonald says.
McDonald anticipates a clinical future where a person whose metabolic profile falls within a score range where cancer is highly unlikely would only require yearly monitoring. But someone whose metabolic score lies in a range where a majority (say, 90%) have previously been diagnosed with ovarian cancer would likely be monitored more frequently, or perhaps immediately referred for advanced screening.
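As a rough sketch of how such a probabilistic score might be translated into follow-up tiers (the thresholds and tier names below are illustrative placeholders, not values from the published study):

```python
def monitoring_recommendation(score: float) -> str:
    """Map a diagnostic probability score to a follow-up tier.

    The cutoffs here are invented for illustration; in practice they
    would be calibrated against outcomes in previously scored patients.
    """
    if score < 0.10:        # range where cancer is highly unlikely
        return "routine yearly monitoring"
    elif score < 0.90:      # indeterminate range
        return "more frequent monitoring"
    else:                   # most prior patients in this range had cancer
        return "immediate referral for advanced screening"

print(monitoring_recommendation(0.05))
print(monitoring_recommendation(0.95))
```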
Citation: https://doi.org/10.1016/j.ygyno.2023.12.030
Funding
This research was funded by the Ovarian Cancer Institute (Atlanta), the Laura Crandall Brown Foundation, the Deborah Nash Endowment Fund, Northside Hospital (Atlanta), and the Mark Light Integrated Cancer Research Student Fellowship.
Disclosure
Study co-authors John McDonald, Stephen N. Housley, Jeffrey Skolnick, and Benedict B. Benigno are the co-founders of MyOncoDx, Inc., formed to support further research, technology transfer, and commercialization for the team’s new clinical tool for the diagnosis of ovarian cancer.
News Contact
Writer: Renay San Miguel
Communications Officer II/Science Writer
College of Sciences
404-894-5209
Editor: Jess Hunt-Ralston
Jan. 16, 2024
Machine learning (ML) has transformed the digital landscape with its unprecedented ability to automate complex tasks and improve decision-making processes. However, many organizations, including the U.S. Department of Defense (DoD), still rely on time-consuming methods for developing and testing machine learning models, which can create strategic vulnerabilities in today’s fast-changing environment.
The Georgia Tech Research Institute (GTRI) is addressing this challenge by developing a Machine Learning Operations (MLOps) platform that standardizes the development and testing of artificial intelligence (AI) and ML models to enhance the speed and efficiency with which these models are utilized during real-time decision-making situations.
“It’s been difficult for organizations to transition these models from a research environment and turn them into fully-functional products that can be used in real-time,” said Austin Ruth, a GTRI research engineer who is leading this project. “Our goal is to bring AI/ML to the tactical edge where it could be used during active threat situations to heighten the survivability of our warfighters.”
Rather than treating ML development in isolation, GTRI’s MLOps platform would bridge the gap between data scientists and field operations so that organizations can oversee the entire lifecycle of ML projects from development to deployment at the tactical edge.
The tactical edge refers to the immediate operational space where decisions are made and actions take place. Bringing AI and ML capabilities closer to the point of action would enhance the speed, efficiency and effectiveness of decision-making processes and contribute to more agile and adaptive responses to threats.
“We want to develop a system where fighter jets or warships don’t have to do any data transfers but could train and label the data right where they are and have the AI/ML models improve in real-time as they’re actively going up against threats,” said Ruth.
For example, a model could monitor a plane’s altitude and speed, immediately spot potential wing drag issues and alert the pilot about it. In an electronic warfare (EW) situation when facing enemy aircraft or missiles, the models could process vast amounts of incoming data to more quickly identify threats and recommend effective countermeasures in real time.
AI/ML models need to be trained and tested to ensure they remain effective when confronted with new, unseen data. Without a standardized process, however, training and testing are done in a fragmented manner, which poses several risks: overfitting, where a model performs well on its training data but fails to generalize, producing inaccurate predictions or decisions in real-world situations; security vulnerabilities, where bad actors exploit weaknesses in the models; and a general lack of robustness and inefficient use of resources.
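The overfitting risk described above can be made concrete with a deliberately simple sketch: a 1-nearest-neighbor classifier that memorizes noisy synthetic data scores perfectly on its training set yet worse on unseen data. The data and model are illustrative only, not drawn from the GTRI platform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two overlapping classes in 2-D with 15% label noise: a model that
# memorizes its training data will overfit. (Synthetic data only.)
def make_data(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    flip = rng.random(n) < 0.15
    y[flip] = 1 - y[flip]
    return X, y

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)

def knn_predict(X, k):
    # majority vote among the k nearest training points
    d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

# With k=1 the model memorizes the training set (every point is its own
# nearest neighbor): training accuracy is perfect while accuracy on
# unseen data drops, the signature of overfitting.
train_acc = (knn_predict(X_train, 1) == y_train).mean()
test_acc = (knn_predict(X_test, 1) == y_test).mean()
print(f"1-NN train accuracy: {train_acc:.2f}")
print(f"1-NN test accuracy:  {test_acc:.2f}")
```

A standardized train/test protocol exists precisely to surface this gap before a model reaches deployment.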
“Throughout this project, we noticed that training and testing are often done in a piecemeal fashion and thus aren’t repeatable,” said Jovan Munroe, a GTRI senior research engineer who is also leading this project. “Our MLOps platform makes the training and testing process more consistent and well-defined so that these models are better equipped to identify and address unknown variables in the battle space.”
This project has been supported by GTRI’s Independent Research and Development (IRAD) Program, winning an IRAD of the Year award in fiscal year 2023. In fiscal year 2024, the project received funding from a U.S. government sponsor.
Writer: Anna Akins
Photos: Sean McNeil
GTRI Communications
Georgia Tech Research Institute
Atlanta, Georgia
The Georgia Tech Research Institute (GTRI) is the nonprofit, applied research division of the Georgia Institute of Technology (Georgia Tech). Founded in 1934 as the Engineering Experiment Station, GTRI has grown to more than 2,900 employees, supporting eight laboratories in over 20 locations around the country and performing more than $940 million of problem-solving research annually for government and industry. GTRI's renowned researchers combine science, engineering, economics, policy, and technical expertise to solve complex problems for the U.S. federal government, state governments, and industry.
News Contact
(Interim) Director of Communications
Michelle Gowdy
Michelle.Gowdy@gtri.gatech.edu
404-407-8060
Jan. 04, 2024
While increasing numbers of people are seeking mental health care, mental health providers are facing critical shortages. Now, an interdisciplinary team of investigators at Georgia Tech, Emory University, and Penn State aims to develop an interactive AI system that can provide key insights and feedback to help these professionals improve and provide higher quality care, while satisfying the increasing demand for highly trained, effective mental health professionals.
A new $2,000,000 grant from the National Science Foundation (NSF) will support the research.
The research builds on a previous collaboration between Rosa Arriaga, an associate professor in the College of Computing, and Andrew Sherrill, an assistant professor in the Department of Psychiatry and Behavioral Sciences at Emory University, who worked together on a computational system for PTSD therapy.
Arriaga and Christopher Wiese, an assistant professor in the School of Psychology, will lead the Georgia Tech team; Saeed Abdullah, an assistant professor in the College of Information Sciences and Technology, will lead the Penn State team; and Sherrill will serve as overall project lead and Emory team lead.
The grant, for “Understanding the Ethics, Development, Design, and Integration of Interactive Artificial Intelligence Teammates in Future Mental Health Work,” will allocate $801,660 to the Georgia Tech team over four years of research.
“The initial three years of our project are dedicated to understanding and defining what functionalities and characteristics make an AI system a 'teammate' rather than just a tool,” Wiese says. “This involves extensive research and interaction with mental health professionals to identify their specific needs and challenges. We aim to understand the nuances of their work, their decision-making processes, and the areas where AI can provide meaningful support. In the final year, we plan to implement a trial run of this AI teammate philosophy with mental health professionals.”
While the project focuses on mental health workers, the impacts of the project range far beyond. “AI is going to fundamentally change the nature of work and workers,” Arriaga says. “And, as such, there’s a significant need for research to develop best practices for integrating worker, work, and future technology.”
The team underscores that sectors like business, education, and customer service could easily apply this research. The ethics protocol the team will develop will also provide a critical framework for best practices. The team also hopes that their findings could inform policymakers and stakeholders making key decisions regarding AI.
“The knowledge and strategies we develop have the potential to revolutionize how AI is integrated into the broader workforce,” Wiese adds. “We are not just exploring the intersection of human and synthetic intelligence in the mental health profession; we are laying the groundwork for a future where AI and humans collaborate effectively across all areas of work.”
Collaborative project
The project aims to develop an AI coworker called TEAMMAIT (short for “the Trustworthy, Explainable, and Adaptive Monitoring Machine for AI Team”). Rather than functioning as a tool, as many AIs currently do, TEAMMAIT will act more as a human teammate would, providing constructive feedback and helping mental healthcare workers develop and learn new skills.
“Unlike conventional AI tools that function as mere utilities, an AI teammate is designed to work collaboratively with humans, adapting to their needs and augmenting their capabilities,” Wiese explains. “Our approach is distinctively human-centric, prioritizing the needs and perspectives of mental health professionals… it’s important to recognize that this is a complex domain and interdisciplinary collaboration is necessary to create the most optimal outcomes when it comes to integrating AI into our lives.”
With both technical and human health aspects to the research, the project will leverage an interdisciplinary team of experts spanning clinical psychology, industrial-organizational psychology, human-computer interaction, and information science.
“We need to work closely together to make sure that the system, TEAMMAIT, is useful and usable,” adds Arriaga. “Chris (Wiese) and I are looking at two types of challenges: those associated with the organization, as Chris is an industrial-organizational psychology expert — and those associated with the interface, as I am a computer scientist who specializes in human-computer interaction.”
Long-term timeline
The project’s long-term timeline reflects the unique challenges that it faces.
“A key challenge is in the development and design of the AI tools themselves,” Wiese says. “They need to be user-friendly, adaptable, and efficient, enhancing the capabilities of mental health workers without adding undue complexity or stress. This involves continuous iteration and feedback from end-users to refine the AI tools, ensuring they meet the real-world needs of mental health professionals.”
The team plans to deploy TEAMMAIT in diverse settings in the fourth year of development, and incorporate data from these early users to create development guidelines for Worker-AI teammates in mental health work, and to create ethical guidelines for developing and using this type of system.
“This will be a crucial phase where we test the efficacy and integration of the AI in real-world scenarios,” Wiese says. “We will assess not just the functional aspects of the AI, such as how well it performs specific tasks, but also how it impacts the work environment, the well-being of the mental health workers, and ultimately, the quality of care provided to patients.”
Assessing the psychological impacts on workers, including how TEAMMAIT affects their day-to-day work, will be crucial in ensuring TEAMMAIT has a positive impact on healthcare workers’ skills and well-being.
“We’re interested in understanding how mental health clinicians interact with TEAMMAIT and the subsequent impact on their work,” Wiese adds. “How long does it take for clinicians to become comfortable and proficient with TEAMMAIT? How does their engagement with TEAMMAIT change over the year? Do they feel like they are more effective when using TEAMMAIT? We’re really excited to begin answering these questions.”
News Contact
Written by Selena Langner
Contact: Jess Hunt-Ralston
Dec. 20, 2023
A new machine learning method could help engineers detect leaks in underground reservoirs earlier, mitigating risks associated with geological carbon storage (GCS). Further study could advance machine learning capabilities while improving safety and efficiency of GCS.
The feasibility study by Georgia Tech researchers explores using conditional normalizing flows (CNFs) to convert seismic data points into usable information and observable images. This potential ability could make monitoring underground storage sites more practical and studying the behavior of carbon dioxide plumes easier.
The 2023 Conference on Neural Information Processing Systems (NeurIPS 2023) accepted the group’s paper for presentation. They presented their study on Dec. 16 at the conference’s workshop on Tackling Climate Change with Machine Learning.
“One area where our group excels is that we care about realism in our simulations,” said Professor Felix Herrmann. “We worked on a real-sized setting with the complexities one would experience when working in real-life scenarios to understand the dynamics of carbon dioxide plumes.”
CNFs are generative models that use data to produce images. They can also fill in the blanks by making predictions to complete an image despite missing or noisy data. This functionality is ideal for this application because data streaming from GCS reservoirs is often noisy, meaning it is incomplete, outdated, or unstructured.
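The generative mechanism behind a normalizing flow can be illustrated with a toy conditional affine transform: Gaussian noise is pushed through an invertible map whose shift and scale depend on the conditioning input, and the change-of-variables formula gives exact densities. This is a deliberately minimal sketch; the study's CNFs are deep invertible networks trained on seismic images, and the functions mu and sigma below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Conditioning input c is a scalar stand-in for seismic observations.
def mu(c):     # conditional shift
    return 2.0 * c

def sigma(c):  # conditional scale (kept positive)
    return np.exp(0.5 * c)

def sample(c, n):
    # forward pass: simple Gaussian noise z -> data x, conditioned on c
    z = rng.normal(size=n)
    return mu(c) + sigma(c) * z

def log_density(x, c):
    # inverse pass plus change of variables:
    # log p(x|c) = log N(z; 0, 1) - log sigma(c), with z = (x - mu) / sigma
    z = (x - mu(c)) / sigma(c)
    return -0.5 * (z**2 + np.log(2 * np.pi)) - np.log(sigma(c))

c = 1.0
xs = sample(c, 100_000)
print(f"sample mean ~ {xs.mean():.2f} (expected {mu(c):.2f})")
print(f"sample std  ~ {xs.std():.2f} (expected {sigma(c):.2f})")
```

In a real CNF the affine parameters are produced by neural networks and stacked in many invertible layers, but the two passes shown here, sampling forward and evaluating density backward, are the same machinery.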
Across 36 test samples, the group found that CNFs could infer scenarios with and without leakage from seismic data. In simulations with leakage, the models generated images that were 96% similar to ground truths; in cases with no leakage, the generated images were 97% similar to ground truths.
This CNF-based method also improves on current techniques, which struggle to provide accurate information on the spatial extent of leakage. Conditioning CNFs on samples that change over time allows them to describe and predict the behavior of carbon dioxide plumes.
This study is part of the group’s broader effort to produce digital twins for seismic monitoring of underground storage. A digital twin is a virtual model of a physical object. Digital twins are commonplace in manufacturing, healthcare, environmental monitoring, and other industries.
“There are very few digital twins in earth sciences, especially based on machine learning,” Herrmann explained. “This paper is just a prelude to building an uncertainty-aware digital twin for geological carbon storage.”
Herrmann holds joint appointments in the Schools of Earth and Atmospheric Sciences (EAS), Electrical and Computer Engineering, and Computational Science and Engineering (CSE).
School of EAS Ph.D. student Abhinov Prakash Gahlot is the paper’s first author. Ting-Ying (Rosen) Yu (B.S. ECE 2023) started the research as an undergraduate group member. School of CSE Ph.D. students Huseyin Tuna Erdinc, Rafael Orozco, and Ziyi (Francis) Yin co-authored with Gahlot and Herrmann.
NeurIPS 2023 took place Dec. 10-16 in New Orleans. Occurring annually, it is one of the largest conferences in the world dedicated to machine learning.
Over 130 Georgia Tech researchers presented more than 60 papers and posters at NeurIPS 2023. One-third of CSE’s faculty represented the School at the conference. Along with Herrmann, these faculty included Ümit Çatalyürek, Polo Chau, Bo Dai, Srijan Kumar, Yunan Luo, Anqi Wu, and Chao Zhang.
“In the field of geophysics, inverse problems and statistical solutions of these problems are known, but no one has been able to characterize these statistics in a realistic way,” Herrmann said.
“That’s where these machine learning techniques come into play, and we can do things now that you could never do before.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu