Aug. 21, 2024
A new agreement between Los Alamos National Laboratory (LANL) and the National Science Foundation’s Artificial Intelligence Institute for Advances in Optimization (AI4OPT) at Georgia Tech is set to propel research in applied artificial intelligence (AI) and engage students and professionals in this rapidly growing field.
“This collaboration will help develop new AI technologies for the next generation of scientific discovery and the design of complex systems and the control of engineered systems,” said Russell Bent, scientist at Los Alamos. “At Los Alamos, we have a lot of interest in optimizing complex systems. We see an opportunity with AI to enhance system resilience and efficiency in the face of climate change, extreme events, and other challenges.”
The agreement establishes a research and educational partnership focused on advancing AI tools for a next-generation power grid. Maintaining and optimizing the energy grid involves extensive computation, and AI-informed approaches, including modeling, could address power-grid issues more effectively.
AI Approaches to Optimization and Problem-Solving
Optimization involves finding solutions that utilize resources effectively and efficiently. This research partnership will leverage Georgia Tech's expertise to develop “trustworthy foundation models” that, by incorporating AI, reduce the vast computing resources needed for solving complex problems.
In energy grid systems, optimization involves quickly sorting through possibilities and resources to deliver immediate solutions during a power-distribution crisis. The research will develop “optimization proxies” that extend current methods by incorporating broader parameters such as generator limits, line ratings, and grid topologies. Training these proxies with AI for energy applications presents a significant research challenge.
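As a loose illustration of the proxy idea (the toy dispatch problem, the capacity numbers, and the nearest-neighbor "model" below are invented for this sketch, not AI4OPT's actual methods), an optimization proxy amortizes many offline solves into a fast approximate mapping from problem inputs to solutions:

```python
# Toy sketch of an "optimization proxy": learn to map inputs (here,
# demand) directly to the optimal solution, so run-time queries skip
# the solver. All numbers are hypothetical.
import random

CAP1, CAP2 = 50.0, 80.0   # generator limits (illustrative)

def solve_dispatch(demand):
    """Exact merit-order solution: fill the cheap generator 1 first."""
    g1 = min(demand, CAP1)
    g2 = min(demand - g1, CAP2)
    return g1, g2

# Offline: build a training set by solving many instances.
random.seed(0)
demands = [random.uniform(0, CAP1 + CAP2) for _ in range(200)]
solutions = [solve_dispatch(d) for d in demands]

# "Train" the simplest possible proxy: a nearest-neighbor lookup over
# the solved instances (a real proxy would be a learned model).
def proxy(demand):
    i = min(range(len(demands)), key=lambda j: abs(demands[j] - demand))
    return solutions[i]

# Online: the proxy answers without re-solving the optimization.
g1, g2 = proxy(65.0)
print(round(g1, 1), round(g2, 1))
```

The trade-off the research targets is exactly the one this sketch glosses over: making such learned mappings accurate and trustworthy across generator limits, line ratings, and changing grid topologies.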
The collaboration will also address problems related to LANL’s diverse missions and applications. The team’s research will advance pioneering efforts in graph-based, physics-informed machine learning to solve Laboratory mission problems.
Outreach and Training Opportunities
In January 2025, the Laboratory will host a Grid Science Winter School and Conference, featuring lectures from LANL scientists and academic partners on electrical grid methods and techniques. With Georgia Tech as a co-organizer, AI optimization for the energy grid will be a focal point of the event.
Since 2020, the Laboratory has been working with Georgia Tech on energy grid projects. AI4OPT, which includes several industrial and academic partners, aims to achieve breakthroughs by combining AI and mathematical optimization.
“The use-inspired research in AI4OPT addresses fundamental societal and technological challenges,” said Pascal Van Hentenryck, AI4OPT director. “The energy grid is crucial to our daily lives. Our collaboration with Los Alamos advances a research mission and educational vision with significant impact for science and society.”
The three-year agreement, funded through the Laboratory Directed Research and Development program’s ArtIMis initiative, runs through 2027. It supports the Laboratory’s commitment to advancing AI. Earl Lawrence is the project’s principal investigator, with Diane Oyen and Emily Castleton joining Bent as co-principal investigators.
Bent, Castleton, Lawrence, and Oyen are also members of the AI Council at the Laboratory. The AI Council helps the Lab navigate the evolving AI landscape, build investment capacities, and forge industry and academic partnerships.
As highlighted in the Department of Energy’s Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) initiative, AI technologies will significantly enhance the contributions of laboratories to national missions. This partnership with Georgia Tech through AI4OPT is a key step towards that future.
News Contact
Breon Martin
Aug. 20, 2024
For three days, a cybercriminal unleashed a crippling ransomware attack on the futuristic city of Northbridge. The attack shut down the city’s infrastructure and severely impacted public services, until Georgia Tech cybersecurity experts stepped in to stop it.
This scenario played out this weekend at the DARPA AI Cyber Challenge (AIxCC) semi-final competition held at DEF CON 32 in Las Vegas. Team Atlanta, which included the Georgia Tech experts, was among the contest's winners.
Team Atlanta will now compete against six other teams in the final round that takes place at DEF CON 33 in August 2025. The finalists will keep their AI system and improve it over the next 12 months using the $2 million semi-final prize.
The AI systems in the finals must be open sourced and ready for immediate, real-world launch. The AIxCC final competition will award a $4 million grand prize to the ultimate champion.
Team Atlanta is made up of past and present Georgia Tech students and was put together with the help of SCP Professor Taesoo Kim. Not only did the team secure a spot in the final competition, but it also found a zero-day vulnerability in the contest.
“I am incredibly proud to announce that Team Atlanta has qualified for the finals in the DARPA AIxCC competition,” said Taesoo Kim, professor in the School of Cybersecurity and Privacy and a vice president of Samsung Research.
“This achievement is the result of exceptional collaboration across various organizations, including the Georgia Tech Research Institute (GTRI), industry partners like Samsung, and international academic institutions such as KAIST and POSTECH.”
After noticing discrepancies in the competition scoreboard, the team discovered and reported a bug in the competition itself. Their finding is known as a zero-day vulnerability because the vendor has had zero days to fix the issue.
While this didn’t earn Team Atlanta additional points, the competition organizer acknowledged the team and their finding during the closing ceremony.
“Our team, deeply rooted in Atlanta and largely composed of Georgia Tech alumni, embodies the innovative spirit and community values that define our city,” said Kim.
“With over 30 dedicated students and researchers, we have demonstrated the power of cross-disciplinary teamwork in the semi-final event. As we advance to the finals, we are committed to pushing the boundaries of cybersecurity and artificial intelligence, and I firmly believe the resulting systems from this competition will transform the security landscape in the coming year!”
The team tested their cyber reasoning system (CRS), dubbed Atlantis, on software used for data management, website support, healthcare systems, supply chains, electrical grids, transportation, and other critical infrastructures.
Atlantis is a next-generation, bug-finding and fixing system that can hunt bugs in multiple coding languages. The system immediately issues accurate software patches without any human intervention.
AIxCC is a Pentagon-backed initiative that was announced in August 2023 and will award up to $20 million in prize money throughout the competition. Team Atlanta was among the 42 teams that qualified for the semi-final competition earlier this year.
News Contact
John Popham
Communications Officer II at the School of Cybersecurity and Privacy
Aug. 09, 2024
A research group is calling for internet and social media moderators to strengthen their detection and intervention protocols for violent speech.
Their study of language detection software found that algorithms struggle to differentiate anti-Asian violence-provoking speech from general hate speech. Left unchecked, threats of violence online can go unnoticed and turn into real-world attacks.
Researchers from Georgia Tech and the Anti-Defamation League (ADL) teamed together in the study. They made their discovery while testing natural language processing (NLP) models trained on data they crowdsourced from Asian communities.
“The Covid-19 pandemic brought attention to how dangerous violence-provoking speech can be. There was a clear increase in reports of anti-Asian violence and hate crimes,” said Gaurav Verma, a Georgia Tech Ph.D. candidate who led the study.
“Such speech is often amplified on social platforms, which in turn fuels anti-Asian sentiments and attacks.”
Violence-provoking speech differs from more commonly studied forms of harmful speech, like hate speech. While hate speech denigrates or insults a group, violence-provoking speech implicitly or explicitly encourages violence against targeted communities.
Humans can define and characterize violent speech as a subset of hateful speech. However, computer models struggle to tell the difference due to subtle cues and implications in language.
The researchers tested five different NLP classifiers and analyzed their F1 scores, a standard measure of a model's performance. The classifiers scored 0.89 when detecting hate speech but only 0.69 when detecting violence-provoking speech, highlighting a notable gap in the accuracy and reliability of these tools.
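For context on the metric (the counts below are invented for illustration, not taken from the study): F1 is the harmonic mean of precision and recall, so a drop from 0.89 to 0.69 reflects substantially more missed or spurious detections.

```python
def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: a detector that misses many violence-provoking
# posts (high fn) drops well below one with balanced errors.
print(round(f1_score(tp=80, fp=10, fn=10), 2))  # → 0.89
print(round(f1_score(tp=55, fp=10, fn=45), 2))  # → 0.67
```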
The study stresses the importance of developing more refined methods for detecting violence-provoking speech. Internet misinformation and inflammatory rhetoric escalate tensions that lead to real-world violence.
The Covid-19 pandemic exemplified how public health crises intensify this behavior, helping inspire the study. The group cited reports that anti-Asian crime across the U.S. increased by 339% in 2021, driven in part by malicious content blaming Asians for the virus.
The researchers believe their findings show the effectiveness of community-centric approaches to problems dealing with harmful speech. These approaches would enable informed decision-making between policymakers, targeted communities, and developers of online platforms.
Along with stronger models for detecting violence-provoking speech, the group discusses a direct solution: a tiered penalty system on online platforms. Tiered systems align penalties with severity of offenses, acting as both deterrent and intervention to different levels of harmful speech.
“We believe that we cannot tackle a problem that affects a community without involving people who are directly impacted,” said Jiawei Zhou, a Ph.D. student who studies human-centered computing at Georgia Tech.
“By collaborating with experts and community members, we ensure our research builds on front-line efforts to combat violence-provoking speech while remaining rooted in real experiences and needs of the targeted community.”
The researchers trained the NLP classifiers they tested on a dataset crowdsourced from a survey of 120 participants who self-identified as members of Asian communities. In the survey, participants labeled 1,000 posts from X (formerly Twitter) as containing violence-provoking speech, hateful speech, or neither.
Since characterizing violence-provoking speech is not universal, the researchers created a specialized codebook for survey participants. The participants studied the codebook before their survey and used an abridged version while labeling.
To create the codebook, the group used an initial set of anti-Asian keywords to scan posts on X from January 2020 to February 2023. This tactic yielded 420,000 posts containing harmful, anti-Asian language.
The researchers then filtered the batch through new keywords and phrases. This refined the sample to 4,000 posts that potentially contained violence-provoking content. Keywords and phrases were added to the codebook while the filtered posts were used in the labeling survey.
The team validated its codebook through discussion and pilot testing. During the pilot, testers labeled 100 posts from X to confirm the sound design of the Asian community survey. The group also sent the codebook to the ADL for review and incorporated the organization's feedback.
“One of the major challenges in studying violence-provoking content online is effective data collection and funneling down because most platforms actively moderate and remove overtly hateful and violent material,” said Tech alumnus Rynaa Grover (M.S. CS 2024).
“To address the complexities of this data, we developed an innovative pipeline that deals with the scale of this data in a community-aware manner.”
Emphasis on community input extended into collaboration within Georgia Tech’s College of Computing. Faculty members Srijan Kumar and Munmun De Choudhury oversaw the research that their students spearheaded.
Kumar, an assistant professor in the School of Computational Science and Engineering, advises Verma and Grover. His expertise is in artificial intelligence, data mining, and online safety.
De Choudhury is an associate professor in the School of Interactive Computing and advises Zhou. Their research connects societal mental health and social media interactions.
The Georgia Tech researchers partnered with the ADL, a leading non-governmental organization that combats real-world hate and extremism. ADL researchers Binny Mathew and Jordan Kraemer co-authored the paper.
The group will present its paper at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), which takes place in Bangkok, Thailand, Aug. 11-16.
ACL 2024 accepted 40 papers written by Georgia Tech researchers. Of the 12 Georgia Tech faculty who authored papers accepted at the conference, nine are from the College of Computing, including Kumar and De Choudhury.
“It is great to see that the peers and research community recognize the importance of community-centric work that provides grounded insights about the capabilities of leading language models,” Verma said.
“We hope the platform encourages more work that presents community-centered perspectives on important societal problems.”
Visit https://sites.gatech.edu/research/acl-2024/ for news and coverage of Georgia Tech research presented at ACL 2024.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Aug. 08, 2024
Social media users may need to think twice before hitting that “Post” button.
A new large-language model (LLM) developed by Georgia Tech researchers can help them filter content that could risk their privacy and offer alternative phrasing that keeps the context of their posts intact.
According to a new paper that will be presented at the 2024 Association for Computational Linguistics (ACL) conference, social media users should tread carefully about the information they self-disclose in their posts.
Many people use social media to express their feelings about their experiences without realizing the risks to their privacy. For example, a person revealing their gender identity or sexual orientation may be subject to doxing and harassment from outside parties.
Others want to express their opinions without their employers or families knowing.
Ph.D. student Yao Dou and associate professors Alan Ritter and Wei Xu originally set out to study user awareness of self-disclosure privacy risks on Reddit. Working with anonymous users, they created an LLM to detect at-risk content.
While the study boosted user awareness of the personal information they revealed, many called for an intervention. They asked the researchers for assistance to rewrite their posts so they didn’t have to be concerned about privacy.
The researchers revamped the model to suggest alternative phrases that reduce the risk of privacy invasion.
One user disclosed, “I’m 16F I think I want to be a bi M.” The new tool offered alternative phrases such as:
- “I am exploring my sexual identity.”
- “I have a desire to explore new options.”
- “I am attracted to the idea of exploring different gender identities.”
Dou said the challenge is making sure the model provides suggestions that don’t change or distort the desired context of the post.
“That’s why instead of providing one suggestion, we provide three suggestions that are different from each other, and we allow the user to choose which one they want,” Dou said. “In some cases, the discourse information is important to the post, and in that case, they can choose what to abstract.”
WEIGHING THE RISKS
The researchers sampled 10,000 Reddit posts from a pool of 4 million that met their search criteria. They annotated those posts and created 19 categories of self-disclosures, including age, sexual orientation, gender, race or nationality, and location.
From there, they worked with Reddit users to test the effectiveness and accuracy of their model, with 82% giving positive feedback.
However, a contingent thought the model was “oversensitive,” highlighting content they did not believe posed a risk.
Ultimately, the researchers say users must decide what they will post.
“It’s a personal decision,” Ritter said. “People need to look at this and think about what they’re writing and decide between this tradeoff of what benefits they are getting from sharing information versus what privacy risks are associated with that.”
Xu acknowledged that future work on the project should include a metric that gives users a better idea of what types of content are more at risk than others.
“It’s kind of the way passwords work,” she said. “Years ago, they never told you your password strength, and now there’s a bar telling you how good your password is. Then you realize you need to add a special character and capitalize some letters, and that’s become a standard. This is telling the public how they can protect themselves. The risk isn’t zero, but it helps them think about it.”
WHAT ARE THE CONSEQUENCES?
While doxing and harassment are the most likely consequences of posting sensitive personal information, especially for those who belong to minority groups, the researchers say users have other privacy concerns.
Users should know that when they draft posts on a site, their input can be extracted by the site’s application programming interface (API). If that site has a data breach, a user’s personal information could fall into unwanted hands.
“I think we should have a path toward having everything work locally on the user’s computer, so it doesn’t rely on any external APIs to send this data off their local machine,” Ritter said.
Ritter added that users could also be targets of popular scams like phishing without ever knowing it.
“People trying targeted phishing attacks can learn personal information about people online that might help them craft more customized attacks that could make users vulnerable,” he said.
The safest way to avoid a breach of privacy is to stay off social media. But Xu said that’s impractical as there are resources and support these sites can provide that users may not get from anywhere else.
“We want people who may be afraid of social media to use it and feel safe when they post,” she said. “Maybe the best way to get an answer to a question is to ask online, but some people don’t feel comfortable doing that, so a tool like this can make them more comfortable sharing without much risk.”
For more information about Georgia Tech research at ACL, please visit https://sites.gatech.edu/research/acl-2024/.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
Aug. 01, 2024
A Georgia Tech researcher will continue to mitigate harmful post-deployment effects created by artificial intelligence (AI) as he joins the 2024-2025 cohort of fellows selected by the Berkman-Klein Center (BKC) for Internet and Society at Harvard University.
Upol Ehsan is the first Georgia Tech graduate selected by BKC. As a fellow, he will contribute to its mission of exploring and understanding cyberspace, focusing on AI, social media, and university discourse.
Entering its 25th year, the BKC Harvard fellowship program addresses pressing issues and produces impactful research that influences academia and public policy. It offers a global perspective, a vibrant intellectual community, and significant funding and resources that attract top scholars and leaders.
The program is highly competitive and sought after by early career candidates and veteran academic and industry professionals. Cohorts hail from numerous backgrounds, including law, computer science, sociology, political science, neuroscience, philosophy, and media studies.
“Having the opportunity to join such a talented group of people and working with them is a treat,” Ehsan said. “I’m looking forward to adding to the prismatic network of BKC Harvard and learning from the cohesively diverse community.”
While at Georgia Tech, Ehsan expanded the field of explainable AI (XAI) and pioneered a subcategory he labeled human-centered explainable AI (HCXAI). Several of his papers introduced novel and foundational concepts into that subcategory of XAI.
Ehsan works with Professor Mark Riedl in the School of Interactive Computing and the Human-centered AI and Entertainment Intelligence Lab.
Ehsan says he will continue to work on research he introduced in his 2022 paper The Algorithmic Imprint, which shows how the potential harm from algorithms can linger even after an algorithm is no longer used. His research has informed the United Nations’ algorithmic reparations policies and has been incorporated into the National Institute of Standards and Technology AI Risk Management Framework.
“It’s a massive honor to receive this recognition of my work,” Ehsan said. “The Algorithmic Imprint remains a globally applicable Responsible AI concept developed entirely from the Global South. This recognition is dedicated to the participants who made this work possible. I want to take their stories even further."
While at BKC Harvard, Ehsan will develop a taxonomy of potentially harmful AI effects after a model is no longer used. He will also design a process to anticipate these effects and create interventions. He said his work addresses an “accountability blindspot” in responsible AI, which tends to focus on potential harmful effects created during AI deployment.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
Jul. 22, 2024
A partnership to advance AI and mathematical optimization for pressing energy transformations in Latin America and the U.S. has formed between the NSF Artificial Intelligence (AI) Research Institute for Advances in Optimization (AI4OPT) at Georgia Tech and PSR, Inc. - Energy Consulting and Analytics.
PSR is a global leader in analytical solutions for the energy sector, providing innovative technical consultancy services and state-of-the-art power systems planning software. Their tools are used for detailed modeling of entire countries or regions and are utilized in over 70 countries. Together with AI4OPT, they aim to leverage advancements in AI and mathematical optimization to address pressing energy transformations in Latin America and the U.S.
Latin America boasts abundant renewable energy resources, especially hydropower, leading to one of the largest shares of renewables in its energy mix. However, expanding renewable energy capacity in Latin America and the U.S. to meet decarbonization goals will require system operational advances and new technologies that can adapt to current needs.
One focus of this collaboration will be studying how to efficiently incorporate pumped storage into the resource mix as a solution for long-duration storage. These plants act as large batteries, pumping water to higher reservoirs during low demand periods and generating electricity during high demand with minimal energy loss over time. This technology supports both short-term and long-term energy storage, making it crucial for managing the variability of intermittent renewables like solar and wind.
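A back-of-envelope sketch of the storage economics described above (all prices and the efficiency figure below are hypothetical, chosen only to illustrate the mechanism):

```python
# Illustrative numbers only, not from the article: pumped storage buys
# energy cheap, stores it as elevated water, and sells it back at peak
# prices. It loses some energy to round-trip inefficiency but almost
# none to self-discharge over long storage periods.
ROUND_TRIP_EFF = 0.80     # typical pumped-hydro range is roughly 0.70-0.85
off_peak_price = 20.0     # $/MWh, hypothetical
peak_price = 60.0         # $/MWh, hypothetical

energy_pumped = 1000.0                           # MWh bought off-peak
energy_recovered = energy_pumped * ROUND_TRIP_EFF  # MWh sold at peak

cost = energy_pumped * off_peak_price
revenue = energy_recovered * peak_price
profit = revenue - cost
print(profit)   # → 28000.0 : positive margin despite the 20% loss
```

The same arithmetic is what makes long-duration storage attractive for absorbing cheap midday solar and covering evening peaks.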
The complex and large-scale nature of the expansion problem, exacerbated by inherent uncertainty and time-coupled decisions, traditionally requires sophisticated optimization techniques. AI innovations now provide faster solutions and better representations of non-linear dynamics, leading to more cost-effective operations and optimized energy mix selection for the energy transition.
This collaboration plans to use machine learning to enhance power system operators' ability to perform faster security checks and screenings. As renewable energy sources introduce more variability, traditional methods struggle with the increasing number of scenarios needed for grid stability. Machine learning offers a solution by expediting these analyses, supporting the integration of more renewable energy into the system.
About PSR
PSR is a global provider of analytical solutions for the energy sector, spanning from insightful and innovative technical consultancy services to state-of-the-art specialized power systems planning software applied to the detailed modeling of entire countries or regions. With its products applied in over 70 countries, PSR contributes to the research and development of optimization and data analytics tools that guarantee reliable, least-cost operation of power systems, helping countries achieve their decarbonization targets.
About AI4OPT
The Artificial Intelligence (AI) Research Institute for Advances in Optimization, or AI4OPT, aims to deliver a paradigm shift in automated decision-making at massive scales by fusing AI and Mathematical Optimization (MO) to achieve breakthroughs that neither field can achieve independently. The Institute is driven by societal challenges in energy, logistics and supply chains, resilience and sustainability, and circuit design and control. To address the widening gap in job opportunities, the Institute delivers an innovative longitudinal education and workforce development program.
News Contact
Breon Martin
AI Research Communications Manager
Georgia Tech
Jul. 22, 2024
Clark Atlanta University (CAU), in collaboration with Georgia Tech’s NSF Artificial Intelligence (AI) Research Institute for Advances in Optimization (AI4OPT), has been awarded a four-year $2.79 million grant (Award ID 2402493) by the National Science Foundation (NSF) to create an AI Hub. This collaborative effort aims to advance AI education and research at minority-serving institutions, particularly historically Black colleges and universities (HBCUs).
This initiative, part of the NSF ExpandAI program, aims to boost minority-serving institution participation in AI research, education, and workforce development through capacity-building projects and partnerships within the NSF-led National AI Research Institutes ecosystem.
Building an AI community is no easy feat, but the CAU-GT/AI4OPT collaboration is prepared to meet the challenge. The project, known as AIHUB@CAU, will be led by principal investigator Charles B. Pierre, associate professor in CAU's Department of Mathematical Sciences.
"The mission of the grant aligns with the AI4OPT Faculty Training Program, which focuses on strategies to increase minority participation in AI research programs from HBCUs to other minority-serving institutions," said Pierre, who also leads the Educational and Diversity Initiatives at AI4OPT. "Our goal is to ensure diverse representation in the AI field."
The collaboration will use existing educational resources and infrastructure to build centers of excellence in AI and a community of empowered Black AI researchers.
"We anticipate challenges in developing coursework, including finding qualified industry professionals to teach and preparing academic professors unfamiliar with AI," Pierre said. "Our aim is to establish Ph.D. programs at CAU and position the university as a hub for AI training, addressing these issues head-on."
AIHUB@CAU will integrate industry partnerships to accelerate curriculum development and real-world applications. It expands AI education beyond machine learning to encompass decision-making and applications in fields like business analytics, cyber-physical security, and operations research.
Partially funded through NSF's Louis Stokes Alliances for Minority Participation program, this award underscores NSF's commitment to diversity in STEM fields through impactful educational and research initiatives.
"Establishing programs at institutions like Clark Atlanta University and AI4OPT at Georgia Tech provides students with essential resources and tools to succeed in this ever-evolving field," Pierre noted.
Goals and Structure of the AI Education Program
Main Goals of Creating AI Courses at the Undergraduate and Graduate Levels:
- Close the gap in AI graduates from HBCUs at the undergraduate and graduate levels.
- Prepare HBCU students for the AI workforce.
- Align with the vision of AI4OPT at Georgia Tech to "democratize access to AI education."
Impact on Students' Career Prospects and the AI Research Community:
- Undergraduate courses and programs will prepare students for entry-level positions in the field.
- Graduate courses and programs will prepare students for research and participation in the AI research community.
Role and Contribution:
- AI4OPT at Georgia Tech will assist CAU with the development of both undergraduate and graduate courses and programs.
- Offer research opportunities to CAU students at the undergraduate and graduate levels.
- AI4OPT at Georgia Tech will be a partner in the established AI Research Hub.
Support for Development of MS and Ph.D. Courses:
- Use current courses at Georgia Tech as a template.
- Use the courses offered through the Faculty Training Program (FTP) of AI4OPT.
Foundational AI Courses:
- Courses already taught by CAU faculty in the AI4OPT FTP.
- Courses available at Georgia Tech.
- New courses to be developed by AIHUB@CAU based on Intel material, focusing on computer vision and natural language processing.
- Courses in applied optimization developed by AI4OPT.
- New use-inspired AI courses teaching applications of AI in various domains, such as supply chains, security, chemistry, and manufacturing.
Research Opportunities:
- The Undergraduate Research Program (URP) will provide students with early exposure to AI research, including summer internships at Georgia Tech and other AI4OPT sites.
- The graduate programs will include an 18-month non-thesis master's degree with a summer internship and capstone project, and a two-year thesis master's degree supported by a six-month research project.
Structure of the New Master in AI Program:
- Courses in five categories to support the master’s program:
- Existing courses at CAU taught in the AI4OPT FTP.
- Courses available at Georgia Tech.
- New courses based on Intel material.
- Applied optimization courses developed by AI4OPT.
- New courses developed by AIHUB@CAU focusing on AI applications in various domains.
Collaborations and Internships:
- Joint supervision of research projects by CAU and AI4OPT faculty.
- Summer internships starting in 2026.
- Capstone projects facilitated by Georgia Tech and industrial partners.
About Georgia Tech
The Georgia Institute of Technology, or Georgia Tech, is a top 10 public research university developing leaders who advance technology and improve the human condition. The Institute offers business, computing, design, engineering, liberal arts, and sciences degrees. Its nearly 40,000 students, representing 50 states and 149 countries, study at the main campus in Atlanta, at international campuses, and through distance and online learning. As a leading technological university, Georgia Tech is an engine of economic development for Georgia, the Southeast, and the nation, conducting more than $1 billion in research annually for government, industry, and society.
About Clark Atlanta University
Clark Atlanta University was formed through the consolidation of Atlanta University and Clark College, both of which hold unique places in the annals of African American history. Atlanta University, established in 1865 by the American Missionary Association, was the nation’s first institution to award graduate degrees to African Americans. Clark College, established four years later in 1869, was the nation’s first four-year liberal arts college to serve a primarily African American student population. CAU is the largest of the 37 UNCF member institutions. Today, with over 4,000 students, CAU is the largest of the four institutions (CAU, Morehouse College, Spelman College, and Morehouse School of Medicine) that comprise the Atlanta University Center Consortium.
About National Science Foundation
The U.S. National Science Foundation propels the nation forward by advancing fundamental research in all fields of science and engineering. NSF supports research and people by providing facilities, instruments and funding to support their ingenuity and sustain the U.S. as a global leader in research and innovation. With a fiscal year 2023 budget of $9.5 billion, NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and institutions. Each year, NSF receives more than 40,000 competitive proposals and makes about 11,000 new awards. Those awards include support for cooperative research with industry, Arctic and Antarctic research and operations, and U.S. participation in international scientific efforts.
News Contact
Breon Martin
AI Research Communications Manager
Georgia Tech
Jul. 15, 2024
Hepatic, or liver, disease affects more than 100 million people in the U.S. About 4.5 million adults (1.8%) have been diagnosed with liver disease, but it is estimated that between 80 and 100 million adults in the U.S. have undiagnosed fatty liver disease in varying stages. Over time, undiagnosed and untreated hepatic diseases can lead to cirrhosis, a severe scarring of the liver that cannot be reversed.
Most hepatic diseases are chronic conditions that will be present over the life of the patient, but early detection improves overall health and the ability to manage specific conditions over time. Additionally, assessing patients over time allows for effective treatments to be adjusted as necessary. The standard protocol for diagnosis, as well as follow-up tissue assessment, is a biopsy after the return of an abnormal blood test, but biopsies are time-consuming and pose risks for the patient. Several non-invasive imaging techniques have been developed to assess the stiffness of liver tissue, an indication of scarring, including magnetic resonance elastography (MRE).
MRE combines elements of ultrasound and MRI imaging to create a visual map showing gradients of stiffness throughout the liver and is increasingly used to diagnose hepatic issues. MRE exams, however, can fail for many reasons, including patient motion, patient physiology, imaging issues, and mechanical issues such as improper wave generation or propagation in the liver. Determining the success of an MRE exam currently depends on visual inspection by technologists and radiologists. With increasing work demands and workforce shortages, an accurate, automated way to classify image quality would streamline this review and reduce the need for repeat scans.
Professor Jun Ueda in the George W. Woodruff School of Mechanical Engineering and robotics Ph.D. student Heriberto Nieves, working with a team from the Icahn School of Medicine at Mount Sinai, have successfully applied deep learning techniques for accurate, automated quality control image assessment. The research, “Deep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results,” was published in the Journal of Magnetic Resonance Imaging.
Using five deep learning models, the best-performing ensemble achieved 92% accuracy on retrospective MRE images of patients with varied liver stiffnesses, and it returned its assessment within seconds. This rapid feedback lets the technologist adjust the hardware or patient orientation and re-scan in a single session, rather than requiring patients to return for costly and time-consuming repeat scans because of low-quality initial images.
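The article does not specify how the five models' outputs were combined. One common ensembling scheme, sketched below with hypothetical names and toy numbers (not the paper's actual method), averages each model's class probabilities and takes the most likely class:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model class probabilities and take the argmax.

    prob_list: list of (n_images, n_classes) arrays, one per trained model.
    Returns the predicted class index for each image.
    """
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)  # (n_images, n_classes)
    return np.argmax(avg, axis=1)

# Toy example: three models scoring two images as [pass, fail] quality.
m1 = np.array([[0.9, 0.1], [0.4, 0.6]])
m2 = np.array([[0.8, 0.2], [0.3, 0.7]])
m3 = np.array([[0.7, 0.3], [0.6, 0.4]])
preds = ensemble_predict([m1, m2, m3])  # → [0, 1]
```

Averaging probabilities (rather than majority-voting hard labels) lets a confident model outvote two uncertain ones, which often stabilizes ensembles of diverse architectures.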
This new research is a step toward streamlining the review pipeline for MRE using deep learning techniques, which have remained unexplored compared to other medical imaging modalities. The research also provides a helpful baseline for future avenues of inquiry, such as assessing the health of the spleen or kidneys. It may also be applied to automation for image quality control for monitoring non-hepatic conditions, such as breast cancer or muscular dystrophy, in which tissue stiffness is an indicator of initial health and disease progression. Ueda, Nieves, and their team hope to test these models on Siemens Healthineers magnetic resonance scanners within the next year.
Publication
Nieves-Vazquez, H.A., Ozkaya, E., Meinhold, W., Geahchan, A., Bane, O., Ueda, J. and Taouli, B. (2024), Deep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results. J Magn Reson Imaging. https://doi.org/10.1002/jmri.29490
Prior Work
Robotically Precise Diagnostics and Therapeutics for Degenerative Disc Disorder
Related Material
Editorial for “Deep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results”
News Contact
Christa M. Ernst
Research Communications Program Manager
Topic Expertise: Robotics, Data Sciences, Semiconductor Design & Fab
Jul. 11, 2024
New research from Georgia Tech is giving scientists more control options over generative artificial intelligence (AI) models in their studies. Greater customization from this research can lead to discovery of new drugs, materials, and other applications tailor-made for consumers.
The Tech group dubbed its method PRODIGY (PROjected DIffusion for controlled Graph Generation). PRODIGY enables diffusion models to generate 3D images of complex structures, such as molecules from chemical formulas.
Scientists in pharmacology, materials science, social network analysis, and other fields can use PRODIGY to simulate large-scale networks. By generating 3D molecules from multiple graph datasets, the group proved that PRODIGY could handle complex structures.
In keeping with its name, PRODIGY is the first plug-and-play machine learning (ML) approach to controllable graph generation in diffusion models. This method overcomes a known limitation inhibiting diffusion models from broad use in science and engineering.
“We hope PRODIGY enables drug designers and scientists to generate structures that meet their precise needs,” said Kartik Sharma, lead researcher on the project. “It should also inspire future innovations to precisely control modern generative models across domains.”
PRODIGY works on diffusion models, a class of generative AI models used for computer vision tasks. While well suited to image creation and denoising, diffusion methods have been limited because they cannot reliably generate graph representations that satisfy custom parameters a user provides.
PRODIGY empowers any pre-trained diffusion model for graph generation to produce graphs that meet specific, user-given constraints. This capability means, as an example, that a drug designer could use any diffusion model to design a molecule with a specific number of atoms and bonds.
The group tested PRODIGY on two molecular and five generic datasets to generate custom 2D and 3D structures. This approach ensured the method could create such complex structures, accounting for the atoms, bonds, structures, and other properties at play in molecules.
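PRODIGY's actual projection operators and constraint sets are defined in the paper; the sketch below only illustrates the plug-and-play idea in miniature. A stand-in denoising step alternates with a projection onto a user-given constraint (here, a hypothetical "at most k edges" constraint on a soft adjacency matrix), so every intermediate sample already satisfies the constraint:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, t):
    # Stand-in for a pre-trained diffusion model's reverse (denoising) step;
    # a real model would predict and remove noise learned from data.
    return 0.9 * x + 0.1 * rng.normal(size=x.shape) / (t + 1)

def project_max_edges(adj, k):
    """Project a soft adjacency matrix onto 'at most k edges' by keeping
    only the k largest upper-triangular weights (hypothetical projection;
    the paper defines its own constraints, e.g. atom counts or valencies)."""
    iu = np.triu_indices(adj.shape[0], k=1)
    vals = adj[iu]
    keep = np.argsort(vals)[-k:]          # indices of the k largest weights
    out = np.zeros_like(adj)
    out[iu[0][keep], iu[1][keep]] = vals[keep]
    return out + out.T                    # symmetric graph, zero diagonal

# Plug-and-play projected sampling: alternate denoising with projection.
x = rng.normal(size=(6, 6))               # noisy 6-node soft adjacency
for t in reversed(range(50)):
    x = project_max_edges(denoise_step(x, t), k=5)
```

Because the projection wraps around the denoiser rather than modifying it, the same constraint can in principle be bolted onto any pre-trained diffusion model, which is what "plug-and-play" refers to.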
Molecular generation experiments with PRODIGY directly impact chemistry, biology, pharmacology, materials science, and other fields. The researchers say PRODIGY has potential in other fields using large networks and datasets, such as social sciences and telecommunications.
These capabilities led to PRODIGY’s acceptance for presentation at the upcoming International Conference on Machine Learning (ICML 2024), the leading international academic conference on ML, taking place July 21-27 in Vienna.
Assistant Professor Srijan Kumar is Sharma’s advisor and paper co-author. They worked with Tech alumnus Rakshit Trivedi (Ph.D. CS 2020), a Massachusetts Institute of Technology postdoctoral associate.
Twenty-four Georgia Tech faculty from the Colleges of Computing and Engineering will present 40 papers at ICML 2024. Kumar is one of six faculty representing the School of Computational Science and Engineering (CSE) at the conference.
Sharma is a fourth-year Ph.D. student studying computer science. He researches ML models for structured data that are reliable and easily controlled by users. While preparing for ICML, Sharma has been interning this summer at Microsoft Research in the Research for Industry lab.
“ICML is the pioneering conference for machine learning,” said Kumar. “A strong presence at ICML from Georgia Tech illustrates the ground-breaking research conducted by our students and faculty, including those in my research group.”
Visit https://sites.gatech.edu/research/icml-2024 for news and coverage of Georgia Tech research presented at ICML 2024.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Jul. 11, 2024
A new machine learning (ML) model created at Georgia Tech is helping neuroscientists better understand communications between brain regions. Insights from the model could lead to personalized medicine, better brain-computer interfaces, and advances in neurotechnology.
The Georgia Tech group combined two current ML methods into their hybrid model called MRM-GP (Multi-Region Markovian Gaussian Process).
Neuroscientists who use MRM-GP learn more about communications and interactions within the brain. This in turn improves understanding of brain functions and disorders.
“Clinically, MRM-GP could enhance diagnostic tools and treatment monitoring by identifying and analyzing neural activity patterns linked to various brain disorders,” said Weihan Li, the study’s lead researcher.
“Neuroscientists can leverage MRM-GP for its robust modeling capabilities and efficiency in handling large-scale brain data.”
MRM-GP reveals where and how communication travels across brain regions.
The group tested MRM-GP using spike trains and local field potential recordings, two kinds of measurements of brain activity. These tests produced representations that illustrated directional flow of communication among brain regions.
Experiments also disentangled brainwaves, called oscillatory interactions, into organized frequency bands. MRM-GP’s hybrid configuration allows it to model frequencies and phase delays within the latent space of neural recordings.
MRM-GP combines the strengths of two existing methods: the Gaussian process (GP) and linear dynamical systems (LDS). The researchers say that MRM-GP is essentially an LDS that mirrors a GP.
LDS is a computationally efficient and cost-effective method, but on its own it lacks the expressive power to represent brain activity. The GP component supplies that power, facilitating the discovery of variables in frequency bands and communication directions in the brain.
Converting GP outputs into an LDS is a difficult task in ML. The group overcame this challenge by instilling separability in the model’s multi-region kernel. Separability establishes a connection between the kernel and LDS while modeling communication between brain regions.
Through this approach, MRM-GP overcomes challenges facing both the neuroscience and ML fields. The model helps illuminate interregional brain communication, and it does so by bridging a gap between GP and LDS, a feat not previously accomplished in ML.
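The multi-region kernel and its state-space form are specific to the paper, but the GP-to-LDS correspondence it exploits can be illustrated in one dimension with hypothetical parameters: a GP with an exponential kernel is Markovian, so it is exactly a first-order LDS, and simulating that LDS reproduces the kernel's covariances.

```python
import numpy as np

rng = np.random.default_rng(1)

# A GP with exponential kernel k(tau) = s2 * exp(-|tau| / ell) is Markovian,
# so it is exactly the stationary first-order linear dynamical system
#   x[t+1] = a * x[t] + e[t],  a = exp(-dt / ell),  Var(e) = s2 * (1 - a**2).
s2, ell, dt = 2.0, 5.0, 1.0   # illustrative variance, lengthscale, time step
a = np.exp(-dt / ell)
q = s2 * (1.0 - a**2)

T = 200_000
x = np.empty(T)
x[0] = rng.normal(scale=np.sqrt(s2))   # draw the initial state at stationarity
for t in range(T - 1):
    x[t + 1] = a * x[t] + rng.normal(scale=np.sqrt(q))

# The simulated LDS should reproduce the kernel: compare the empirical
# lag-1 autocovariance to k(dt) = s2 * exp(-dt / ell).
emp = float(np.mean(x[:-1] * x[1:]))
ker = s2 * np.exp(-dt / ell)
```

Running the LDS recursion costs O(T), while exact GP inference scales as O(T^3), which is the computational payoff of converting the kernel into Markovian form before fitting large neural recordings.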
“The introduction of MRM-GP provides a useful tool to model and understand complex brain region communications,” said Li, a Ph.D. student in the School of Computational Science and Engineering (CSE).
“This marks a significant advancement in both neuroscience and machine learning.”
Fellow doctoral students Chengrui Li and Yule Wang co-authored the paper with Li. School of CSE Assistant Professor Anqi Wu advises the group.
Each MRM-GP student pursues a different Ph.D. degree offered by the School of CSE. W. Li studies computer science, C. Li studies computational science and engineering, and Wang studies machine learning. The school also offers Ph.D. degrees in bioinformatics and bioengineering.
Wu is a 2023 recipient of the Sloan Research Fellowship for neuroscience research. Her work straddles two of the School’s five research areas: machine learning and computational bioscience.
MRM-GP will be featured at the world’s top conference on ML and artificial intelligence. The group will share their work at the International Conference on Machine Learning (ICML 2024), which will be held July 21-27 in Vienna.
ICML 2024 also accepted for presentation a second paper from Wu’s group intersecting neuroscience and ML. The same authors will present A Differentiable Partially Observable Generalized Linear Model with Forward-Backward Message Passing.
Twenty-four Georgia Tech faculty from the Colleges of Computing and Engineering will present 40 papers at ICML 2024. Wu is one of six faculty representing the School of CSE who will present eight total papers.
The group’s ICML 2024 presentations exemplify Georgia Tech’s focus on neuroscience research as a strategic initiative.
Wu is an affiliated faculty member with the Neuro Next Initiative, a new interdisciplinary program at Georgia Tech that will lead research in neuroscience, neurotechnology, and society. The University System of Georgia Board of Regents recently approved a new neuroscience and neurotechnology Ph.D. program at Georgia Tech.
“Presenting papers at international conferences like ICML is crucial for our group to gain recognition and visibility, facilitates networking with other researchers and industry professionals, and offers valuable feedback for improving our work,” Wu said.
“It allows us to share our findings, stay updated on the latest developments in the field, and enhance our professional development and public speaking skills.”
Visit https://sites.gatech.edu/research/icml-2024 for news and coverage of Georgia Tech research presented at ICML 2024.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu