Sep. 03, 2024
The Institute for Matter and Systems (IMS) has received $700,000 in funding from the National Science Foundation (NSF) for two education and outreach programs.
The awards will support the Research Experience for Undergraduates (REU) and Research Experience for Teachers (RET) programs at Georgia Tech. The REU summer internship program provides undergraduate students from two- and four-year programs the chance to perform cutting-edge research at the forefront of nanoscale science and engineering. The RET program for high school teachers and technical college faculty offers a paid opportunity to experience the excitement of nanotechnology research and to share this experience in their classrooms.
“This NSF funding allows us to be able to do more with the programs,” said Mikkel Thomas, associate director for education and outreach. “These are programs that have existed in the past, but we haven’t had external funding for the last three years. The NSF support allows us to do more — bring more students into the program or increase the RET stipends.”
In addition to the REU and RET programs, IMS offers short courses and workshops focused on professional development, instructional labs for undergraduate and graduate students, a certificate for veterans in microelectronics and nano-manufacturing, and community engagement activities such as the Atlanta Science Festival.
News Contact
Amelia Neumeister | Communications Program Manager
Aug. 30, 2024
The Cloud Hub, a key initiative of the Institute for Data Engineering and Science (IDEaS) at Georgia Tech, recently concluded a successful Call for Proposals focused on advancing the field of Generative Artificial Intelligence (GenAI). This initiative, made possible by generous gift funding from Microsoft, aims to push the boundaries of GenAI research by supporting projects that explore both foundational aspects and innovative applications of this cutting-edge technology.
Call for Proposals: A Gateway to Innovation
Launched in early 2024, the Call for Proposals invited researchers from across Georgia Tech to submit their innovative ideas on GenAI. The scope was broad, encouraging proposals that spanned foundational research, system advancements, and novel applications in various disciplines, including arts, sciences, business, and engineering. A special emphasis was placed on projects that addressed responsible and ethical AI use.
The response from the Georgia Tech research community was overwhelming, with 76 proposals submitted by teams eager to explore this transformative technology. After a rigorous selection process, eight projects were chosen for support. Each awarded team will also benefit from access to Microsoft’s Azure cloud resources.
Recognizing Microsoft’s Generous Contribution
This successful initiative was made possible through the generous support of Microsoft, whose contribution of research resources has empowered Georgia Tech researchers to explore new frontiers in GenAI. By providing access to Azure’s advanced tools and services, Microsoft has played a pivotal role in accelerating GenAI research at Georgia Tech, enabling researchers to tackle some of the most pressing challenges and opportunities in this rapidly evolving field.
Looking Ahead: Pioneering the Future of GenAI
The awarded projects, set to commence in Fall 2024, represent a diverse array of research directions, from improving the capabilities of large language models to innovative applications in data management and interdisciplinary collaborations. These projects are expected to make significant contributions to the body of knowledge in GenAI and are poised to have a lasting impact on the industry and beyond.
IDEaS and the Cloud Hub are committed to supporting these teams as they embark on their research journeys. The outcomes of these projects will be shared through publications and highlighted on the Cloud Hub web portal, ensuring visibility for the groundbreaking work enabled by this initiative.
Congratulations to the Fall 2024 Winners
- Annalisa Bracco | EAS “Modeling the Dispersal and Connectivity of Marine Larvae with GenAI Agents” [proposal co-funded with support from the Brook Byers Institute for Sustainable Systems]
- Yunan Luo | CSE “Designing New and Diverse Proteins with Generative AI”
- Kartik Goyal | IC “Generative AI for Greco-Roman Architectural Reconstruction: From Partial Unstructured Archaeological Descriptions to Structured Architectural Plans”
- Victor Fung | CSE “Intelligent LLM Agents for Materials Design and Automated Experimentation”
- Noura Howell | LMC “Applying Generative AI for STEM Education: Supporting AI literacy and community engagement with marginalized youth”
- Neha Kumar | IC “Towards Responsible Integration of Generative AI in Creative Game Development”
- Maureen Linden | Design “Best Practices in Generative AI Used in the Creation of Accessible Alternative Formats for People with Disabilities”
- Surya Kalidindi | ME & MSE “Accelerating Materials Development Through Generative AI Based Dimensionality Expansion Techniques”
- Tuo Zhao | ISyE “Adaptive and Robust Alignment of LLMs with Complex Rewards”
News Contact
Christa M. Ernst - Research Communications Program Manager
christa.ernst@research.gatech.edu
Aug. 30, 2024
Georgia Tech researcher W. Hong Yeo has been awarded a $3 million grant to help develop a new generation of engineers and scientists in the field of sustainable medical devices.
“The workforce that will emerge from this program will tackle a global challenge through sustainable innovations in device design and manufacturing,” said Yeo, Woodruff Faculty Fellow and associate professor in the George W. Woodruff School of Mechanical Engineering and the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.
The funding, from the National Science Foundation (NSF) Research Training (NRT) program, will address the environmental impacts resulting from the mass production of medical devices, including the increase in material waste and greenhouse gas emissions.
Under Yeo’s leadership, the Georgia Tech team comprises multidisciplinary faculty: Andrés García (bioengineering), HyunJoo Oh (industrial design and interactive computing), Lewis Wheaton (biology), and Josiah Hester (sustainable computing). Together, they’ll train 100 graduate students, including 25 NSF-funded trainees, who will develop reusable, reliable medical devices for a range of uses.
“We plan to educate students on how to develop medical devices using biocompatible and biodegradable materials and green manufacturing processes using low-cost printing technologies,” said Yeo. “These wearable and implantable devices will enhance disease diagnosis, therapeutics, rehabilitation, and health monitoring.”
Students in the program will be challenged by a comprehensive, multidisciplinary curriculum, with deep dives into bioengineering, public policy, physiology, industrial design, interactive computing, and medicine. And they’ll get real-world experience through collaborations with clinicians and medical product developers, working to create devices that meet the needs of patients and care providers.
The Georgia Tech NRT program aims to attract students from various backgrounds, fostering a diverse, inclusive environment in the classroom — and ultimately in the workforce.
The program will also introduce a new Ph.D. concentration in smart medical devices as part of Georgia Tech's bioengineering program, and a new M.S. program in the sustainable development of medical devices. Yeo also envisions an academic impact that extends beyond the Tech campus.
“Collectively, this NRT program's curriculum, combining methods from multiple domains, will help establish best practices in many higher education institutions for developing reliable and personalized medical devices for healthcare,” he said. “We’d like to broaden students' perspectives, move past the current technology-first mindset, and reflect the needs of patients and healthcare providers through sustainable technological solutions.”
News Contact
Jerry Grillo
Aug. 29, 2024
Between revitalized investments in America’s manufacturing infrastructure and an increased focus on AI and automation, the U.S. is experiencing a manufacturing renaissance. A key focus of this resurgence lies in improving the resiliency of supply chains in the U.S., particularly in crucial sectors like defense.
“If we were to suddenly have a seismic shift in defense manufacturing needs,” asks Aaron Stebner, professor and Eugene C. Gwaltney Jr. Chair in Manufacturing in the George W. Woodruff School of Mechanical Engineering, “do we have the supply chain and manufacturers who could meet that sudden increase in demand? How do we do that in a way that’s sustainable for long periods of time as a nation if that need arises?”
The Georgia Tech Manufacturing Institute (GTMI) officially launched the Manufacturing 4.0 Consortium in 2023 to address that need. Designed to form a network of engaged manufacturers from across the country, the Consortium serves as a key connection point between Georgia Tech and industry partners — and as fertile ground for collaborative innovation.
“By bringing us all together,” says Stebner, who serves on the board of the Consortium, “we can do bigger, more meaningful things and find unique ways and opportunities to get money flowing back to the companies and Georgia Tech.”
With over 25 founding company members, the Consortium celebrated its first official year of operation in August.
Creating a Resilient Network
The Manufacturing 4.0 Consortium originally grew out of an 18-month pilot project, funded by the Department of Defense’s Office of Local Defense Community Cooperation, that aimed to increase defense supply chain resilience, assist Georgia manufacturers in adopting new technologies, and foster collaboration by connecting manufacturers across Georgia.
Those goals and more are tackled by the Consortium’s focus on “networking, engagement, and collaboration,” says Stebner. “It's not just a consortium for Georgia Tech to take money from industry and do stuff with their money — the goal is to create new resources that enable us to collaborate in bigger ways than we could otherwise.”
To join the Consortium, industry members pay up to $10,000 annually to access its network, intellectual property, and facilities. With a 10% membership discount for Georgia businesses and a 75% discount for small businesses, the Consortium especially aims to promote growth for small Georgia manufacturers.
“Memberships come with time at the Advanced Manufacturing Pilot Facility, which we’re expanding to be this test bed for autonomous maturation of research and development,” says Stebner. “The fact that we have what’s going to be an almost $60 million facility behind us as a mechanism and a playground for all these companies is unique.”
“Having a shared use facility that is fully equipped to solve manufacturing’s most interesting challenges is not only a perk of Consortium memberships,” said Executive Director Steven Ferguson, “but it also serves as a hub for innovation in manufacturing.”
Industry Innovation
Many consortiums founded by academic institutions are primarily focused on academic research.
“The Manufacturing 4.0 consortium has an industry focus,” said Branden Kappes, founder and president of Consortium member company Contextualize LLC. “It's more about how we take this capability that, at the moment, is trapped in a lab and transition from a wonderful concept into a wonderful product.”
The Consortium achieves that translation through shared intellectual property agreements, collaborative research initiatives, and an emphasis on creating an engaged and open network of members.
“I see camaraderie inside the Manufacturing 4.0 Consortium,” says Kappes. “I see companies that overlap and compete in some areas, are complementary in others, and are willing to build a bridge to advance the capabilities of both sides and the community as a whole. That type of mentality is very exciting.”
“This is one of the most highly engaged groups I have interacted with in a professional setting,” said John Flynn, vice president of Sales at Consortium member company Endeavor 3D. “It is an incredibly dynamic melting pot of all the different facets of industry 4.0 and digital manufacturing, bringing everyone together from that part of the supply chain to create what I know will be important and value-added projects, ultimately resulting in intellectual property.”
“We are able to connect Consortium members with subject matter experts at Georgia Tech and within the Consortium who have ‘been there and done that,’” said Ferguson. “At the same time, we are working with manufacturers to create novel solutions to complex problems through research engagements. Blending all of those activities into one organization is part of the magic that is the Consortium.”
News Contact
Audra Davidson
Communications Manager
Georgia Tech Manufacturing Institute
Aug. 28, 2024
The National Science Foundation has awarded $2 million to Clark Atlanta University in partnership with the HBCU CHIPS Network, a collaborative effort involving historically Black colleges and universities (HBCUs), government agencies, academia, and industry that will serve as a national resource for semiconductor research and education.
“This is an exciting time for the HBCU CHIPS Network,” said George White, senior director for Strategic Partnerships at Georgia Tech. “This funding, and the support of Georgia Tech Executive Vice President for Research Chaouki Abdallah, is integral for the successful launch of the CHIPS Network.”
The HBCU CHIPS Network works to cultivate a diverse and skilled workforce that supports the national semiconductor industry. Student research and internship opportunities, along with the development of specialized curricula in semiconductor design, fabrication, and related fields, will expand the microelectronics workforce. As part of the network, Georgia Tech will optimize the packaging of chips into systems.
News Contact
Georgia Tech Contact:
Amelia Neumeister | Research Communications Program Manager
Clark Atlanta University Contact:
Frances Williams
Aug. 21, 2024
- Written by Benjamin Wright -
As Georgia Tech establishes itself as a national leader in AI research and education, some researchers on campus are putting AI to work to help meet sustainability goals in a range of areas, including climate change adaptation and mitigation, urban farming, food distribution, and life cycle assessments, while also focusing on ways to make sure AI is used ethically.
Josiah Hester, interim associate director for Community-Engaged Research in the Brook Byers Institute for Sustainable Systems (BBISS) and associate professor in the School of Interactive Computing, sees these projects as wins from both a research standpoint and for the local, national, and global communities they could affect.
“These faculty exemplify Georgia Tech's commitment to serving and partnering with communities in our research,” he says. “Sustainability is one of the most pressing issues of our time. AI gives us new tools to build more resilient communities, but the complexities and nuances in applying this emerging suite of technologies can only be solved by community members and researchers working closely together to bridge the gap. This approach to AI for sustainability strengthens the bonds between our university and our communities and makes lasting impacts due to community buy-in.”
Flood Monitoring and Carbon Storage
Peng Chen, assistant professor in the School of Computational Science and Engineering in the College of Computing, focuses on computational mathematics, data science, scientific machine learning, and parallel computing. Chen is combining these areas of expertise to develop algorithms to assist in practical applications such as flood monitoring and carbon dioxide capture and storage.
He is currently working on a National Science Foundation (NSF) project with colleagues from Georgia Tech’s School of City and Regional Planning and the University of South Florida to develop flood models for the St. Petersburg, Florida, area. As a low-lying state with more than 8,400 miles of coastline, Florida is one of the states most at risk from sea level rise and flooding caused by extreme weather events sparked by climate change.
Chen’s novel approach to flood monitoring takes existing high-resolution hydrological and hydrographical mapping and uses machine learning to incorporate real-time updates from social media users and existing traffic cameras to run rapid, low-cost simulations using deep neural networks. Current flood monitoring software is resource- and time-intensive. Chen’s goal is to produce live modeling that can be used to warn residents and allocate emergency response resources as conditions change. That information would be available to the general public through a portal his team is working on.
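To make the idea of a fast learned surrogate concrete, here is a minimal sketch in PyTorch. Everything in it is hypothetical: the input sizes, data, and architecture are invented, and it is not Chen’s model. It only illustrates training a small network to stand in for an expensive flood simulation, so that a forecast becomes a single fast forward pass.

```python
# Minimal sketch of a neural-network flood surrogate (hypothetical data and
# shapes; not the research team's model). The network learns a fast mapping
# from observations to flood depths that a physics solver would otherwise
# compute.
import torch
import torch.nn as nn

N_OBS, N_GRID = 32, 256           # hypothetical: 32 sensor inputs, 256 grid cells

surrogate = nn.Sequential(         # small MLP standing in for a deep surrogate
    nn.Linear(N_OBS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_GRID),        # predicted water depth at each grid cell
)

# Stand-in training data: pairs of (observations, solver-computed depths).
X = torch.randn(1000, N_OBS)
Y = torch.randn(1000, N_GRID).abs()

opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(surrogate(X), Y)
    loss.backward()
    opt.step()

# Once trained, a forecast is a single forward pass, taking milliseconds
# rather than the hours a high-fidelity hydrological simulation can take.
new_obs = torch.randn(1, N_OBS)
depth_forecast = surrogate(new_obs)
```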
“This project focuses on one particular community in Florida,” Chen says, “but we hope this methodology will be transferable to other locations and situations affected by climate change.”
In addition to the flood-monitoring project in Florida, Chen and his colleagues are developing new methods to improve the reliability and cost-effectiveness of storing carbon dioxide in underground rock formations. The process is plagued with uncertainty about the porosity of the bedrock, the optimal distribution of monitoring wells, and the rate at which carbon dioxide can be injected without over-pressurizing the bedrock, leading to collapse. The new simulations are fast, inexpensive, and minimize the risk of failure, which also decreases the cost of construction.
“Traditional high-fidelity simulation using supercomputers takes hours and lots of resources,” says Chen. “Now we can run these simulations in under one minute using AI models without sacrificing accuracy. Even when you factor in AI training costs, this is a huge savings in time and financial resources.”
Flood monitoring and carbon capture are passion projects for Chen, who sees an opportunity to use artificial intelligence to increase the pace and decrease the cost of problem-solving.
“I’m very excited about the possibility of solving grand challenges in the sustainability area with AI and machine learning models,” he says. “Engineering problems are full of uncertainty, but by using this technology, we can characterize the uncertainty in new ways and propagate it throughout our predictions to optimize designs and maximize performance.”
Urban Farming and Optimization
Yongsheng Chen works at the intersection of food, energy, and water. As the Bonnie W. and Charles W. Moorman Professor in the School of Civil and Environmental Engineering and director of the Nutrients, Energy, and Water Center for Agriculture Technology, Chen is focused on making urban agriculture technologically feasible, financially viable, and, most importantly, sustainable. To do that, he’s leveraging AI to speed up the design process and optimize farming and harvesting operations.
Chen’s closed-loop hydroponic system uses anaerobically treated wastewater for fertilization and irrigation by extracting and repurposing nutrients as fertilizer before filtering the water through polymeric membranes with nano-scale pores. Advancing filtration and purification processes depends on finding the right membrane materials to selectively separate contaminants, including antibiotics and per- and polyfluoroalkyl substances (PFAS). Chen and his team are using AI and machine learning to guide membrane material selection and fabrication to make contaminant separation as efficient as possible. Similarly, AI and machine learning are assisting in developing carbon capture materials such as ionic liquids that can retain carbon dioxide generated during wastewater treatment and redirect it to hydroponics systems, boosting food productivity.
“A fundamental angle of our research is that we do not see municipal wastewater as waste,” explains Chen. “It is a resource we can treat and recover components from to supply irrigation, fertilizer, and biogas, all while reducing the amount of energy used in conventional wastewater treatment methods.”
In addition to aiding in materials development, which reduces design time and production costs, Chen is using machine learning to optimize the growing cycle of produce, maximizing nutritional value. His USDA-funded vertical farm uses autonomous robots to measure critical cultivation parameters and take pictures without destroying plants. This data helps determine optimum environmental conditions, fertilizer supply, and harvest timing, resulting in a faster-growing, optimally nutritious plant with less fertilizer waste and lower emissions.
Chen’s work has received considerable federal funding. As the Urban Resilience and Sustainability Thrust Leader within the NSF-funded AI Institute for Advances in Optimization (AI4OPT), he has received additional funding to foster international collaboration in digital agriculture with colleagues across the United States and in Japan, Australia, and India.
Optimizing Food Distribution
At the other end of the agricultural spectrum is postdoctoral researcher Rosemarie Santa González in the H. Milton Stewart School of Industrial and Systems Engineering, who is conducting her research under the supervision of Professor Chelsea White and Professor Pascal Van Hentenryck, director of both Georgia Tech’s AI Hub and AI4OPT.
Santa González is working with the Wisconsin Food Hub Cooperative to help traditional farmers get their products into the hands of consumers as efficiently as possible to reduce hunger and food waste. Preventing food waste is a priority for both the EPA and USDA. Current estimates are that 30 to 40% of the food produced in the United States ends up in landfills. That wastes the land, water, and chemicals used in production, consumes further resources in disposal, and releases greenhouse gases as the discarded food decays.
To tackle this problem, Santa González and the Wisconsin Food Hub are helping small-scale farmers access refrigeration facilities and distribution chains. As part of her research, she is helping to develop AI tools that can optimize the logistics of the small-scale farmer supply chain while also making local consumers in underserved areas aware of what’s available so food doesn’t end up in landfills.
“This solution has to be accessible,” she says. “Not just in the sense that the food is accessible, but that the tools we are providing to them are accessible. The end users have to understand the tools and be able to use them. It has to be sustainable as a resource.”
Making AI accessible to people in the community is a core goal of the NSF’s AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment (ICICLE), one of the partners involved with the project.
“A large segment of the population we are working with, which includes historically marginalized communities, has a negative reaction to AI. They think of machines taking over, or data being stolen,” Santa González says. “Our goal is to democratize AI in these decision-support tools as we work toward the UN Sustainable Development Goal of Zero Hunger. There is so much power in these tools to solve complex problems that have very real results. More people will be fed and less food will spoil before it gets to people’s homes.”
Santa González hopes the tools they are building can be packaged and customized for food co-ops everywhere.
AI and Ethics
Like Santa González, Joe Bozeman III is also focused on the ethical and sustainable deployment of AI and machine learning, especially among marginalized communities. The assistant professor in the School of Civil and Environmental Engineering is an industrial ecologist committed to fostering ethical climate change adaptation and mitigation strategies. His SEEEL Lab works to make sure researchers understand the consequences of decisions before they move from academic concepts to policy decisions, particularly those that rely on data sets involving people and communities.
“With the administration of big data, there is a human tendency to assume that more data means everything is being captured, but that's not necessarily true,” he cautions. “More data could mean we're just capturing more of the data that already exists, while new research shows that we’re not including information from marginalized communities that have historically not been brought into the decision-making process. That includes underrepresented minorities, rural populations, people with disabilities, and neurodivergent people who may not interface with data collection tools.”
Bozeman is concerned that overlooking marginalized communities in data sets will result in decisions that at best ignore them and at worst cause them direct harm.
“Our lab doesn't wait for the negative harms to occur before we start talking about them,” explains Bozeman, who holds a courtesy appointment in the School of Public Policy. “Our lab forecasts what those harms will be so decision-makers and engineers can develop technologies that consider these things.”
He focuses on urbanization, the food-energy-water nexus, and the circular economy. He has found that much of the research in those areas is conducted in a vacuum without consideration for human engagement and the impact it could have when implemented.
Bozeman is lobbying for built-in tools and safeguards to mitigate the potential for harm from researchers using AI without appropriate consideration. He already sees a disconnect between the academic world and the public. Bridging that trust gap will require ethical uses of AI.
“We have to start rigorously including their voices in our decision-making to begin gaining trust with the public again. And with that trust, we can all start moving toward sustainable development. If we don't do that, I don't care how good our engineering solutions are, we're going to miss the boat entirely on bringing along the majority of the population.”
BBISS Support
Moving forward, Hester is excited about the impact the Brook Byers Institute for Sustainable Systems can have on AI and sustainability research through a variety of support mechanisms.
“BBISS continues to invest in faculty development and training in community-driven research strategies, including the Community Engagement Faculty Fellows Program (with the Center for Sustainable Communities Research and Education), while empowering multidisciplinary teams to work together to solve grand engineering challenges with AI by supporting the AI+Climate Faculty Interest Group, as well as partnering with and providing administrative support for community-driven research projects.”
News Contact
Brent Verrill, Research Communications Program Manager, BBISS
Aug. 21, 2024
A new agreement between Los Alamos National Laboratory (LANL) and the National Science Foundation’s Artificial Intelligence Institute for Advances in Optimization (AI4OPT) at Georgia Tech is set to propel research in applied artificial intelligence (AI) and engage students and professionals in this rapidly growing field.
“This collaboration will help develop new AI technologies for the next generation of scientific discovery, the design of complex systems, and the control of engineered systems,” said Russell Bent, a scientist at Los Alamos. “At Los Alamos, we have a lot of interest in optimizing complex systems. We see an opportunity with AI to enhance system resilience and efficiency in the face of climate change, extreme events, and other challenges.”
The agreement establishes a research and educational partnership focused on advancing AI tools for a next-generation power grid. Maintaining and optimizing the energy grid involves extensive computation, and AI-informed approaches, including modeling, could address power-grid issues more effectively.
AI Approaches to Optimization and Problem-Solving
Optimization involves finding solutions that use resources effectively and efficiently. This research partnership will leverage Georgia Tech’s expertise to develop “trustworthy foundation models” that reduce the vast computing resources needed to solve complex problems.
In energy grid systems, optimization involves quickly sorting through possibilities and resources to deliver immediate solutions during a power-distribution crisis. The research will develop “optimization proxies” that extend current methods by incorporating broader parameters such as generator limits, line ratings, and grid topologies. Training these proxies with AI for energy applications presents a significant research challenge.
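To illustrate what an optimization proxy must get right, consider the toy sketch below. It is not AI4OPT’s formulation: the generator limits and demand are invented, and the “prediction” is a stand-in for a learned model’s output. The point is the repair step, in which a raw ML prediction is pushed back inside generator limits and rescaled so total output matches demand.

```python
import numpy as np

# Toy illustration of an "optimization proxy" repair step (invented numbers;
# not AI4OPT's actual method). A learned model predicts generator outputs,
# and a repair layer enforces simple feasibility constraints.

p_min = np.array([10.0, 20.0, 0.0])    # hypothetical generator lower limits (MW)
p_max = np.array([100.0, 80.0, 50.0])  # hypothetical generator upper limits (MW)
demand = 150.0                          # total load to be served (MW)

def repair(raw_prediction):
    """Clip a raw dispatch prediction to generator limits, then spread the
    remaining shortfall (or surplus) across units in proportion to their
    available headroom so total output matches demand."""
    p = np.clip(raw_prediction, p_min, p_max)
    shortfall = demand - p.sum()
    headroom = (p_max - p) if shortfall > 0 else (p - p_min)
    if headroom.sum() > 0:
        p += shortfall * headroom / headroom.sum()
    return np.clip(p, p_min, p_max)

# A (pretend) ML prediction that violates limits and misses demand:
raw = np.array([120.0, 10.0, 5.0])
dispatch = repair(raw)
print(dispatch, dispatch.sum())   # a feasible dispatch summing to ~150 MW
```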
The collaboration will also address problems related to LANL’s diverse missions and applications. The team’s research will advance pioneering efforts in graph-based, physics-informed machine learning to solve Laboratory mission problems.
Outreach and Training Opportunities
In January 2025, the Laboratory will host a Grid Science Winter School and Conference, featuring lectures from LANL scientists and academic partners on electrical grid methods and techniques. With Georgia Tech as a co-organizer, AI optimization for the energy grid will be a focal point of the event.
Since 2020, the Laboratory has been working with Georgia Tech on energy grid projects. AI4OPT, which includes several industrial and academic partners, aims to achieve breakthroughs by combining AI and mathematical optimization.
“The use-inspired research in AI4OPT addresses fundamental societal and technological challenges,” said Pascal Van Hentenryck, AI4OPT director. “The energy grid is crucial to our daily lives. Our collaboration with Los Alamos advances a research mission and educational vision with significant impact for science and society.”
The three-year agreement, funded through the Laboratory Directed Research and Development program’s ArtIMis initiative, runs through 2027. It supports the Laboratory’s commitment to advancing AI. Earl Lawrence is the project’s principal investigator, with Diane Oyen and Emily Castleton joining Bent as co-principal investigators.
Bent, Castleton, Lawrence, and Oyen are also members of the AI Council at the Laboratory. The AI Council helps the Lab navigate the evolving AI landscape, build investment capacities, and forge industry and academic partnerships.
As highlighted in the Department of Energy’s Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) initiative, AI technologies will significantly enhance the contributions of laboratories to national missions. This partnership with Georgia Tech through AI4OPT is a key step towards that future.
News Contact
Breon Martin
Aug. 19, 2024
Nylon, Teflon, Kevlar. These are just a few familiar polymers — large-molecule chemical compounds — that have changed the world. From Teflon-coated frying pans to 3D printing, polymers are vital to creating the systems that make the world function better.
Finding the next groundbreaking polymer is always a challenge, but now Georgia Tech researchers are using artificial intelligence (AI) to shape and transform the future of the field. Rampi Ramprasad’s group develops and adapts AI algorithms to accelerate materials discovery.
This summer, two papers published in the Nature family of journals highlight the significant advancements and success stories emerging from years of AI-driven polymer informatics research. The first, featured in Nature Reviews Materials, showcases recent breakthroughs in polymer design across critical and contemporary application domains: energy storage, filtration technologies, and recyclable plastics. The second, published in Nature Communications, focuses on the use of AI algorithms to discover a subclass of polymers for electrostatic energy storage, with the designed materials undergoing successful laboratory synthesis and testing.
“In the early days of AI in materials science, propelled by the White House’s Materials Genome Initiative over a decade ago, research in this field was largely curiosity-driven,” said Ramprasad, a professor in the School of Materials Science and Engineering. “Only in recent years have we begun to see tangible, real-world success stories in AI-driven accelerated polymer discovery. These successes are now inspiring significant transformations in the industrial materials R&D landscape. That’s what makes this review so significant and timely.”
AI Opportunities
Ramprasad’s team has developed groundbreaking algorithms that can instantly predict polymer properties and formulations before the materials are physically created. The process begins by defining application-specific target properties or performance criteria. Machine learning (ML) models train on existing material-property data to predict these desired outcomes. Additionally, the team can generate new polymers whose properties are forecasted with ML models. The top candidates that meet the target criteria are then selected for real-world validation through laboratory synthesis and testing. The results from these new experiments are integrated with the original data, further refining the predictive models in a continuous, iterative process.
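The cycle Ramprasad describes is, in essence, an active-learning loop. The sketch below shows that design-predict-validate pattern with scikit-learn; the fingerprints, property values, and the lab_test stand-in are all hypothetical, not the group’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Minimal active-learning sketch of the design-predict-validate loop
# described above (hypothetical data and stand-in functions).

rng = np.random.default_rng(0)
X_known = rng.random((200, 16))   # fingerprints of known polymers (invented)
y_known = rng.random(200)         # measured property, e.g., energy density

def lab_test(candidates):
    """Hypothetical stand-in for laboratory synthesis and testing."""
    return rng.random(len(candidates))

for round_ in range(3):
    # 1. Train a property predictor on all data gathered so far.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_known, y_known)

    # 2. Generate candidate polymers (here: random fingerprints).
    X_candidates = rng.random((5000, 16))

    # 3. Select the top candidates predicted to meet the target property.
    top = np.argsort(model.predict(X_candidates))[-5:]

    # 4. "Synthesize and test" them, then fold the results back into the
    #    data, refining the predictive model on the next iteration.
    X_known = np.vstack([X_known, X_candidates[top]])
    y_known = np.concatenate([y_known, lab_test(X_candidates[top])])
```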
While AI can accelerate the discovery of new polymers, it also presents unique challenges. The accuracy of AI predictions depends on the availability of rich, diverse, extensive initial data sets, making quality data paramount. Additionally, designing algorithms capable of generating chemically realistic and synthesizable polymers is a complex task.
The real challenge begins after the algorithms make their predictions: proving that the designed materials can be made in the lab and function as expected and then demonstrating their scalability beyond the lab for real-world use. Ramprasad’s group designs these materials, while their fabrication, processing, and testing are carried out by collaborators at various institutions, including Georgia Tech. Professor Ryan Lively from the School of Chemical and Biomolecular Engineering frequently collaborates with Ramprasad’s group and is a co-author of the paper published in Nature Reviews Materials.
"In our day-to-day research, we extensively use the machine learning models Rampi’s team has developed,” Lively said. “These tools accelerate our work and allow us to rapidly explore new ideas. This embodies the promise of ML and AI because we can make model-guided decisions before we commit time and resources to explore the concepts in the laboratory."
Using AI, Ramprasad’s team and their collaborators have made significant advancements in diverse fields, including energy storage, filtration technologies, additive manufacturing, and recyclable materials.
Polymer Progress
One notable success, described in the Nature Communications paper, involves the design of new polymers for capacitors, which store electrostatic energy. These devices are vital components in electric and hybrid vehicles, among other applications. Ramprasad’s group worked with researchers from the University of Connecticut.
Current capacitor polymers offer either high energy density or thermal stability, but not both. By leveraging AI tools, the researchers determined that insulating materials made from polynorbornene and polyimide polymers can simultaneously achieve high energy density and high thermal stability. The polymers can be further enhanced to function in demanding environments, such as aerospace applications, while maintaining environmental sustainability.
“The new class of polymers with high energy density and high thermal stability is one of the most concrete examples of how AI can guide materials discovery,” said Ramprasad. “It is also the result of years of multidisciplinary collaborative work with Greg Sotzing and Yang Cao at the University of Connecticut and sustained sponsorship by the Office of Naval Research.”
Industry Potential
The potential for real-world translation of AI-assisted materials development is underscored by industry participation in the Nature Reviews Materials article. Co-authors of this paper also include scientists from Toyota Research Institute and General Electric. To further accelerate the adoption of AI-driven materials development in industry, Ramprasad co-founded Matmerize Inc., a software startup company recently spun out of Georgia Tech. Their cloud-based polymer informatics software is already being used by companies across various sectors, including energy, electronics, consumer products, chemical processing, and sustainable materials.
“Matmerize has transformed our research into a robust, versatile, and industry-ready solution, enabling users to design materials virtually with enhanced efficiency and reduced cost,” Ramprasad said. “What began as a curiosity has gained significant momentum, and we are entering an exciting new era of materials by design.”
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu
Aug. 09, 2024
A research group is calling for internet and social media moderators to strengthen their detection and intervention protocols for violent speech.
Their study of language detection software found that algorithms struggle to differentiate anti-Asian violence-provoking speech from general hate speech. Left unchecked, threats of violence online can go unnoticed and turn into real-world attacks.
Researchers from Georgia Tech and the Anti-Defamation League (ADL) teamed up for the study. They made their discovery while testing natural language processing (NLP) models trained on data they crowdsourced from Asian communities.
“The Covid-19 pandemic brought attention to how dangerous violence-provoking speech can be. There was a clear increase in reports of anti-Asian violence and hate crimes,” said Gaurav Verma, a Georgia Tech Ph.D. candidate who led the study.
“Such speech is often amplified on social platforms, which in turn fuels anti-Asian sentiments and attacks.”
Violence-provoking speech differs from more commonly studied forms of harmful speech, like hate speech. While hate speech denigrates or insults a group, violence-provoking speech implicitly or explicitly encourages violence against targeted communities.
Humans can define and characterize violent speech as a subset of hateful speech. However, computer models struggle to tell the difference due to subtle cues and implications in language.
The researchers tested five different NLP classifiers and analyzed their F1 scores, a standard measure of a model’s performance. The classifiers achieved an F1 score of 0.89 for detecting hate speech but only 0.69 for detecting violence-provoking speech, highlighting a notable gap in the tools’ accuracy and reliability.
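For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall, so a drop from 0.89 to 0.69 reflects substantially more missed or misflagged posts. The toy example below shows how such a score is computed with scikit-learn; the labels are invented, not study data.

```python
from sklearn.metrics import f1_score

# Toy illustration of the evaluation metric (invented labels, not study data).
# F1 is the harmonic mean of precision and recall: 2PR / (P + R).
y_true = [1, 1, 1, 0, 0, 1, 0, 1]   # 1 = post contains the targeted speech
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]   # a classifier's predictions

print(f1_score(y_true, y_pred))     # 0.8 for this toy example
```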
The study stresses the importance of developing more refined methods for detecting violence-provoking speech. Internet misinformation and inflammatory rhetoric escalate tensions that lead to real-world violence.
The Covid-19 pandemic exemplified how public health crises intensify this behavior, helping inspire the study. The group cited reports that anti-Asian hate crime across the U.S. increased by 339% in 2021, fueled by malicious content blaming Asians for the virus.
The researchers believe their findings show the effectiveness of community-centric approaches to problems dealing with harmful speech. These approaches would enable informed decision-making between policymakers, targeted communities, and developers of online platforms.
Along with stronger models for detecting violence-provoking speech, the group discusses a direct solution: a tiered penalty system on online platforms. Tiered systems align penalties with the severity of offenses, acting as both a deterrent and an intervention for different levels of harmful speech. A rough illustration of the idea appears in the sketch below.
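As an illustration only, a tiered policy amounts to a mapping from offense severity to platform action; the tiers and actions below are invented, since the paper proposes the concept rather than a specific mapping.

```python
# Toy sketch of a tiered penalty policy (invented tiers and actions; the
# paper proposes the concept, not this specific mapping).
TIERS = {
    "violence-provoking": "remove post, suspend account, escalate for review",
    "hate speech":        "remove post, issue formal warning",
    "borderline":         "reduce distribution, show policy notice",
}

def penalty(label: str) -> str:
    """Look up the platform action for a classifier's severity label."""
    return TIERS.get(label, "no action")

print(penalty("hate speech"))   # remove post, issue formal warning
```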
“We believe that we cannot tackle a problem that affects a community without involving people who are directly impacted,” said Jiawei Zhou, a Ph.D. student who studies human-centered computing at Georgia Tech.
“By collaborating with experts and community members, we ensure our research builds on front-line efforts to combat violence-provoking speech while remaining rooted in real experiences and needs of the targeted community.”
The researchers trained the NLP classifiers they tested on a dataset crowdsourced from a survey of 120 participants who self-identified as Asian community members. In the survey, the participants labeled 1,000 posts from X (formerly Twitter) as containing violence-provoking speech, hateful speech, or neither.
Since there is no universal definition of violence-provoking speech, the researchers created a specialized codebook for survey participants. The participants studied the codebook before their survey and used an abridged version while labeling.
To create the codebook, the group used an initial set of anti-Asian keywords to scan posts on X from January 2020 to February 2023. This tactic yielded 420,000 posts containing harmful, anti-Asian language.
The researchers then filtered the batch through new keywords and phrases. This refined the sample to 4,000 posts that potentially contained violence-provoking content. Keywords and phrases were added to the codebook while the filtered posts were used in the labeling survey.
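The two-stage funnel, a broad keyword scan followed by a narrower phrase filter, is simple to picture in code. The sketch below uses placeholder keywords and posts; the study’s actual lists came from the codebook process described here.

```python
import re

# Sketch of the two-stage keyword funnel described above (placeholder
# keywords and posts; the study's actual lists came from its codebook).

broad_keywords = ["keyword_a", "keyword_b"]    # stage 1: wide net
refined_phrases = ["provoking phrase x"]       # stage 2: narrower filter

def matches(text, terms):
    """Case-insensitive check for any of the given terms in the text."""
    return any(re.search(re.escape(t), text, re.IGNORECASE) for t in terms)

posts = [
    "an innocuous post",
    "a post containing keyword_a",
    "a post containing keyword_b and provoking phrase x",
]

stage1 = [p for p in posts if matches(p, broad_keywords)]     # ~420,000 in the study
stage2 = [p for p in stage1 if matches(p, refined_phrases)]   # ~4,000 in the study
print(len(stage1), len(stage2))   # 2 1
```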
The team used discussion and pilot testing to validate its codebook. During pilot testing, testers labeled 100 posts from X to ensure the Asian community survey was soundly designed. The group also sent the codebook to the ADL for review and incorporated the organization’s feedback.
“One of the major challenges in studying violence-provoking content online is effective data collection and funneling down because most platforms actively moderate and remove overtly hateful and violent material,” said Tech alumnus Rynaa Grover (M.S. CS 2024).
“To address the complexities of this data, we developed an innovative pipeline that deals with the scale of this data in a community-aware manner.”
Emphasis on community input extended into collaboration within Georgia Tech’s College of Computing. Faculty members Srijan Kumar and Munmun De Choudhury oversaw the research that their students spearheaded.
Kumar, an assistant professor in the School of Computational Science and Engineering, advises Verma and Grover. His expertise is in artificial intelligence, data mining, and online safety.
De Choudhury is an associate professor in the School of Interactive Computing and advises Zhou. Their research connects societal mental health and social media interactions.
The Georgia Tech researchers partnered with the ADL, a leading non-governmental organization that combats real-world hate and extremism. ADL researchers Binny Mathew and Jordan Kraemer co-authored the paper.
The group will present its paper at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), which takes place in Bangkok, Thailand, Aug. 11-16.
ACL 2024 accepted 40 papers written by Georgia Tech researchers. Of the 12 Georgia Tech faculty who authored papers accepted at the conference, nine are from the College of Computing, including Kumar and De Choudhury.
“It is great to see that the peers and research community recognize the importance of community-centric work that provides grounded insights about the capabilities of leading language models,” Verma said.
“We hope the platform encourages more work that presents community-centered perspectives on important societal problems.”
Visit https://sites.gatech.edu/research/acl-2024/ for news and coverage of Georgia Tech research presented at ACL 2024.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Aug. 08, 2024
Social media users may need to think twice before hitting that “Post” button.
A new large language model (LLM) developed by Georgia Tech researchers can help them filter content that could risk their privacy and offer alternative phrasing that keeps the context of their posts intact.
According to a new paper that will be presented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), social media users should be careful about the information they self-disclose in their posts.
Many people use social media to express their feelings about their experiences without realizing the risks to their privacy. For example, a person revealing their gender identity or sexual orientation may be subject to doxing and harassment from outside parties.
Others want to express their opinions without their employers or families knowing.
Ph.D. student Yao Dou and associate professors Alan Ritter and Wei Xu originally set out to study user awareness of self-disclosure privacy risks on Reddit. Working with anonymous users, they created an LLM to detect at-risk content.
While the study boosted users’ awareness of the personal information they revealed, many called for an intervention, asking the researchers for help rewriting their posts so they wouldn’t have to worry about privacy.
The researchers revamped the model to suggest alternative phrases that reduce the risk of privacy invasion.
One user disclosed, “I’m 16F I think I want to be a bi M.” The new tool offered alternative phrases such as:
- “I am exploring my sexual identity.”
- “I have a desire to explore new options.”
- “I am attracted to the idea of exploring different gender identities.”
Dou said the challenge is making sure the model provides suggestions that don’t change or distort the desired context of the post.
“That’s why instead of providing one suggestion, we provide three suggestions that are different from each other, and we allow the user to choose which one they want,” Dou said. “In some cases, the discourse information is important to the post, and in that case, they can choose what to abstract.”
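Producing several mutually distinct rewrites is essentially a sampling-plus-filtering problem. The sketch below illustrates one simple way to do it; generate_rewrite is a hypothetical stand-in for the team’s model, and the word-overlap filter is a naive way to keep the three suggestions different from one another.

```python
import random

def generate_rewrite(post: str) -> str:
    """Hypothetical stand-in for an LLM call that abstracts away
    self-disclosed details while keeping the post's intent."""
    templates = [
        "I am exploring my identity.",
        "I have been thinking about who I am.",
        "I want to figure some personal things out.",
        "I am considering some new directions in my life.",
    ]
    return random.choice(templates)

def too_similar(a: str, b: str) -> bool:
    """Naive word-overlap (Jaccard) check to keep suggestions distinct."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1) > 0.6

def three_suggestions(post: str, tries: int = 50) -> list[str]:
    """Sample rewrites until three sufficiently different ones are found."""
    out: list[str] = []
    for _ in range(tries):
        cand = generate_rewrite(post)
        if not any(too_similar(cand, s) for s in out):
            out.append(cand)
        if len(out) == 3:
            break
    return out

print(three_suggestions("I'm 16F I think I want to be a bi M"))
```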
WEIGHING THE RISKS
The researchers sampled 10,000 Reddit posts from a pool of 4 million that met their search criteria. They annotated those posts and created 19 categories of self-disclosures, including age, sexual orientation, gender, race or nationality, and location.
From there, they worked with Reddit users to test the effectiveness and accuracy of their model, with 82% giving positive feedback.
However, a contingent thought the model was “oversensitive,” highlighting content they did not believe posed a risk.
Ultimately, the researchers say users must decide what they will post.
“It’s a personal decision,” Ritter said. “People need to look at this and think about what they’re writing and decide between this tradeoff of what benefits they are getting from sharing information versus what privacy risks are associated with that.”
Xu acknowledged that future work on the project should include a metric that gives users a better idea of what types of content are more at risk than others.
“It’s kind of the way passwords work,” she said. “Years ago, they never told you your password strength, and now there’s a bar telling you how good your password is. Then you realize you need to add a special character and capitalize some letters, and that’s become a standard. This is telling the public how they can protect themselves. The risk isn’t zero, but it helps them think about it.”
WHAT ARE THE CONSEQUENCES?
While doxing and harassment are the most likely consequences of posting sensitive personal information, especially for those who belong to minority groups, the researchers say users have other privacy concerns.
Users should know that when they draft posts on a site, their input can be extracted by the site’s application programming interface (API). If that site has a data breach, a user’s personal information could fall into unwanted hands.
“I think we should have a path toward having everything work locally on the user’s computer, so it doesn’t rely on any external APIs to send this data off their local machine,” Ritter said.
Ritter added that users could also be targets of popular scams like phishing without ever knowing it.
“People trying targeted phishing attacks can learn personal information about people online that might help them craft more customized attacks that could make users vulnerable,” he said.
The safest way to avoid a breach of privacy is to stay off social media. But Xu said that’s impractical, as these sites offer resources and support that users may not find anywhere else.
“We want people who may be afraid of social media to use it and feel safe when they post,” she said. “Maybe the best way to get an answer to a question is to ask online, but some people don’t feel comfortable doing that, so a tool like this can make them more comfortable sharing without much risk.”
For more information about Georgia Tech research at ACL, please visit https://sites.gatech.edu/research/acl-2024/.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing