Aug. 21, 2024
- Written by Benjamin Wright -
As Georgia Tech establishes itself as a national leader in AI research and education, researchers on campus are putting AI to work toward sustainability goals in areas including climate change adaptation and mitigation, urban farming, food distribution, and life cycle assessment, while also focusing on ways to ensure AI is used ethically.
Josiah Hester, interim associate director for Community-Engaged Research in the Brook Byers Institute for Sustainable Systems (BBISS) and associate professor in the School of Interactive Computing, sees these projects as wins from both a research standpoint and for the local, national, and global communities they could affect.
“These faculty exemplify Georgia Tech's commitment to serving and partnering with communities in our research,” he says. “Sustainability is one of the most pressing issues of our time. AI gives us new tools to build more resilient communities, but the complexities and nuances in applying this emerging suite of technologies can only be solved by community members and researchers working closely together to bridge the gap. This approach to AI for sustainability strengthens the bonds between our university and our communities and makes lasting impacts due to community buy-in.”
Flood Monitoring and Carbon Storage
Peng Chen, assistant professor in the School of Computational Science and Engineering in the College of Computing, focuses on computational mathematics, data science, scientific machine learning, and parallel computing. Chen is combining these areas of expertise to develop algorithms to assist in practical applications such as flood monitoring and carbon dioxide capture and storage.
He is currently working on a National Science Foundation (NSF) project with colleagues in Georgia Tech’s School of City and Regional Planning and from the University of South Florida to develop flood models in the St. Petersburg, Florida area. As a low-lying state with more than 8,400 miles of coastline, Florida is one of the states most at risk from sea level rise and flooding caused by extreme weather events sparked by climate change.
Chen’s novel approach to flood monitoring takes existing high-resolution hydrological and hydrographical mapping and uses machine learning to incorporate real-time updates from social media users and existing traffic cameras to run rapid, low-cost simulations using deep neural networks. Current flood monitoring software is resource and time-intensive. Chen’s goal is to produce live modeling that can be used to warn residents and allocate emergency response resources as conditions change. That information would be available to the general public through a portal his team is working on.
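The approach can be pictured as a learned surrogate that stands in for a slow physics solver. The sketch below is illustrative only; the feature set, network size, and names are assumptions for this article, not details of Chen's actual model.

```python
import torch
import torch.nn as nn

# Illustrative surrogate: maps per-grid-cell inputs (rainfall, tide
# level, crowdsourced observations, terrain features) to a predicted
# flood depth. Architecture and features are assumptions, not the
# project's actual model.
class FloodSurrogate(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # flood depth per grid cell (meters)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = FloodSurrogate()
# One forward pass over 10,000 grid cells takes milliseconds, versus
# hours for a full hydrodynamic simulation.
cells = torch.randn(10_000, 8)          # stand-in for real features
with torch.no_grad():
    depths = model(cells)
print(depths.shape)  # torch.Size([10000, 1])
```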
“This project focuses on one particular community in Florida,” Chen says, “but we hope this methodology will be transferable to other locations and situations affected by climate change.”
In addition to the flood-monitoring project in Florida, Chen and his colleagues are developing new methods to improve the reliability and cost-effectiveness of storing carbon dioxide in underground rock formations. The process is plagued with uncertainty about the porosity of the bedrock, the optimal distribution of monitoring wells, and the rate at which carbon dioxide can be injected without over-pressurizing the bedrock, leading to collapse. The new simulations are fast, inexpensive, and minimize the risk of failure, which also decreases the cost of construction.
“Traditional high-fidelity simulation using supercomputers takes hours and lots of resources,” says Chen. “Now we can run these simulations in under one minute using AI models without sacrificing accuracy. Even when you factor in AI training costs, this is a huge savings in time and financial resources.”
Flood monitoring and carbon capture are passion projects for Chen, who sees an opportunity to use artificial intelligence to increase the pace and decrease the cost of problem-solving.
“I’m very excited about the possibility of solving grand challenges in the sustainability area with AI and machine learning models,” he says. “Engineering problems are full of uncertainty, but by using this technology, we can characterize the uncertainty in new ways and propagate it throughout our predictions to optimize designs and maximize performance.”
Urban Farming and Optimization
Yongsheng Chen works at the intersection of food, energy, and water. As the Bonnie W. and Charles W. Moorman Professor in the School of Civil and Environmental Engineering and director of the Nutrients, Energy, and Water Center for Agriculture Technology, Chen is focused on making urban agriculture technologically feasible, financially viable, and, most importantly, sustainable. To do that he’s leveraging AI to speed up the design process and optimize farming and harvesting operations.
Chen’s closed-loop hydroponic system uses anaerobically treated wastewater for fertilization and irrigation by extracting and repurposing nutrients as fertilizer before filtering the water through polymeric membranes with nano-scale pores. Advancing filtration and purification processes depends on finding the right membrane materials to selectively separate contaminants, including antibiotics and per- and polyfluoroalkyl substances (PFAS). Chen and his team are using AI and machine learning to guide membrane material selection and fabrication to make contaminant separation as efficient as possible. Similarly, AI and machine learning are assisting in developing carbon capture materials such as ionic liquids that can retain carbon dioxide generated during wastewater treatment and redirect it to hydroponics systems, boosting food productivity.
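As an illustration of ML-guided material selection, a property-prediction model can rank a large pool of candidate membranes so that only the most promising are synthesized. The sketch below uses synthetic data and invented descriptors; it is not the team's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for membrane descriptors (e.g., pore size,
# surface charge, hydrophobicity) and a measured property such as
# PFAS rejection. Real descriptors and training data would come from
# the lab; everything here is illustrative.
X_train = rng.random((200, 3))
y_train = rng.random(200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Screen a large pool of candidate formulations and surface the most
# promising few for synthesis and testing.
candidates = rng.random((5_000, 3))
scores = model.predict(candidates)
top = np.argsort(scores)[::-1][:10]
print("candidates to synthesize first:", top)
```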
“A fundamental angle of our research is that we do not see municipal wastewater as waste,” explains Chen. “It is a resource we can treat and recover components from to supply irrigation, fertilizer, and biogas, all while reducing the amount of energy used in conventional wastewater treatment methods.”
In addition to aiding in materials development, which reduces design time and production costs, Chen is using machine learning to optimize the growing cycle of produce, maximizing nutritional value. His USDA-funded vertical farm uses autonomous robots to measure critical cultivation parameters and take pictures without destroying plants. This data helps determine optimum environmental conditions, fertilizer supply, and harvest timing, resulting in a faster-growing, optimally nutritious plant with less fertilizer waste and lower emissions.
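As a simplified illustration of how such data can translate into harvest timing: fit a curve to nutrition measurements taken at different growth stages and harvest near its peak. The numbers below are made up, and the real pipeline is far richer.

```python
import numpy as np

# Illustrative only: suppose robot-collected measurements give a
# nutrition score for plants harvested on different days.
days = np.array([20, 25, 30, 35, 40, 45])
nutrition = np.array([0.55, 0.70, 0.82, 0.86, 0.80, 0.68])

coeffs = np.polyfit(days, nutrition, deg=2)   # quadratic fit
best_day = -coeffs[1] / (2 * coeffs[0])       # vertex of the parabola
print(f"harvest around day {best_day:.0f}")   # ~ day 35
```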
Chen’s work has received considerable federal funding. As the Urban Resilience and Sustainability Thrust Leader within the NSF-funded AI Institute for Advances in Optimization (AI4OPT), he has received additional funding to foster international collaboration in digital agriculture with colleagues across the United States and in Japan, Australia, and India.
Optimizing Food Distribution
At the other end of the agricultural spectrum is postdoc Rosemarie Santa González in the H. Milton Stewart School of Industrial and Systems Engineering, who is conducting her research under the supervision of Professor Chelsea White and Professor Pascal Van Hentenryck, the director of Georgia Tech’s AI Hub as well as the director of AI4OPT.
Santa González is working with the Wisconsin Food Hub Cooperative to help traditional farmers get their products into the hands of consumers as efficiently as possible, reducing hunger and food waste. Preventing food waste is a priority for both the EPA and USDA. Current estimates are that 30 to 40% of the food produced in the United States ends up in landfills. That wastes resources on the production end, in the form of land, water, and chemical use; wastes more resources on disposal; and releases greenhouse gases as the discarded food decays.
To tackle this problem, Santa González and the Wisconsin Food Hub are helping small-scale farmers access refrigeration facilities and distribution chains. As part of her research, she is helping to develop AI tools that can optimize the logistics of the small-scale farmer supply chain while also making local consumers in underserved areas aware of what’s available so food doesn’t end up in landfills.
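At its core, this kind of logistics question can be posed as a small optimization problem. The toy example below is a minimal transportation model with invented costs, supplies, and demands; real food-hub models add refrigeration capacity, time windows, and equity constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Route produce from 2 farms to 3 community sites at minimum cost,
# without exceeding farm supply while meeting each site's demand.
cost = np.array([[4.0, 6.0, 9.0],
                 [5.0, 3.0, 7.0]])      # $ per crate, farm -> site
supply = np.array([80, 70])             # crates available per farm
demand = np.array([50, 60, 40])         # crates needed per site

c = cost.flatten()                      # decision variables x[f, s]
A_supply = np.kron(np.eye(2), np.ones(3))    # sum over sites <= supply
A_demand = -np.kron(np.ones(2), np.eye(3))   # sum over farms >= demand

res = linprog(c,
              A_ub=np.vstack([A_supply, A_demand]),
              b_ub=np.concatenate([supply, -demand]),
              bounds=(0, None))
print(res.x.reshape(2, 3))              # crates shipped farm -> site
```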
“This solution has to be accessible,” she says. “Not just in the sense that the food is accessible, but that the tools we are providing to them are accessible. The end users have to understand the tools and be able to use them. It has to be sustainable as a resource.”
Making AI accessible to people in the community is a core goal of the NSF’s AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment (ICICLE), one of the partners involved with the project.
“A large segment of the population we are working with, which includes historically marginalized communities, has a negative reaction to AI. They think of machines taking over, or data being stolen,” Santa González says. “Our goal is to democratize AI in these decision-support tools as we work toward the UN Sustainable Development Goal of Zero Hunger. There is so much power in these tools to solve complex problems that have very real results. More people will be fed and less food will spoil before it gets to people’s homes.”
Santa González hopes the tools they are building can be packaged and customized for food co-ops everywhere.
AI and Ethics
Like Santa González, Joe Bozeman III is also focused on the ethical and sustainable deployment of AI and machine learning, especially among marginalized communities. The assistant professor in the School of Civil and Environmental Engineering is an industrial ecologist committed to fostering ethical climate change adaptation and mitigation strategies. His SEEEL Lab works to make sure researchers understand the consequences of decisions before they move from academic concepts to policy decisions, particularly those that rely on data sets involving people and communities.
“With the administration of big data, there is a human tendency to assume that more data means everything is being captured, but that's not necessarily true,” he cautions. “More data could mean we're just capturing more of the data that already exists, while new research shows that we’re not including information from marginalized communities that have historically not been brought into the decision-making process. That includes underrepresented minorities, rural populations, people with disabilities, and neurodivergent people who may not interface with data collection tools.”
Bozeman is concerned that overlooking marginalized communities in data sets will result in decisions that at best ignore them and at worst cause them direct harm.
“Our lab doesn't wait for the negative harms to occur before we start talking about them,” explains Bozeman, who holds a courtesy appointment in the School of Public Policy. “Our lab forecasts what those harms will be so decision-makers and engineers can develop technologies that consider these things.”
He focuses on urbanization, the food-energy-water nexus, and the circular economy. He has found that much of the research in those areas is conducted in a vacuum without consideration for human engagement and the impact it could have when implemented.
Bozeman is lobbying for built-in tools and safeguards to mitigate the potential for harm from researchers using AI without appropriate consideration. He already sees a disconnect between the academic world and the public. Bridging that trust gap will require ethical uses of AI.
“We have to start rigorously including their voices in our decision-making to begin gaining trust with the public again. And with that trust, we can all start moving toward sustainable development. If we don't do that, I don't care how good our engineering solutions are, we're going to miss the boat entirely on bringing along the majority of the population.”
BBISS Support
Moving forward, Hester is excited about the impact the Brook Byers Institute for Sustainable Systems can have on AI and sustainability research through a variety of support mechanisms.
“BBISS continues to invest in faculty development and training in community-driven research strategies, including the Community Engagement Faculty Fellows Program (with the Center for Sustainable Communities Research and Education), while empowering multidisciplinary teams to work together to solve grand engineering challenges with AI by supporting the AI+Climate Faculty Interest Group, as well as partnering with and providing administrative support for community-driven research projects.”
News Contact
Brent Verrill, Research Communications Program Manager, BBISS
Aug. 21, 2024
A new agreement between Los Alamos National Laboratory (LANL) and the National Science Foundation’s Artificial Intelligence Institute for Advances in Optimization (AI4OPT) at Georgia Tech is set to propel research in applied artificial intelligence (AI) and engage students and professionals in this rapidly growing field.
“This collaboration will help develop new AI technologies for the next generation of scientific discovery and the design of complex systems and the control of engineered systems,” said Russell Bent, scientist at Los Alamos. “At Los Alamos, we have a lot of interest in optimizing complex systems. We see an opportunity with AI to enhance system resilience and efficiency in the face of climate change, extreme events, and other challenges.”
The agreement establishes a research and educational partnership focused on advancing AI tools for a next-generation power grid. Maintaining and optimizing the energy grid involves extensive computation, and AI-informed approaches, including modeling, could address power-grid issues more effectively.
AI Approaches to Optimization and Problem-Solving
Optimization involves finding solutions that utilize resources effectively and efficiently. This research partnership will leverage Georgia Tech's expertise to develop “trustworthy foundation models” that, by incorporating AI, reduce the vast computing resources needed for solving complex problems.
In energy grid systems, optimization involves quickly sorting through possibilities and resources to deliver immediate solutions during a power-distribution crisis. The research will develop “optimization proxies” that extend current methods by incorporating broader parameters such as generator limits, line ratings, and grid topologies. Training these proxies with AI for energy applications presents a significant research challenge.
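In rough outline, an optimization proxy is a model trained to imitate a classical solver and then kept feasible with respect to known limits. The sketch below is a generic illustration under assumed sizes and bounds, not the project's actual architecture.

```python
import torch
import torch.nn as nn

# Sketch of an "optimization proxy": a network trained to imitate an
# optimal power flow solver, mapping a demand vector straight to a
# dispatch. Sizes, limits, and training data are invented.
n_loads, n_gens = 30, 10
p_min = torch.zeros(n_gens)
p_max = torch.full((n_gens,), 5.0)      # generator limits (MW)

proxy = nn.Sequential(
    nn.Linear(n_loads, 64), nn.ReLU(),
    nn.Linear(64, n_gens),
)

def dispatch(demand: torch.Tensor) -> torch.Tensor:
    raw = proxy(demand)
    # Clamp into generator limits so the output always respects those
    # bounds -- a simple stand-in for the feasibility-restoration
    # layers used in the literature.
    return torch.clamp(raw, p_min, p_max)

# In training, (demand, optimal dispatch) pairs from a classical
# solver would supervise the proxy; here we just run one inference.
print(dispatch(torch.rand(n_loads)))
```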
The collaboration will also address problems related to LANL’s diverse missions and applications. The team’s research will advance pioneering efforts in graph-based, physics-informed machine learning to solve Laboratory mission problems.
Outreach and Training Opportunities
In January 2025, the Laboratory will host a Grid Science Winter School and Conference, featuring lectures from LANL scientists and academic partners on electrical grid methods and techniques. With Georgia Tech as a co-organizer, AI optimization for the energy grid will be a focal point of the event.
Since 2020, the Laboratory has been working with Georgia Tech on energy grid projects. AI4OPT, which includes several industrial and academic partners, aims to achieve breakthroughs by combining AI and mathematical optimization.
“The use-inspired research in AI4OPT addresses fundamental societal and technological challenges,” said Pascal Van Hentenryck, AI4OPT director. “The energy grid is crucial to our daily lives. Our collaboration with Los Alamos advances a research mission and educational vision with significant impact for science and society.”
The three-year agreement, funded through the Laboratory Directed Research and Development program’s ArtIMis initiative, runs through 2027. It supports the Laboratory’s commitment to advancing AI. Earl Lawrence is the project’s principal investigator, with Diane Oyen and Emily Castleton joining Bent as co-principal investigators.
Bent, Castleton, Lawrence, and Oyen are also members of the AI Council at the Laboratory. The AI Council helps the Lab navigate the evolving AI landscape, build investment capacities, and forge industry and academic partnerships.
As highlighted in the Department of Energy’s Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) initiative, AI technologies will significantly enhance the contributions of laboratories to national missions. This partnership with Georgia Tech through AI4OPT is a key step towards that future.
News Contact
Breon Martin
Aug. 19, 2024
Nylon, Teflon, Kevlar. These are just a few familiar polymers — large-molecule chemical compounds — that have changed the world. From Teflon-coated frying pans to 3D printing, polymers are vital to creating the systems that make the world function better.
Finding the next groundbreaking polymer is always a challenge, but now Georgia Tech researchers are using artificial intelligence (AI) to shape and transform the future of the field. Rampi Ramprasad’s group develops and adapts AI algorithms to accelerate materials discovery.
This summer, two papers published in the Nature family of journals highlight the significant advancements and success stories emerging from years of AI-driven polymer informatics research. The first, featured in Nature Reviews Materials, showcases recent breakthroughs in polymer design across critical and contemporary application domains: energy storage, filtration technologies, and recyclable plastics. The second, published in Nature Communications, focuses on the use of AI algorithms to discover a subclass of polymers for electrostatic energy storage, with the designed materials undergoing successful laboratory synthesis and testing.
“In the early days of AI in materials science, propelled by the White House’s Materials Genome Initiative over a decade ago, research in this field was largely curiosity-driven,” said Ramprasad, a professor in the School of Materials Science and Engineering. “Only in recent years have we begun to see tangible, real-world success stories in AI-driven accelerated polymer discovery. These successes are now inspiring significant transformations in the industrial materials R&D landscape. That’s what makes this review so significant and timely.”
AI Opportunities
Ramprasad’s team has developed groundbreaking algorithms that can instantly predict polymer properties and formulations before they are physically created. The process begins by defining application-specific target properties or performance criteria. Machine learning (ML) models train on existing material-property data to predict these desired outcomes. The team can also generate new polymers whose properties are forecast with ML models. The top candidates that meet the target criteria are then selected for real-world validation through laboratory synthesis and testing. The results from these experiments are integrated with the original data, further refining the predictive models in a continuous, iterative process.
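A schematic of that iterative loop might look like the following, with a synthetic "lab measurement" standing in for real synthesis and testing; every number and representation here is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

# The 3-component "fingerprint" is a stand-in for a real polymer
# representation, and lab_measurement() stands in for synthesis
# and testing.
def lab_measurement(fp):
    return fp @ np.array([0.5, -0.2, 0.8]) + rng.normal(0, 0.05)

X = rng.random((50, 3))                       # known polymers
y = np.array([lab_measurement(fp) for fp in X])

for round_ in range(3):
    model = GradientBoostingRegressor().fit(X, y)   # train on all data
    pool = rng.random((1_000, 3))             # generated candidates
    best = pool[np.argmax(model.predict(pool))]     # top candidate
    y_new = lab_measurement(best)             # "validate in the lab"
    X, y = np.vstack([X, best]), np.append(y, y_new)
    print(f"round {round_}: measured property = {y_new:.3f}")
```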
While AI can accelerate the discovery of new polymers, it also presents unique challenges. The accuracy of AI predictions depends on the availability of rich, diverse, extensive initial data sets, making quality data paramount. Additionally, designing algorithms capable of generating chemically realistic and synthesizable polymers is a complex task.
The real challenge begins after the algorithms make their predictions: proving that the designed materials can be made in the lab and function as expected and then demonstrating their scalability beyond the lab for real-world use. Ramprasad’s group designs these materials, while their fabrication, processing, and testing are carried out by collaborators at various institutions, including Georgia Tech. Professor Ryan Lively from the School of Chemical and Biomolecular Engineering frequently collaborates with Ramprasad’s group and is a co-author of the paper published in Nature Reviews Materials.
"In our day-to-day research, we extensively use the machine learning models Rampi’s team has developed,” Lively said. “These tools accelerate our work and allow us to rapidly explore new ideas. This embodies the promise of ML and AI because we can make model-guided decisions before we commit time and resources to explore the concepts in the laboratory."
Using AI, Ramprasad’s team and their collaborators have made significant advancements in diverse fields, including energy storage, filtration technologies, additive manufacturing, and recyclable materials.
Polymer Progress
One notable success, described in the Nature Communications paper, involves the design of new polymers for capacitors, which store electrostatic energy. These devices are vital components in electric and hybrid vehicles, among other applications. Ramprasad’s group worked with researchers from the University of Connecticut.
Current capacitor polymers offer either high energy density or thermal stability, but not both. By leveraging AI tools, the researchers determined that insulating materials made from polynorbornene and polyimide polymers can simultaneously achieve high energy density and high thermal stability. The polymers can be further enhanced to function in demanding environments, such as aerospace applications, while maintaining environmental sustainability.
“The new class of polymers with high energy density and high thermal stability is one of the most concrete examples of how AI can guide materials discovery,” said Ramprasad. “It is also the result of years of multidisciplinary collaborative work with Greg Sotzing and Yang Cao at the University of Connecticut and sustained sponsorship by the Office of Naval Research.”
Industry Potential
The potential for real-world translation of AI-assisted materials development is underscored by industry participation in the Nature Reviews Materials article. Co-authors of this paper also include scientists from Toyota Research Institute and General Electric. To further accelerate the adoption of AI-driven materials development in industry, Ramprasad co-founded Matmerize Inc., a software startup company recently spun out of Georgia Tech. Their cloud-based polymer informatics software is already being used by companies across various sectors, including energy, electronics, consumer products, chemical processing, and sustainable materials.
“Matmerize has transformed our research into a robust, versatile, and industry-ready solution, enabling users to design materials virtually with enhanced efficiency and reduced cost,” Ramprasad said. “What began as a curiosity has gained significant momentum, and we are entering an exciting new era of materials by design.”
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu
Aug. 09, 2024
A research group is calling for internet and social media moderators to strengthen their detection and intervention protocols for violent speech.
Their study of language detection software found that algorithms struggle to differentiate anti-Asian violence-provoking speech from general hate speech. Left unchecked, threats of violence online can go unnoticed and turn into real-world attacks.
Researchers from Georgia Tech and the Anti-Defamation League (ADL) teamed together in the study. They made their discovery while testing natural language processing (NLP) models trained on data they crowdsourced from Asian communities.
“The Covid-19 pandemic brought attention to how dangerous violence-provoking speech can be. There was a clear increase in reports of anti-Asian violence and hate crimes,” said Gaurav Verma, a Georgia Tech Ph.D. candidate who led the study.
“Such speech is often amplified on social platforms, which in turn fuels anti-Asian sentiments and attacks.”
Violence-provoking speech differs from more commonly studied forms of harmful speech, like hate speech. While hate speech denigrates or insults a group, violence-provoking speech implicitly or explicitly encourages violence against targeted communities.
Humans can define and characterize violent speech as a subset of hateful speech. However, computer models struggle to tell the difference due to subtle cues and implications in language.
The researchers tested five different NLP classifiers and analyzed their F1 scores, a standard measure of a model’s performance. The classifiers scored 0.89 for detecting hate speech but only 0.69 for detecting violence-provoking speech, a contrast that highlights the notable gap in these tools’ accuracy and reliability on the harder task.
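For context, F1 is the harmonic mean of precision and recall, so a 0.89-versus-0.69 gap is substantial. The precision/recall pairs below are hypothetical values chosen only to reproduce scores of that magnitude.

```python
# F1 combines precision (how many flagged posts are truly harmful)
# and recall (how many harmful posts get flagged).
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(f1(0.90, 0.88))  # ~0.89: balanced, strong detector
print(f1(0.80, 0.61))  # ~0.69: many provoking posts slip through
```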
The study stresses the importance of developing more refined methods for detecting violence-provoking speech. Internet misinformation and inflammatory rhetoric escalate tensions that lead to real-world violence.
The Covid-19 pandemic exemplified how public health crises intensify this behavior, helping inspire the study. The group cited reports that anti-Asian hate crime across the U.S. increased by 339% in 2021, fueled in part by malicious content blaming Asians for the virus.
The researchers believe their findings show the effectiveness of community-centric approaches to problems dealing with harmful speech. These approaches would enable informed decision-making between policymakers, targeted communities, and developers of online platforms.
Along with stronger models for detecting violence-provoking speech, the group discusses a direct solution: a tiered penalty system on online platforms. Tiered systems align penalties with severity of offenses, acting as both deterrent and intervention to different levels of harmful speech.
“We believe that we cannot tackle a problem that affects a community without involving people who are directly impacted,” said Jiawei Zhou, a Ph.D. student who studies human-centered computing at Georgia Tech.
“By collaborating with experts and community members, we ensure our research builds on front-line efforts to combat violence-provoking speech while remaining rooted in real experiences and needs of the targeted community.”
The researchers trained the NLP classifiers they tested on a dataset crowdsourced from a survey of 120 participants who self-identified as members of Asian communities. In the survey, participants labeled 1,000 posts from X (formerly Twitter) as containing violence-provoking speech, hateful speech, or neither.
Since characterizing violence-provoking speech is not universal, the researchers created a specialized codebook for survey participants. The participants studied the codebook before their survey and used an abridged version while labeling.
To create the codebook, the group used an initial set of anti-Asian keywords to scan posts on X from January 2020 to February 2023. This tactic yielded 420,000 posts containing harmful, anti-Asian language.
The researchers then filtered the batch through new keywords and phrases. This refined the sample to 4,000 posts that potentially contained violence-provoking content. Keywords and phrases were added to the codebook while the filtered posts were used in the labeling survey.
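The funnel can be thought of as two passes, a broad seed match followed by a stricter refinement. The sketch below uses placeholder keywords, not the study's actual lexicon.

```python
import re

# Simplified two-stage funnel in the spirit of the one described
# above; keyword_a/phrase_x are placeholders, not real terms.
SEED = re.compile(r"\b(keyword_a|keyword_b)\b", re.I)
REFINE = re.compile(r"\b(phrase_x|phrase_y)\b", re.I)

def funnel(posts):
    broad = [p for p in posts if SEED.search(p)]   # ~420,000 scale
    return [p for p in broad if REFINE.search(p)]  # ~4,000 scale

sample = ["a post with keyword_a and phrase_x", "a benign post"]
print(funnel(sample))
```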
The team used discussion and pilot testing to validate its codebook. During pilot testing, participants labeled 100 X posts to confirm the sound design of the Asian community survey. The group also sent the codebook to the ADL for review and incorporated the organization’s feedback.
“One of the major challenges in studying violence-provoking content online is effective data collection and funneling down because most platforms actively moderate and remove overtly hateful and violent material,” said Tech alumnus Rynaa Grover (M.S. CS 2024).
“To address the complexities of this data, we developed an innovative pipeline that deals with the scale of this data in a community-aware manner.”
Emphasis on community input extended into collaboration within Georgia Tech’s College of Computing. Faculty members Srijan Kumar and Munmun De Choudhury oversaw the research that their students spearheaded.
Kumar, an assistant professor in the School of Computational Science and Engineering, advises Verma and Grover. His expertise is in artificial intelligence, data mining, and online safety.
De Choudhury is an associate professor in the School of Interactive Computing and advises Zhou. Their research connects societal mental health and social media interactions.
The Georgia Tech researchers partnered with the ADL, a leading non-governmental organization that combats real-world hate and extremism. ADL researchers Binny Mathew and Jordan Kraemer co-authored the paper.
The group will present its paper at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), which takes place in Bangkok, Thailand, Aug. 11-16.
ACL 2024 accepted 40 papers written by Georgia Tech researchers. Of the 12 Georgia Tech faculty who authored papers accepted at the conference, nine are from the College of Computing, including Kumar and De Choudhury.
“It is great to see that the peers and research community recognize the importance of community-centric work that provides grounded insights about the capabilities of leading language models,” Verma said.
“We hope the platform encourages more work that presents community-centered perspectives on important societal problems.”
Visit https://sites.gatech.edu/research/acl-2024/ for news and coverage of Georgia Tech research presented at ACL 2024.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Aug. 08, 2024
Social media users may need to think twice before hitting that “Post” button.
A new large language model (LLM) developed by Georgia Tech researchers can help them filter content that could risk their privacy and offer alternative phrasing that keeps the context of their posts intact.
According to a new paper that will be presented at the 2024 Association for Computational Linguistics (ACL) conference, social media users should tread carefully about the information they self-disclose in their posts.
Many people use social media to express their feelings about their experiences without realizing the risks to their privacy. For example, a person revealing their gender identity or sexual orientation may be subject to doxing and harassment from outside parties.
Others want to express their opinions without their employers or families knowing.
Ph.D. student Yao Dou and associate professors Alan Ritter and Wei Xu originally set out to study user awareness of self-disclosure privacy risks on Reddit. Working with anonymous users, they created an LLM to detect at-risk content.
While the study boosted users’ awareness of the personal information they revealed, many called for an intervention, asking the researchers for help rewriting their posts so they wouldn’t have to worry about privacy.
The researchers revamped the model to suggest alternative phrases that reduce the risk of privacy invasion.
One user disclosed, “I’m 16F I think I want to be a bi M.” The new tool offered alternative phrases such as:
- “I am exploring my sexual identity.”
- “I have a desire to explore new options.”
- “I am attracted to the idea of exploring different gender identities.”
Dou said the challenge is making sure the model provides suggestions that don’t change or distort the desired context of the post.
“That’s why instead of providing one suggestion, we provide three suggestions that are different from each other, and we allow the user to choose which one they want,” Dou said. “In some cases, the discourse information is important to the post, and in that case, they can choose what to abstract.”
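Conceptually, the suggestion step can be framed as a single prompt that asks a language model for three distinct lower-risk rewrites. The sketch below is hypothetical; complete() is a placeholder for whatever LLM client is used, not the researchers' model or API.

```python
# Hypothetical sketch of the suggestion step: generate three distinct
# lower-risk rewrites and let the user choose.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def suggest_rewrites(post: str, span: str) -> list[str]:
    prompt = (
        "The following post contains a self-disclosure span.\n"
        f"Post: {post}\nSpan: {span}\n"
        "Rewrite the span three different ways that reduce privacy "
        "risk while preserving the post's intent, one per line."
    )
    # Keep only the first three lines as the user-facing suggestions.
    return complete(prompt).splitlines()[:3]
```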
Weighing the Risks
The researchers sampled 10,000 Reddit posts from a pool of 4 million that met their search criteria. They annotated those posts and created 19 categories of self-disclosures, including age, sexual orientation, gender, race or nationality, and location.
From there, they worked with Reddit users to test the effectiveness and accuracy of their model, with 82% giving positive feedback.
However, a contingent thought the model was “oversensitive,” highlighting content they did not believe posed a risk.
Ultimately, the researchers say users must decide what they will post.
“It’s a personal decision,” Ritter said. “People need to look at this and think about what they’re writing and decide between this tradeoff of what benefits they are getting from sharing information versus what privacy risks are associated with that.”
Xu acknowledged that future work on the project should include a metric that gives users a better idea of what types of content are more at risk than others.
“It’s kind of the way passwords work,” she said. “Years ago, they never told you your password strength, and now there’s a bar telling you how good your password is. Then you realize you need to add a special character and capitalize some letters, and that’s become a standard. This is telling the public how they can protect themselves. The risk isn’t zero, but it helps them think about it.”
What Are the Consequences?
While doxing and harassment are the most likely consequences of posting sensitive personal information, especially for those who belong to minority groups, the researchers say users have other privacy concerns.
Users should know that when they draft posts on a site, their input can be extracted by the site’s application programming interface (API). If that site has a data breach, a user’s personal information could fall into unwanted hands.
“I think we should have a path toward having everything work locally on the user’s computer, so it doesn’t rely on any external APIs to send this data off their local machine,” Ritter said.
Ritter added that users could also be targets of popular scams like phishing without ever knowing it.
“People trying targeted phishing attacks can learn personal information about people online that might help them craft more customized attacks that could make users vulnerable,” he said.
The safest way to avoid a breach of privacy is to stay off social media. But Xu said that’s impractical as there are resources and support these sites can provide that users may not get from anywhere else.
“We want people who may be afraid of social media to use it and feel safe when they post,” she said. “Maybe the best way to get an answer to a question is to ask online, but some people don’t feel comfortable doing that, so a tool like this can make them more comfortable sharing without much risk.”
For more information about Georgia Tech research at ACL, please visit https://sites.gatech.edu/research/acl-2024/.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
Aug. 01, 2024
A Georgia Tech researcher will continue to mitigate harmful post-deployment effects created by artificial intelligence (AI) as he joins the 2024-2025 cohort of fellows selected by the Berkman Klein Center for Internet & Society (BKC) at Harvard University.
Upol Ehsan is the first Georgia Tech graduate selected by BKC. As a fellow, he will contribute to its mission of exploring and understanding cyberspace, focusing on AI, social media, and university discourse.
Entering its 25th year, the BKC Harvard fellowship program addresses pressing issues and produces impactful research that influences academia and public policy. It offers a global perspective, a vibrant intellectual community, and significant funding and resources that attract top scholars and leaders.
The program is highly competitive and sought after by early career candidates and veteran academic and industry professionals. Cohorts hail from numerous backgrounds, including law, computer science, sociology, political science, neuroscience, philosophy, and media studies.
“Having the opportunity to join such a talented group of people and working with them is a treat,” Ehsan said. “I’m looking forward to adding to the prismatic network of BKC Harvard and learning from the cohesively diverse community.”
While at Georgia Tech, Ehsan expanded the field of explainable AI (XAI) and pioneered a subcategory he labeled human-centered explainable AI (HCXAI). Several of his papers introduced novel and foundational concepts into that subcategory of XAI.
Ehsan works with Professor Mark Riedl in the School of Interactive Computing and the Human-centered AI and Entertainment Intelligence Lab.
Ehsan says he will continue to work on research he introduced in his 2022 paper, “The Algorithmic Imprint,” which shows how the potential harm from algorithms can linger even after an algorithm is no longer used. His research has informed the United Nations’ algorithmic reparations policies and has been incorporated into the National Institute of Standards and Technology AI Risk Management Framework.
“It’s a massive honor to receive this recognition of my work,” Ehsan said. “The Algorithmic Imprint remains a globally applicable Responsible AI concept developed entirely from the Global South. This recognition is dedicated to the participants who made this work possible. I want to take their stories even further."
While at BKC Harvard, Ehsan will develop a taxonomy of potentially harmful AI effects after a model is no longer used. He will also design a process to anticipate these effects and create interventions. He said his work addresses an “accountability blindspot” in responsible AI, which tends to focus on potential harmful effects created during AI deployment.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
Jul. 30, 2024
From airplanes to soda cans, aluminum is a crucial — not to mention incredibly sustainable — material in manufacturing. Since 2019, Georgia Tech has partnered with Novelis, a global leader in aluminum rolling and recycling, through the Novelis Innovation Hub to advance research and business opportunities in aluminum manufacturing.
Novelis and the Georgia Institute of Technology recently co-hosted the 19th International Conference on Aluminum Alloys (ICAA19). Held on Georgia Tech's campus, this event brought together the brightest minds in aluminum technology for four days of intensive learning and networking.
Since its inception in 1986, ICAA has been the premier global forum for aluminum manufacturing innovations. This year, the conference attracted over 300 participants from 19 countries, including representatives from academia, research organizations, and industry leaders.
“The diverse mix of attendees created a rich tapestry of knowledge and experience, fostering a robust exchange of ideas,” said Naresh Thadhani, conference co-chair and professor in the School of Materials Science and Engineering.
ICAA19 featured 12 symposia topics and over 250 technical presentations, delving into critical themes such as sustainability, future mobility, and next-generation manufacturing. Keynote addresses from leaders at the Aluminum Association, Airbus, and Coca-Cola set the stage for insightful discussions. Novelis Chief Technology Officer Philippe Meyer and Georgia Tech Executive Vice President for Research Chaouki Abdallah headlined the event, underscoring the importance of Novelis’ partnership with Georgia Tech.
Marking the fifth anniversary of the Novelis Innovation Hub at Georgia Tech, Hub Executive Director Shreyes Melkote says that “ICAA19 represents a prime example of the close collaboration between Novelis and the Institute, enabled by the Novelis Innovation Hub.” Melkote, a professor in the George W. Woodruff School of Mechanical Engineering, also serves as the associate director of the Georgia Tech Manufacturing Institute.
“This unique center for research, development, and technology has been instrumental in advancing aluminum innovations, exemplifying the power of partnerships in driving industry progress,” says Meyer. “As we reflect on the success of ICAA19, we remain committed to strengthening our existing partnerships and forging new alliances to accelerate innovation. The collaborative spirit showcased at the conference is a testament to our dedication to leading the aluminum industry into a more sustainable future.”
News Contact
Audra Davidson
Research Communications Program Manager
Georgia Tech Manufacturing Institute
Jul. 23, 2024
When it comes to manufacturing innovation, the “valley of death” — the gap between the lab and the industry floor where even the best discoveries often get lost — looms large.
“An individual faculty’s lab focuses on showing the innovation or the new science that they discovered,” said Aaron Stebner, professor and Eugene C. Gwaltney Jr. Chair in Manufacturing in the George W. Woodruff School of Mechanical Engineering. “At that point, the business case hasn't been made for the technology yet — there's no testing on an industrial system to know if it breaks or if it scales up. A lot of innovation and scientific discovery dies there.”
The Georgia Tech Manufacturing Institute (GTMI) launched the Advanced Manufacturing Pilot Facility (AMPF) in 2017 to help bridge that gap.
Now, GTMI is breaking ground on an extensive expansion to bring new capabilities in automation, artificial intelligence, and data management to the facility.
“This will be the first facility of this size that's being intentionally designed to enable AI to perform research and development in materials and manufacturing at the same time,” said Stebner, “setting up GTMI as not just a leader in Georgia, but a leader in automation and AI in manufacturing across the country.”
AMPF: A Catalyst for Collaboration
Located just north of Georgia Tech’s main campus, AMPF is a 20,000-square-foot facility serving as a teaching laboratory, technology test bed, and workforce development space for manufacturing innovations.
“The pilot facility,” says Stebner, “is meant to be a place where stakeholders in academic research, government, industry, and workforce development can come together and develop both the workforce that is needed for future technologies, as well as mature, de-risk, and develop business cases for new technologies — proving them out to the point where it makes sense for industry to pick them up.”
In addition to serving as the flagship facility for GTMI research and the state’s Georgia AIM (Artificial Intelligence in Manufacturing) project, the AMPF is a user facility accessible to Georgia Tech’s industry partners as well as the Institute’s faculty, staff, and students.
“We have all kinds of great capabilities and technologies, plus staff that can train students, postdocs, and faculty on how to use them,” said Stebner, who also serves as co-director of the GTMI-affiliated Georgia AIM project. “It creates a unique asset for Georgia Tech faculty, staff, and students.”
Bringing AI and Automation to the Forefront
The renovation of AMPF is a key component of the $65 million grant awarded to Georgia Tech by the U.S. Department of Commerce’s Economic Development Administration in 2022, which gave rise to the Georgia AIM project. With over $23 million in support from Georgia AIM, the improved facility will feature new workforce training programs, personnel, and equipment.
Set to be completed in spring 2026, the expansion is backed by a $16 million investment from the Institute. The construction will roughly triple the size of the facility and work to address a major roadblock to incorporating AI and automation into manufacturing practices: data.
“There’s a lot of work going on across the world in using machine learning in engineering problems, including manufacturing, but it's limited in scale-up and commercial adoption,” explained Stebner.
Machine learning algorithms have the potential to make manufacturing more efficient, but they need a lot of reliable, repeatable data about the processes and materials involved to be effective. Collecting that data manually is monotonous, costly, and time-consuming.
“The idea is to automate those functions that we need to enable AI and machine learning” in manufacturing, says Stebner. “Let it be a facility where you can imagine new things and push new boundaries and not just be stuck in demonstrating concepts over and over again.”
To make that possible, the expanded facility will couple AI and data management with robotic automation.
“We're going to be able to demonstrate automation from the very beginning of our process all the way through the entire ecosystem of manufacturing,” said Steven Sheffield, GTMI’s senior assistant director of research operations.
“This expansion — no one else has done anything like it,” added Steven Ferguson, principal research scientist with GTMI and managing director of Georgia AIM. “We will have the leading facility for demonstrating what a hyperconnected and AI-driven manufacturing enterprise looks like. We’re setting the stage for Georgia Tech to continue to lead in the manufacturing space for the next decade and beyond.”
News Contact
Audra Davidson
Research Communications Program Manager
Georgia Tech Manufacturing Institute
Jul. 23, 2024
From keeping warm in the winter to doing laundry, heat is crucial to daily life. But as the world grapples with climate change, buildings’ increasing energy consumption is a critical problem. Currently, heat is produced by burning fossil fuels like coal, oil, and gas, but that will need to change as the world shifts to clean energy.
Georgia Tech researchers in the George W. Woodruff School of Mechanical Engineering (ME) are developing more efficient heating systems that don’t rely on fossil fuels. They demonstrated that combining two commonly found salts could help store clean energy as heat; this can be used for heating buildings or integrated with a heat pump for cooling buildings.
The researchers published their findings in “Thermochemical Energy Storage Using Salt Mixtures With Improved Hydration Kinetics and Cycling Stability” in the Journal of Energy Storage.
Reaction Redux
The fundamental mechanics of heat storage are simple and can be achieved through many methods. A basic reversible chemical reaction is the foundation for their approach: A forward reaction absorbs heat and then stores it, while a reverse reaction releases the heat, enabling a building to use it.
ME Assistant Professor Akanksha Menon has been interested in thermal energy storage since she began working on her Ph.D. When she arrived at Georgia Tech and started the Water-Energy Research Lab (WERL), she became involved in not only developing storage technology and materials but also figuring out how to integrate them within a building. She thought understanding the fundamental material challenges could translate into creating better storage.
“I realized there are so many things that we don't understand, at a scientific level, about how these thermo-chemical materials work between the forward and reverse reactions,” she said.
The Superior Salt
The reactions Menon works with use salt. Each salt molecule can hold a certain number of water molecules within its structure. To instigate the chemical reaction, the researchers dehydrate the salt with heat, so it expels water vapor as a gas. To reverse the reaction, they hydrate the salt with water, forcing the salt structure’s expansion to accommodate those water molecules.
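Written as a chemical equation, one representative dehydration/hydration step looks like the following. This is shown for magnesium chloride hydrates purely as an illustration; the exact hydrate transitions and enthalpies in the study may differ.

```latex
% Heat is absorbed on dehydration (charging) and released on
% rehydration (discharging); the pair below is illustrative.
\mathrm{MgCl_2\cdot 6H_2O\,(s)} + \text{heat}
  \;\rightleftharpoons\;
  \mathrm{MgCl_2\cdot 4H_2O\,(s)} + 2\,\mathrm{H_2O\,(g)}
```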
It sounds like a simple process, but as this expansion/contraction process happens, the salt gets more stressed and will eventually mechanically fail, the same way lithium-ion batteries only have so many charge-discharge cycles.
“You can start with something that's a nice spherical particle, but after it goes through a few of these dehydration-hydration cycles, it just breaks apart into tiny particles and completely pulverizes or it overhydrates and agglomerates into a block,” Menon explained.
These changes aren’t necessarily catastrophic, but they do make the salt ineffective for long-term heat storage as the storage capacity decreases over time.
Menon and her student Erik Barbosa, a Ph.D. student in ME, began combining salts that react with water in different ways. After testing six salts over two years, they found two that complemented each other well: magnesium chloride often fails because it absorbs too much water, whereas strontium chloride is very slow to hydrate. Mixed together, the two salts offset each other’s limitations, leading to improved heat storage.
“We didn't plan to mix salts; it was just one of the experiments we tried,” Menon said. “Then we saw this interactive behavior and spent a whole year trying to understand why this was happening and if it was something we could generalize to use for thermal energy storage.”
The Energy Storage of the Future
Menon is just beginning with this research, which was supported by a National Science Foundation (NSF) CAREER Award. Her next step is developing the structures capable of containing these salts for heat storage, which is the focus of an Energy Earthshots project funded by the U.S. Department of Energy’s (DOE) Office of Basic Energy Sciences.
A system-level demonstration is also planned, in which one solution is to fill a drum with salts in a packed-bed reactor. Hot air flowing across the salts would dehydrate them, effectively charging the drum like a battery. To release the stored energy, humid air would be blown over the salts to rehydrate the crystals, and the released heat could be used in a building instead of fossil fuels. While initiating the reaction requires electricity, that electricity could come from excess off-peak renewable generation, and the stored thermal energy could be deployed at peak times. This is the focus of another ongoing project in the lab, funded by the DOE’s Building Technologies Office.
Ultimately, this technology could lead to climate-friendly energy solutions. Plus, unlike many alternatives like lithium batteries, salt is a widely available and cost-effective material, meaning its implementation could be swift. Salt-based thermal energy storage can help reduce carbon emissions, a vital strategy in the fight against climate change.
“Our research spans the range from fundamental science to applied engineering thanks to funding from the NSF and DOE,” Menon said. “This positions Georgia Tech to make a significant impact toward decarbonizing heat and enabling a renewable future.”
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu
Jul. 22, 2024
A partnership between the NSF Artificial Intelligence (AI) Research Institute for Advances in Optimization (AI4OPT) at Georgia Tech and PSR, Inc. - Energy Consulting and Analytics will advance AI and mathematical optimization to address pressing energy transformations in Latin America and the U.S.
PSR is a global leader in analytical solutions for the energy sector, providing innovative technical consultancy services and state-of-the-art power systems planning software. Its tools support detailed modeling of entire countries or regions and are used in over 70 countries.
Latin America boasts abundant renewable energy resources, especially hydropower, leading to one of the largest shares of renewables in its energy mix. However, expanding renewable energy capacity in Latin America and the U.S. to meet decarbonization goals will require system operational advances and new technologies that can adapt to current needs.
One focus of this collaboration will be studying how to efficiently incorporate pumped storage into the resource mix as a solution for long-duration storage. These plants act as large batteries, pumping water to higher reservoirs during low demand periods and generating electricity during high demand with minimal energy loss over time. This technology supports both short-term and long-term energy storage, making it crucial for managing the variability of intermittent renewables like solar and wind.
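The economic logic is simple to state: pump when electricity is cheap, generate when it is expensive, and accept a round-trip efficiency loss. The toy schedule below uses invented prices and an assumed 75% round-trip efficiency.

```python
# Toy arbitrage schedule for a pumped-storage plant: pump during the
# cheapest periods, generate during the most expensive ones. All
# numbers are illustrative.
prices = [20, 15, 12, 14, 30, 55, 60, 40]   # $/MWh over 8 periods
capacity_mwh = 100
efficiency = 0.75                           # assumed round-trip loss

order = sorted(range(len(prices)), key=lambda h: prices[h])
pump_hours = order[:2]                      # two cheapest periods
gen_hours = order[-2:]                      # two priciest periods

cost = sum(prices[h] for h in pump_hours) * capacity_mwh / 2
revenue = sum(prices[h] for h in gen_hours) * capacity_mwh * efficiency / 2
print(f"profit over the window: ${revenue - cost:,.0f}")
```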
The complex and large-scale nature of the expansion problem, exacerbated by inherent uncertainty and time-coupled decisions, traditionally requires sophisticated optimization techniques. AI innovations now provide faster solutions and better representations of non-linear dynamics, leading to more cost-effective operations and optimized energy mix selection for the energy transition.
This collaboration plans to use machine learning to enhance power system operators' ability to perform faster security checks and screenings. As renewable energy sources introduce more variability, traditional methods struggle with the increasing number of scenarios needed for grid stability. Machine learning offers a solution by expediting these analyses, supporting the integration of more renewable energy into the system.
About PSR
PSR is a global provider of analytical solutions for the energy sector, spanning from insightful and innovative technical consultancy services to state-of-the-art specialized power systems planning software applied to the detailed modeling of entire countries or regions. With its products applied in over 70 countries, PSR contributes to the research and development of optimization and data analytics tools that guarantee reliable, least-cost operation of power systems, helping countries achieve their decarbonization targets.
About AI4OPT
The Artificial Intelligence (AI) Research Institute for Advances in Optimization, or AI4OPT, aims to deliver a paradigm shift in automated decision-making at massive scales by fusing AI and Mathematical Optimization (MO) to achieve breakthroughs that neither field can achieve independently. The Institute is driven by societal challenges in energy, logistics and supply chains, resilience and sustainability, and circuit design and control. To address the widening gap in job opportunities, the Institute delivers an innovative longitudinal education and workforce development program.
News Contact
Breon Martin
AI Research Communications Manager
Georgia Tech