New CSE Faculty Lu Mi

Two new assistant professors joined the School of Computational Science and Engineering (CSE) faculty this fall. Lu Mi comes to Georgia Tech from the Allen Institute for Brain Science in Seattle, where she was a Shanahan Foundation Fellow. 

We sat down with Mi to learn more about her background and to introduce her to the Georgia Tech and College of Computing communities. 

Faculty: Lu Mi, assistant professor, School of CSE

Research Interests: Computational Neuroscience, Machine Learning

Education: Ph.D. in Computer Science from the Massachusetts Institute of Technology; B.S. in Measurement, Control, and Instruments from Tsinghua University

Hometown: Sichuan, China (home of the giant pandas) 

How have your first few months at Georgia Tech gone so far?

I’ve really enjoyed my time at Georgia Tech. Developing a new course has been both challenging and rewarding. I’ve learned a lot from the process and conversations with students. My colleagues have been incredibly welcoming, and I’ve had the opportunity to work with some very smart and motivated students here at Georgia Tech.

You hit the ground running this year by teaching your CSE 8803 course on brain-inspired machine intelligence. What important concepts do you teach in this class?

This course focuses on comparing biological neural networks with artificial neural networks. We explore questions like: How does the brain encode information, perform computations, and learn? What can neuroscience and artificial intelligence (AI) learn from each other? Key topics include spiking neural networks, neural coding, and biologically plausible learning rules. By the end of the course, I expect students to have a solid understanding of neural algorithms and the emerging NeuroAI field.
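One of the key topics mentioned above, spiking neural networks, can be illustrated with a toy leaky integrate-and-fire (LIF) neuron. This is a minimal sketch of the general idea only; the parameter values are illustrative and are not drawn from the course material.

```python
# Toy leaky integrate-and-fire (LIF) neuron: a minimal example of the kind of
# spiking model studied in NeuroAI. All parameters here are illustrative.

def simulate_lif(current, dt=1.0, tau=10.0, v_rest=0.0, v_reset=0.0, v_thresh=1.0):
    """Simulate an LIF neuron driven by an input-current trace.

    Returns the membrane-potential trace and the time steps at which
    the neuron spiked.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(current):
        # Leaky integration: the potential decays toward rest and is
        # pushed up by the input current.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset            # reset after spiking
        trace.append(v)
    return trace, spikes

# A constant suprathreshold input produces a regular spike train.
trace, spikes = simulate_lif([1.5] * 100)
print(f"{len(spikes)} spikes in 100 steps")
```

Unlike the continuous activations in standard artificial neural networks, the neuron here communicates through discrete spike events, which is the core contrast the course explores.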

When and how did you become interested in computational neuroscience in the first place?

I’ve been fascinated by how the brain works since I was young. My formal engagement with the field began during my Ph.D. research, where we developed algorithms to help neuroscientists map large-scale synaptic wiring diagrams in the brain. Since then, I’ve had the opportunity to collaborate with researchers at institutions like Harvard, the Janelia Research Campus, the Allen Institute for Brain Science, and the University of Washington on various exciting projects in this field.

What about your experience and research are you currently most proud of?

I’m particularly proud of the framework we developed to integrate black-box machine learning models with biologically realistic mechanistic models. We use advanced deep-learning techniques to infer unobserved information and combine this with prior knowledge from mechanistic models. This allows us to test hypotheses by applying different model variants. I believe this framework holds great potential to address a wide range of scientific questions, leveraging the power of AI.

What about Georgia Tech convinced you to accept a faculty position?

Georgia Tech CSE felt like a perfect fit for my background and research interests, particularly within the AI4Science initiative and the development of computational tools for biology and neuroscience. My work overlaps with several colleagues here, and I’m excited to collaborate with them. Georgia Tech also has a vibrant and impactful Neuro Next Initiative community, which is another great attraction.

What are your hobbies and interests when not researching and teaching?

I enjoy photography and love spending time with my two corgi dogs, especially taking them for walks.

What have you enjoyed most so far about living in Atlanta? 

I’ve really appreciated the peaceful, green environment with so many trees. I’m also looking forward to exploring more outdoor activities, like fishing and golfing.

News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu


Teodora Baluta is looking for Ph.D. students to join her in researching deepfake detection, malicious AI use, and building secure AI models with privacy in mind. Photos by Terence Rushin, College of Computing

New cybersecurity research initiatives into generative artificial intelligence (AI) tools will soon be underway at Georgia Tech, thanks to the efforts of a new assistant professor in the School of Cybersecurity and Privacy (SCP).

While some researchers seek ways to integrate AI into security practices, Teodora Baluta studies the algorithms and datasets used to train new AI tools to assess their security in theory and practice.

Specifically, she investigates whether the outputs from generative AI tools are abusing data or producing text based on stolen data. As one of Georgia Tech’s newest faculty, Baluta is determined to build on the research she completed during her Ph.D. at the National University of Singapore. 

She plans to expand on her past work by continuing to analyze existing AI technologies and researching ways to build better machine learning systems with security measures already in place.

“One thing that excites me about joining SCP is its network of experts that can weigh in on aspects that are outside of my field,” said Baluta. “I am really looking forward to building on my past works by studying the bigger security picture of AI and machine learning.” 

As a new faculty member, Baluta is looking for Ph.D. students interested in joining her in these new research initiatives.

“We’re going to be looking at topics such as the mathematical possibility of detecting deepfakes, uncovering the malicious intent behind AI use, and how to build better AI models with security and privacy safeguards,” she said.

Baluta’s research has been recognized by Google’s Ph.D. fellowship program and Georgia Tech’s EECS Rising Stars Workshop in 2023. As a Ph.D. student, she earned the Dean’s Graduate Research Excellence Award and the President’s Graduate Fellowship at the National University of Singapore. She was also selected as a finalist for the Microsoft Research Ph.D. Fellowship, Asia-Pacific.

News Contact

John Popham

Communications Officer II

School of Cybersecurity and Privacy


DHS Assistant Secretary for CWMD, Mary Ellen Callahan, speaks to students on the Georgia Tech campus in September. Photo by Terence Rushin, College of Computing

Even though artificial intelligence (AI) is not advanced enough to help the average person build weapons of mass destruction, federal agencies know it could be possible and are keeping pace with next generation technologies through rigorous research and strategic partnerships. 

It is a delicate balance, but as the leader of the Department of Homeland Security’s (DHS) Countering Weapons of Mass Destruction Office (CWMD) told a room full of Georgia Tech students, faculty, and staff, there is no room for error.

“You have to be right all the time, the bad guys only have to be right once,” said Mary Ellen Callahan, assistant secretary for CWMD. 

As a guest of John Tien, former DHS deputy secretary and professor of practice in the School of Cybersecurity and Privacy as well as the Sam Nunn School of International Affairs, Callahan was at Georgia Tech for three separate speaking engagements in late September. 

"Assistant Secretary Callahan's contributions were remarkable in so many ways,” said Tien. “Most importantly, I love how she demonstrated to our students that the work in the fields of cybersecurity, privacy, and homeland security is an honorable, interesting, and substantive way to serve the greater good of keeping the American people safe and secure. As her former colleague at the U.S. Department of Homeland Security, I was proud to see her represent her CWMD team, DHS, and the Biden-Harris Administration in the way she did, with humility, personality, and leadership."

While the thought of AI-assisted WMDs is terrifying, it is just a glimpse into what Callahan’s office handles on a regular basis. The assistant secretary walked her listeners through how CWMD works with federal and local law enforcement to identify and detect the signs of potential chemical, biological, radiological, or nuclear (CBRN) weapons.

“There's a whole cadre of professionals who spend every day preparing for the worst day in U.S. history,” said Callahan. “They are doing everything in their power to make sure that that does not happen.”

CWMD is also researching ways to implement AI technologies into current surveillance systems to help identify and respond to threats faster. For example, an AI-backed biohazard surveillance system would allow analysts to characterize and contextualize the risk of potential biohazard threats in a timely manner.

Callahan’s office spearheaded a report exploring the advantages and risks of AI, “Reducing the Risks at the Intersection of Artificial Intelligence and Chemical, Biological, Radiological, and Nuclear Threats,” which was released to the public earlier this year.

The report was a multidisciplinary effort that was created in collaboration with the White House Office of Science and Technology Policy, Department of Energy, academic institutions, private industries, think tanks, and third-party evaluators. 

During his introduction of the assistant secretary, SCP Chair Michael Bailey told those seated in the Coda Atrium that Callahan’s career is an incredible example of the interdisciplinary nature he hopes the school’s students and faculty can use as a roadmap.

“Important, impactful, and interdisciplinary research can be inspired by everyday problems,” he said. "We believe that building a secure future requires revolutionizing security education and being vigilant, and together, we can achieve this goal."

While on campus Tuesday, Callahan gave a special guest lecture to the students in “CS 3237: Human Dimension of Cybersecurity: People, Organizations, Societies” and “CS 4267: Critical Infrastructures.” Following the lecture, she gave a prepared speech to students, faculty, and staff.

Lastly, she participated in a moderated panel discussion with SCP J.Z. Liang Chair Peter Swire and Jerry Perullo, SCP professor of practice and former CISO of International Continental Exchange as well as the New York Stock Exchange. The panel was moderated by Tien.

News Contact

John Popham, Communications Officer II 

School of Cybersecurity and Privacy | Georgia Institute of Technology

scp.cc.gatech.edu | in/jp-popham on LinkedIn

Get the latest SCP updates by joining our mailing list!

Sahil Khose

Is it a building or a street? How tall is the building? Are there powerlines nearby?

These are details autonomous flying vehicles would need to know to function safely. However, few aerial image datasets exist that can adequately train the computer vision algorithms that would pilot these vehicles.

That’s why Georgia Tech researchers created a new benchmark dataset of computer-generated aerial images.

Judy Hoffman, an assistant professor in Georgia Tech’s School of Interactive Computing, worked with students in her lab to create SKYSCENES. The dataset contains over 33,000 aerial images of cities curated from a computer simulation program.

Hoffman said sufficient training datasets could unlock the potential of autonomous flying vehicles. Constructing those datasets is a challenge the computer vision research community has been working for years to overcome.

“You can’t crowdsource it the same way you would standard internet images,” Hoffman said. “Trying to collect it manually would be very slow and expensive — akin to what the self-driving industry is doing driving around vehicles, but now you’re talking about drones flying around. 

“We must fix those problems to have models that work reliably and safely for flying vehicles.”

Many existing datasets aren’t annotated well enough for algorithms to distinguish objects in the image. For example, the algorithms may not recognize the surface of a building from the surface of a street.

Working with Hoffman, Ph.D. student Sahil Khose tried a new approach — constructing a synthetic image dataset from a ground-view, open-source simulator known as CARLA.

CARLA was originally designed to provide ground-view simulation for self-driving vehicles. It creates an open-world virtual reality that allows users to drive around in computer-generated cities.

Khose and his collaborators adjusted CARLA’s interface to support aerial views that mimic views one might get from unmanned aerial vehicles (UAVs). 

What's the Forecast?

The team also created new virtual scenarios to mimic the real world by accounting for changes in weather, times of day, various altitudes, and population per city. The algorithms will struggle to recognize the objects in the frame consistently unless those details are incorporated into the training data.

“CARLA’s flexibility offers a wide range of environmental configurations, and we take several important considerations into account while curating SKYSCENES images from CARLA,” Khose said. “Those include strategies for obtaining diverse synthetic data, embedding real-world irregularities, avoiding correlated images, addressing skewed class representations, and reproducing precise viewpoints.”
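The curation strategy Khose describes can be sketched in miniature: enumerate diverse environmental configurations and subsample frames so that consecutive, near-duplicate views are dropped. The configuration names and spacings below are hypothetical illustrations, not SKYSCENES' actual settings or calls to the CARLA API.

```python
# Hypothetical sketch of the curation idea: cross weather, time of day,
# altitude, and town into a grid of configurations, then keep only every
# k-th frame per configuration to avoid highly correlated images.
# All names and values are illustrative, not SKYSCENES' real settings.
from itertools import product

WEATHERS = ["clear", "rain", "fog"]
TIMES_OF_DAY = ["noon", "sunset", "night"]
ALTITUDES_M = [15, 35, 60]
TOWNS = ["Town01", "Town02"]

def curation_plan(frames_per_config=100, keep_every=10):
    """Yield (config, frame_index) pairs for a diverse, decorrelated dataset."""
    for weather, tod, alt, town in product(WEATHERS, TIMES_OF_DAY,
                                           ALTITUDES_M, TOWNS):
        config = {"weather": weather, "time_of_day": tod,
                  "altitude_m": alt, "town": town}
        # Subsampling drops near-duplicate views from the same flight path.
        for frame in range(0, frames_per_config, keep_every):
            yield config, frame

plan = list(curation_plan())
# 3 weathers x 3 times x 3 altitudes x 2 towns = 54 configs, 10 frames each.
print(len(plan))  # 540
```

The point of the grid is balance: every environmental condition contributes equally to the training set, which is what lets a trained model generalize across weather, lighting, and viewpoint.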

SKYSCENES is not the largest dataset of aerial images to be released, but a paper co-authored by Khose shows that models trained on it outperform those trained on existing datasets.

Khose said models trained on this dataset exhibit strong generalization to real-world scenarios, and integration with real-world data enhances their performance. The dataset also controls variability, which is essential to perform various tasks.

“This dataset drives advancements in multi-view learning, domain adaptation, and multimodal approaches, with major implications for applications like urban planning, disaster response, and autonomous drone navigation,” Khose said. “We hope to bridge the gap for synthetic-to-real adaptation and generalization for aerial images.”

Seeing the Whole Picture

For algorithms, generalization is the ability to perform tasks based on new data that expands beyond the specific examples on which they were trained.

“If you have 200 images, and you train a model on those images, they’ll do well at recognizing what you want them to recognize in that closed-world initial setting,” Hoffman said. “But if we were to take aerial vehicles and fly them around cities at various times of the day or in other weather conditions, they would start to fail.”

That’s why Khose designed algorithms to enhance the quality of the curated images.

“These images are captured from 100 meters above ground, which means the objects appear small and are challenging to recognize,” he said. “We focused on developing algorithms specifically designed to address this.”

Those algorithms elevate the ability of machine learning (ML) models to recognize small objects, improving their performance in navigating new environments.

“Our annotations help the models capture a more comprehensive understanding of the entire scene — where the roads are, where the buildings are, and know they are buildings and not just an obstacle in the way,” Hoffman said. “It gives a richer set of information when planning a flight.

“To work safely, many autonomous flight plans might require a map given to them beforehand. If you have successful vision systems that understand exactly what the obstacles in the real world are, you could navigate in previously unseen environments.”

For more information about Georgia Tech Research at ECCV 2024, click here.

News Contact

Nathan Deen

Communications Officer

School of Interactive Computing


A year ago, Ray Hung, a master’s student in computer science, assisted Professor Thad Starner in constructing an artificial intelligence (AI)-powered anti-plagiarism tool for Starner’s 900-student Intro to Artificial Intelligence (CS3600) course.

While the tool proved effective, Hung began considering ways to deter plagiarism and improve the education system.

Plagiarism can be prevalent in online exams, so Hung looked at oral examinations commonly used in European education systems and rooted in the Socratic method.

One of the advantages of oral assessments is they naturally hinder cheating. Consulting ChatGPT wouldn’t benefit a student unless the student memorizes the entire answer. Even then, follow-up questions would reveal a lack of genuine understanding.

Hung drew inspiration from the 2009 reboot of Star Trek, particularly the opening scene in which a young Spock provides oral answers to questions prompted by AI.

“I think we can do something similar,” Hung said. “Research has shown that oral assessment improves people’s material understanding, critical thinking, and communication skills. 

“The problem is that it’s not scalable with human teachers. A professor may have 600 students. Even with teaching assistants, it’s not practical to conduct oral assessments. But with AI, it’s now possible.”

Hung developed The Socratic Mind with Starner, Scheller College of Business Assistant Professor Eunhee Sohn, and researchers from the Georgia Tech Center for 21st Century Universities (C21U).

The Socratic Mind is a scalable, AI-powered oral assessment platform leveraging Socratic questioning to challenge students to explain, justify, and defend their answers to showcase their understanding.

“We believe that if you truly understand something, you should be able to explain it,” Hung said. 

“There is a deeper need for fostering genuine understanding and cultivating high-order thinking skills. I wanted to promote an education paradigm in which critical thinking, material understanding, and communication skills play integral roles and are at the forefront of our education.”

Hung entered his project into the Learning Engineering Tools Competition, one of the largest education technology competitions in the world. Hung and his collaborators were among five teams that won a Catalyst Award and received a $50,000 prize.

Benefits for Students

The Socratic Mind will be piloted in several classes this semester with about 2,000 students participating. One of those classes is the Intro to Computing (CS1301) class taught by College of Computing Professor David Joyner.

Hung said The Socratic Mind will be a resource students can use to prepare to defend their dissertation or to teach a class if they choose to pursue a Ph.D. Anyone struggling with public speaking or preparing for job interviews will find the tool helpful. 

“Many users are interested in AI roleplay to practice real-world conversations,” he said. “The AI can roleplay a manager if you want to discuss a promotion. It can roleplay as an interviewer if you have a job interview. There are a lot of uses for oral assessment platforms where you can practice talking with an AI.

“I hope this tool helps students find their education more valuable and help them become better citizens, workers, entrepreneurs, or whoever they want to be in the future.”

Hung said the chatbot is not only conversational but also resistant to human persuasion because it follows the Socratic method of asking follow-up questions.

“ChatGPT and most other large language models are trained as helpful, harmless assistants,” he said. “If you argue with it and hold your position strong enough, you can coerce it to agree. We don’t want that.

“The Socratic Mind AI will follow up with you in real-time about what you just said, so it’s not a one-way conversation. It’s interactive and engaging and mimics human communication well.”

Educational Overhaul

C21U Director of Research in Education Innovation Jonna Lee and C21U Research Scientist Meryem Soylu will measure The Socratic Mind’s effectiveness during the pilot and determine its scalability.

“I thought it would be interesting to develop this further from a learning engineering perspective because it’s about systematic problem solving, and we want to create scalable solutions with technologies,” Lee said.

“I hope we can find actionable insights about how this AI tool can help transform classroom learning and assessment practices compared to traditional methods. We see the potential for personalized learning for various student populations, including non-traditional lifetime learners."

Hung said The Socratic Mind has the potential to revolutionize the U.S. education system depending on how the system chooses to incorporate AI.  

Recognizing that the advancement of AI is likely an unstoppable trend, Hung advocates leveraging AI to enhance learning and unlock human potential rather than focusing on restrictions.

“We are in an era in which information is abundant, but wisdom is scarce,” Hung said. “Shallow and rapid interactions drive social media, for example. We think it’s a golden time to elevate people’s critical thinking and communication skills.”

For more information about The Socratic Mind and to try a demo, visit the project's website.

News Contact

Nathan Deen

Communications Officer

School of Interactive Computing

Tech AI and CSSE Forge Partnership

In a major step forward for deploying artificial intelligence (AI) in industry, Georgia Tech’s newly established AI hub, Tech AI, has partnered with the Center for Scientific Software Engineering (CSSE). This collaboration aims to bridge the gap between academia and industry by advancing scalable AI solutions in sectors such as energy, mobility, supply chains, healthcare, and services.

Building on the Foundation of Success

CSSE, founded in late 2021 and supported by Schmidt Sciences as part of their VISS initiative, was created to advance and support scientific research by applying modern software engineering practices, cutting-edge technologies, and tools to the development of scientific software within and outside Georgia Tech. CSSE is led by Alex Orso, professor and associate dean in the College of Computing, and Jeff Young, a principal scientist at Georgia Tech. The Center's team boasts over 60 years of combined experience, with engineers from companies such as Microsoft, Amazon, and various startups, working under the supervision of the Center’s Head of Engineering, Dave Brownell. Their focus is on turning cutting-edge research into real-world products.

“Software engineering is about much more than just writing code,” Orso explained. “It’s also about specifying, designing, testing, deploying, and maintaining these systems.”

A Partnership to Support AI Research and Innovation

Through this collaboration, CSSE’s expertise will be integrated into Tech AI to create a software engineering division that can support AI engineering and also create new career opportunities for students and researchers.

Pascal Van Hentenryck, the A. Russell Chandler III Chair and professor in the H. Milton Stewart School of Industrial Engineering (ISyE)  and director of both the NSF AI Research Institute for Advances in Optimization (AI4OPT) and Tech AI, highlighted the potential of this partnership.

“We are impressed with the technology and talent within CSSE,” Van Hentenryck said. “This partnership allows us to leverage an existing, highly skilled engineering team rather than building one from scratch. It’s a unique opportunity to build the engineering pillar of Tech AI and push our AI initiatives forward, moving from pilots to products.”

“Joining our forces and having a professional engineering resource within Tech AI will give Georgia Tech a great competitive advantage over other AI initiatives,” Orso added.

One of the first projects under this collaboration focuses on AI in energy, particularly in developing new-generation, AI-driven market-clearing optimization and real-time risk assessment. Plans are also in place to pursue several additional projects, including the creation of an AI-powered search engine assistant, demonstrating the center’s ability to tackle complex, real-world problems.

This partnership is positioned to make a significant impact on applied AI research and innovation at Georgia Tech. By integrating modern software engineering practices, the collaboration will address key challenges in AI deployment, scalability, and sustainability, and translate AI research innovations into products with real societal impact.

“This is a match made in heaven,” Orso noted, reflecting on the collaboration’s alignment with Georgia Tech’s strategic goals to advance technology and improve human lives. Van Hentenryck added that “the collaboration is as much about creating new technologies as it is about educating the next generation of engineers.”

Promoting Open Source at Tech AI

A crucial element supporting the new Tech AI and CSSE venture is Georgia Tech’s Open Source Program Office (OSPO), a joint effort with the College of Computing, PACE, and the Georgia Tech Library. As an important hub of open-source knowledge, OSPO will provide education, training, and guidance on best practices for using and contributing to open-source AI frameworks.

“A large majority of the software driving our current accomplishments in AI research and development is built on a long history of open-source software and data sets, including frameworks like PyTorch and models like Meta’s LLaMA,” said Jeff Young, principal investigator at OSPO. “Understanding how we can best use and contribute to open-source AI is critical to our future success with Tech AI, and OSPO is well-suited to provide guidance, training, and expertise around these open-source tools, frameworks, and pipelines.”

Looking Ahead

As the partnership between Tech AI and CSSE evolves, both groups anticipate a future in which interdisciplinary research drives innovation. By integrating AI with real-world software engineering, the collaboration promises to create new opportunities for students, researchers, and Georgia Tech as a whole.

With a strong foundation, a talented team, and a clear vision, Tech AI and CSSE together are set to break new ground in AI and scientific research, propelling Georgia Tech to the forefront of technological advancement in the AI field.

 

About the Center for Scientific Software Engineering (CSSE)

The CSSE at Georgia Tech, supported by an $11 million grant from Schmidt Sciences, is one of four scientific software engineering centers within the Virtual Institute for Scientific Software (VISS). Its mission is to develop scalable, reliable, open-source software for scientific research, ensuring maintainability and effectiveness. Learn more at https://ssecenter.cc.gatech.edu.

About Georgia Tech’s Open Source Program Office (OSPO)

Georgia Tech’s OSPO supports the development of open-source research software across campus. Funded by a Sloan Foundation grant, OSPO provides community guidelines, training, and outreach to promote a thriving open-source ecosystem. Learn more at https://ospo.cc.gatech.edu.

About Schmidt Sciences

Schmidt Sciences is a nonprofit organization founded in 2024 by Eric and Wendy Schmidt that works to advance science and technology that deepens human understanding of the natural world and develops solutions to global issues. The organization makes grants in four areas—AI and advanced computing, astrophysics and space, biosciences and climate—as well as supporting researchers in a variety of disciplines through its science systems program. Learn more at https://www.schmidtsciences.org/

About Tech AI

Tech AI is Georgia Tech’s AI hub, advancing AI through research, education, and responsible deployment. The hub focuses on AI solutions for real-world applications, preparing the next generation of AI leaders. Learn more at https://ai.gatech.edu.

News Contact

Breon Martin

AI Marketing Communications Manager

KDD 2024
Austin P. Wright

A new algorithm tested on data from NASA’s Perseverance rover on Mars may lead to better forecasting of hurricanes, wildfires, and other extreme weather events that impact millions globally.

Georgia Tech Ph.D. student Austin P. Wright is first author of a paper that introduces Nested Fusion. The new algorithm improves scientists’ ability to search for past signs of life on the Martian surface. 

In addition to supporting NASA’s Mars 2020 mission, Nested Fusion offers methods that scientists in other fields working with large, overlapping datasets can apply in their own studies.

Wright presented Nested Fusion at the 2024 International Conference on Knowledge Discovery and Data Mining (KDD 2024) where it was a runner-up for the best paper award. KDD is widely considered the world's most prestigious conference for knowledge discovery and data mining research.

“Nested Fusion is really useful for researchers in many different domains, not just NASA scientists,” said Wright. “The method visualizes complex datasets that can be difficult to get an overall view of during the initial exploratory stages of analysis.”

Nested Fusion combines datasets with different resolutions to produce a single, high-resolution visual distribution. Using this method, NASA scientists can more easily analyze multiple datasets from various sources at the same time. This can lead to faster studies of Mars’ surface composition to find clues of previous life.

The algorithm demonstrates how data science impacts traditional scientific fields like chemistry, biology, and geology.

Even further, Wright is developing Nested Fusion applications to model shifting climate patterns, plant and animal life, and other concepts in the earth sciences. The same method can combine overlapping datasets from satellite imagery, biomarkers, and climate data.

“Users have extended Nested Fusion and similar algorithms to earth science contexts, for which we have received very positive feedback,” said Wright, who studies machine learning (ML) at Georgia Tech.

“Cross-correlational analysis takes a long time to do and is not done in the initial stages of research, when patterns first appear and new hypotheses form. Nested Fusion enables people to discover these patterns much earlier.”

Wright is the data science and ML lead for PIXLISE, the software that NASA JPL scientists use to study data from the Mars Perseverance Rover.

Perseverance uses its Planetary Instrument for X-ray Lithochemistry (PIXL) to collect data on mineral composition of Mars’ surface. PIXL’s two main tools that accomplish this are its X-ray Fluorescence (XRF) Spectrometer and Multi-Context Camera (MCC).

When PIXL scans a target area, it creates two co-aligned datasets from the components. XRF collects a sample's fine-scale elemental composition. MCC produces images of a sample to gather visual and physical details like size and shape. 

A single XRF spectrum corresponds to approximately 100 MCC imaging pixels for every scan point. Each tool’s unique resolution makes mapping between overlapping data layers challenging. However, Wright and his collaborators designed Nested Fusion to overcome this hurdle.
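The resolution mismatch described above can be made concrete with a toy example: one coarse measurement (like an XRF scan point) overlaps a whole block of fine pixels (like MCC imaging). The block-averaging below is only a sketch of the alignment problem under an assumed 10x10 block layout; it is not the Nested Fusion algorithm itself.

```python
# Toy illustration of mapping between co-aligned layers at different
# resolutions: each coarse cell aggregates a 10x10 block of fine pixels
# (~100 fine pixels per coarse measurement, as in the XRF/MCC example).
# This block-averaging is a sketch, not the actual Nested Fusion method.

def coarse_from_fine(fine, block=10):
    """Aggregate a 2-D fine-resolution grid into coarse block x block cells."""
    rows, cols = len(fine), len(fine[0])
    coarse = []
    for r in range(0, rows, block):
        row = []
        for c in range(0, cols, block):
            cell = [fine[i][j] for i in range(r, r + block)
                               for j in range(c, c + block)]
            row.append(sum(cell) / len(cell))  # one value per 100 fine pixels
        coarse.append(row)
    return coarse

fine = [[(i + j) % 7 for j in range(20)] for i in range(20)]  # 20x20 fine grid
coarse = coarse_from_fine(fine)                               # 2x2 coarse grid
print(len(coarse), len(coarse[0]))  # 2 2
```

Going the other way, from a coarse measurement back to a plausible fine-scale distribution, is the harder inverse problem that motivates methods like Nested Fusion.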

In addition to progressing data science, Nested Fusion improves NASA scientists' workflow. Using the method, a single scientist can form an initial estimate of a sample’s mineral composition in a matter of hours. Before Nested Fusion, the same task required days of collaboration between teams of experts on each different instrument.

“I think one of the biggest lessons I have taken from this work is that it is valuable to always ground my ML and data science problems in actual, concrete use cases of our collaborators,” Wright said. 

“I learn from collaborators what parts of data analysis are important to them and the challenges they face. By understanding these issues, we can discover new ways of formalizing and framing problems in data science.”

Wright presented Nested Fusion at KDD 2024, held Aug. 25-29 in Barcelona, Spain. KDD is an official special interest group of the Association for Computing Machinery.

Nested Fusion won runner-up for best paper in the applied data science track, which comprised over 150 papers. Hundreds of other papers were presented in the conference’s research track, workshops, and tutorials.

Wright’s mentors, Scott Davidoff and Polo Chau, co-authored the Nested Fusion paper. Davidoff is a principal research scientist at the NASA Jet Propulsion Laboratory. Chau is a professor at the Georgia Tech School of Computational Science and Engineering (CSE).

“I was extremely happy that this work was recognized with the best paper runner-up award,” Wright said. “This kind of applied work can sometimes be hard to find the right academic home, so finding communities that appreciate this work is very encouraging.”

News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu

Anna Ivanova

Anna Ivanova, assistant professor in the School of Psychology at Georgia Tech.

Anna Ivanova, assistant professor in the School of Psychology, was recently named to the MIT Technology Review’s 35 Innovators Under 35 for 2024 for her work on language processing in the human brain and artificial intelligence applications.

A key pillar of Ivanova’s work involves the large language models (LLMs) commonly used in artificial intelligence tools like ChatGPT. By approaching the study of LLMs with cognitive science techniques, Ivanova hopes to bring us closer to more functional AI systems and a better understanding of the brain.

“I am happy that, these days, language and human cognition are topics that the world cares deeply about, thanks to recent developments in AI,” says Ivanova, who is also a member of Georgia Tech’s Neuro Next Initiative, a burgeoning interdisciplinary research hub for neuroscience, neurotechnology, and society. “Not only are these topics important, but they are also fun to study.” 

Learn more about Ivanova’s research.

News Contact

Audra Davidson
Communications Program Manager
Neuro Next Initiative

Graphic of a circuit board with a set of interconnects leading to a cloud

The Cloud Hub, a key initiative of the Institute for Data Engineering and Science (IDEaS) at Georgia Tech, recently concluded a successful Call for Proposals focused on advancing the field of Generative Artificial Intelligence (GenAI). This initiative, made possible by a generous gift from Microsoft, aims to push the boundaries of GenAI research by supporting projects that explore both foundational aspects and innovative applications of this cutting-edge technology.

Call for Proposals: A Gateway to Innovation

Launched in early 2024, the Call for Proposals invited researchers from across Georgia Tech to submit their innovative ideas on GenAI. The scope was broad, encouraging proposals that spanned foundational research, system advancements, and novel applications in various disciplines, including arts, sciences, business, and engineering. A special emphasis was placed on projects that addressed responsible and ethical AI use.

The response from the Georgia Tech research community was overwhelming, with 76 proposals submitted by teams eager to explore this transformative technology. After a rigorous selection process, nine projects were selected for support. Each awarded team will also benefit from access to Microsoft’s Azure cloud resources.

Recognizing Microsoft’s Generous Contribution

This successful initiative was made possible through the generous support of Microsoft, whose contribution of research resources has empowered Georgia Tech researchers to explore new frontiers in GenAI. By providing access to Azure’s advanced tools and services, Microsoft has played a pivotal role in accelerating GenAI research at Georgia Tech, enabling researchers to tackle some of the most pressing challenges and opportunities in this rapidly evolving field.

Looking Ahead: Pioneering the Future of GenAI

The awarded projects, set to commence in Fall 2024, represent a diverse array of research directions, from improving the capabilities of large language models to innovative applications in data management and interdisciplinary collaborations. These projects are expected to make significant contributions to the body of knowledge in GenAI and are poised to have a lasting impact on the industry and beyond.

IDEaS and the Cloud Hub are committed to supporting these teams as they embark on their research journeys. The outcomes of these projects will be shared through publications and highlighted on the Cloud Hub web portal, ensuring visibility for the groundbreaking work enabled by this initiative.

Congratulations to the Fall 2024 Winners

  • Annalisa Bracco | EAS "Modeling the Dispersal and Connectivity of Marine Larvae with GenAI Agents" [proposal co-funded with support from the Brook Byers Institute for Sustainable Systems]
  • Yunan Luo | CSE “Designing New and Diverse Proteins with Generative AI”
  • Kartik Goyal | IC “Generative AI for Greco-Roman Architectural Reconstruction: From Partial Unstructured Archaeological Descriptions to Structured Architectural Plans”
  • Victor Fung | CSE “Intelligent LLM Agents for Materials Design and Automated Experimentation”
  • Noura Howell | LMC “Applying Generative AI for STEM Education: Supporting AI literacy and community engagement with marginalized youth”
  • Neha Kumar | IC “Towards Responsible Integration of Generative AI in Creative Game Development”
  • Maureen Linden | Design “Best Practices in Generative AI Used in the Creation of Accessible Alternative Formats for People with Disabilities”
  • Surya Kalidindi | ME & MSE “Accelerating Materials Development Through Generative AI Based Dimensionality Expansion Techniques”
  • Tuo Zhao | ISyE “Adaptive and Robust Alignment of LLMs with Complex Rewards”

News Contact

Christa M. Ernst - Research Communications Program Manager

christa.ernst@research.gatech.edu

Montage of five portraits, L to R, T to B: Josiah Hester, Peng Chen, Yongsheng Chen, Rosemarie Santa González, and Joe Bozeman.

- Written by Benjamin Wright -

As Georgia Tech establishes itself as a national leader in AI research and education, some researchers on campus are putting AI to work toward sustainability goals in areas including climate change adaptation and mitigation, urban farming, food distribution, and life cycle assessment, while also focusing on ways to ensure AI is used ethically.

Josiah Hester, interim associate director for Community-Engaged Research in the Brook Byers Institute for Sustainable Systems (BBISS) and associate professor in the School of Interactive Computing, sees these projects as wins from both a research standpoint and for the local, national, and global communities they could affect.

“These faculty exemplify Georgia Tech's commitment to serving and partnering with communities in our research,” he says. “Sustainability is one of the most pressing issues of our time. AI gives us new tools to build more resilient communities, but the complexities and nuances in applying this emerging suite of technologies can only be solved by community members and researchers working closely together to bridge the gap. This approach to AI for sustainability strengthens the bonds between our university and our communities and makes lasting impacts due to community buy-in.”

Flood Monitoring and Carbon Storage

Peng Chen, assistant professor in the School of Computational Science and Engineering in the College of Computing, focuses on computational mathematics, data science, scientific machine learning, and parallel computing. Chen is combining these areas of expertise to develop algorithms to assist in practical applications such as flood monitoring and carbon dioxide capture and storage.

He is currently working on a National Science Foundation (NSF) project with colleagues in Georgia Tech’s School of City and Regional Planning and from the University of South Florida to develop flood models in the St. Petersburg, Florida area. As a low-lying state with more than 8,400 miles of coastline, Florida is one of the states most at risk from sea level rise and flooding caused by extreme weather events sparked by climate change.

Chen’s novel approach to flood monitoring takes existing high-resolution hydrological and hydrographical mapping and uses machine learning to incorporate real-time updates from social media users and existing traffic cameras to run rapid, low-cost simulations using deep neural networks. Current flood monitoring software is resource and time-intensive. Chen’s goal is to produce live modeling that can be used to warn residents and allocate emergency response resources as conditions change. That information would be available to the general public through a portal his team is working on.

“This project focuses on one particular community in Florida,” Chen says, “but we hope this methodology will be transferable to other locations and situations affected by climate change.”

In addition to the flood-monitoring project in Florida, Chen and his colleagues are developing new methods to improve the reliability and cost-effectiveness of storing carbon dioxide in underground rock formations. The process is plagued with uncertainty about the porosity of the bedrock, the optimal distribution of monitoring wells, and the rate at which carbon dioxide can be injected without over-pressurizing the bedrock, leading to collapse. The new simulations are fast, inexpensive, and minimize the risk of failure, which also decreases the cost of construction.

“Traditional high-fidelity simulation using supercomputers takes hours and lots of resources,” says Chen. “Now we can run these simulations in under one minute using AI models without sacrificing accuracy. Even when you factor in AI training costs, this is a huge savings in time and financial resources.”

Flood monitoring and carbon capture are passion projects for Chen, who sees an opportunity to use artificial intelligence to increase the pace and decrease the cost of problem-solving.

“I’m very excited about the possibility of solving grand challenges in the sustainability area with AI and machine learning models,” he says. “Engineering problems are full of uncertainty, but by using this technology, we can characterize the uncertainty in new ways and propagate it throughout our predictions to optimize designs and maximize performance.”

Urban Farming and Optimization

Yongsheng Chen works at the intersection of food, energy, and water. As the Bonnie W. and Charles W. Moorman Professor in the School of Civil and Environmental Engineering and director of the Nutrients, Energy, and Water Center for Agriculture Technology, Chen is focused on making urban agriculture technologically feasible, financially viable, and, most importantly, sustainable. To do that he’s leveraging AI to speed up the design process and optimize farming and harvesting operations.

Chen’s closed-loop hydroponic system uses anaerobically treated wastewater for fertilization and irrigation by extracting and repurposing nutrients as fertilizer before filtering the water through polymeric membranes with nano-scale pores. Advancing filtration and purification processes depends on finding the right membrane materials to selectively separate contaminants, including antibiotics and per- and polyfluoroalkyl substances (PFAS). Chen and his team are using AI and machine learning to guide membrane material selection and fabrication to make contaminant separation as efficient as possible. Similarly, AI and machine learning are assisting in developing carbon capture materials such as ionic liquids that can retain carbon dioxide generated during wastewater treatment and redirect it to hydroponics systems, boosting food productivity.

“A fundamental angle of our research is that we do not see municipal wastewater as waste,” explains Chen. “It is a resource we can treat and recover components from to supply irrigation, fertilizer, and biogas, all while reducing the amount of energy used in conventional wastewater treatment methods.”

In addition to aiding in materials development, which reduces design time and production costs, Chen is using machine learning to optimize the growing cycle of produce, maximizing nutritional value. His USDA-funded vertical farm uses autonomous robots to measure critical cultivation parameters and take pictures without destroying plants. This data helps determine optimum environmental conditions, fertilizer supply, and harvest timing, resulting in a faster-growing, optimally nutritious plant with less fertilizer waste and lower emissions.

Chen’s work has received considerable federal funding. As the Urban Resilience and Sustainability Thrust Leader within the NSF-funded AI Institute for Advances in Optimization (AI4OPT), he has received additional funding to foster international collaboration in digital agriculture with colleagues across the United States and in Japan, Australia, and India.

Optimizing Food Distribution

At the other end of the agricultural spectrum is postdoc Rosemarie Santa González in the H. Milton Stewart School of Industrial and Systems Engineering, who is conducting her research under the supervision of Professor Chelsea White and Professor Pascal Van Hentenryck, director of both Georgia Tech’s AI Hub and AI4OPT.

Santa González is working with the Wisconsin Food Hub Cooperative to help traditional farmers get their products into the hands of consumers as efficiently as possible to reduce hunger and food waste. Preventing food waste is a priority for both the EPA and USDA. Current estimates are that 30 to 40% of the food produced in the United States ends up in landfills. That wastes resources on the production end, in the form of land, water, and chemical use; wastes more resources in disposal; and releases greenhouse gases as the discarded food decays.

To tackle this problem, Santa González and the Wisconsin Food Hub are helping small-scale farmers access refrigeration facilities and distribution chains. As part of her research, she is helping to develop AI tools that can optimize the logistics of the small-scale farmer supply chain while also making local consumers in underserved areas aware of what’s available so food doesn’t end up in landfills.

“This solution has to be accessible,” she says. “Not just in the sense that the food is accessible, but that the tools we are providing to them are accessible. The end users have to understand the tools and be able to use them. It has to be sustainable as a resource.”

Making AI accessible to people in the community is a core goal of the NSF’s AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment (ICICLE), one of the partners involved with the project.

“A large segment of the population we are working with, which includes historically marginalized communities, has a negative reaction to AI. They think of machines taking over, or data being stolen. Our goal is to democratize AI in these decision-support tools as we work toward the UN Sustainable Development Goal of Zero Hunger. There is so much power in these tools to solve complex problems that have very real results. More people will be fed and less food will spoil before it gets to people’s homes.”

Santa González hopes the tools they are building can be packaged and customized for food co-ops everywhere.

AI and Ethics

Like Santa González, Joe Bozeman III is also focused on the ethical and sustainable deployment of AI and machine learning, especially among marginalized communities. The assistant professor in the School of Civil and Environmental Engineering is an industrial ecologist committed to fostering ethical climate change adaptation and mitigation strategies. His SEEEL Lab works to make sure researchers understand the consequences of decisions before they move from academic concepts to policy decisions, particularly those that rely on data sets involving people and communities.

“With the administration of big data, there is a human tendency to assume that more data means everything is being captured, but that's not necessarily true,” he cautions. “More data could mean we're just capturing more of the data that already exists, while new research shows that we’re not including information from marginalized communities that have historically not been brought into the decision-making process. That includes underrepresented minorities, rural populations, people with disabilities, and neurodivergent people who may not interface with data collection tools.”

Bozeman is concerned that overlooking marginalized communities in data sets will result in decisions that at best ignore them and at worst cause them direct harm.

“Our lab doesn't wait for the negative harms to occur before we start talking about them,” explains Bozeman, who holds a courtesy appointment in the School of Public Policy. “Our lab forecasts what those harms will be so decision-makers and engineers can develop technologies that consider these things.”

He focuses on urbanization, the food-energy-water nexus, and the circular economy. He has found that much of the research in those areas is conducted in a vacuum without consideration for human engagement and the impact it could have when implemented.

Bozeman is lobbying for built-in tools and safeguards to mitigate the potential for harm from researchers using AI without appropriate consideration. He already sees a disconnect between the academic world and the public. Bridging that trust gap will require ethical uses of AI.

“We have to start rigorously including their voices in our decision-making to begin gaining trust with the public again. And with that trust, we can all start moving toward sustainable development. If we don't do that, I don't care how good our engineering solutions are, we're going to miss the boat entirely on bringing along the majority of the population.”

BBISS Support

Moving forward, Hester is excited about the impact the Brook Byers Institute for Sustainable Systems can have on AI and sustainability research through a variety of support mechanisms.

“BBISS continues to invest in faculty development and training in community-driven research strategies, including the Community Engagement Faculty Fellows Program (with the Center for Sustainable Communities Research and Education), while empowering multidisciplinary teams to work together to solve grand engineering challenges with AI by supporting the AI+Climate Faculty Interest Group, as well as partnering with and providing administrative support for community-driven research projects.”

News Contact

Brent Verrill, Research Communications Program Manager, BBISS