Oct. 01, 2024
Even though artificial intelligence (AI) is not advanced enough to help the average person build weapons of mass destruction, federal agencies know it could be possible and are keeping pace with next generation technologies through rigorous research and strategic partnerships.
It is a delicate balance, but as the leader of the Department of Homeland Security’s (DHS) Countering Weapons of Mass Destruction Office (CWMD) told a room full of Georgia Tech students, faculty, and staff, there is no room for error.
“You have to be right all the time, the bad guys only have to be right once,” said Mary Ellen Callahan, assistant secretary for CWMD.
As a guest of John Tien, former DHS deputy secretary and professor of practice in the School of Cybersecurity and Privacy as well as the Sam Nunn School of International Affairs, Callahan was at Georgia Tech for three separate speaking engagements in late September.
"Assistant Secretary Callahan's contributions were remarkable in so many ways,” said Tien. “Most importantly, I love how she demonstrated to our students that the work in the fields of cybersecurity, privacy, and homeland security is an honorable, interesting, and substantive way to serve the greater good of keeping the American people safe and secure. As her former colleague at the U.S. Department of Homeland Security, I was proud to see her represent her CWMD team, DHS, and the Biden-Harris Administration in the way she did, with humility, personality, and leadership."
While AI-assisted WMDs are terrifying to think about, they are just a glimpse into what Callahan’s office handles on a regular basis. The assistant secretary walked her listeners through how CWMD works with federal and local law enforcement to identify and detect the signs of potential chemical, biological, radiological, or nuclear (CBRN) weapons.
“There's a whole cadre of professionals who spend every day preparing for the worst day in U.S. history,” said Callahan. “They are doing everything in their power to make sure that that does not happen.”
CWMD is also researching ways to integrate AI technologies into current surveillance systems to help identify and respond to threats faster. For example, an AI-backed bio-hazard surveillance system would allow analysts to characterize and contextualize the risk of potential bio-hazard threats in a timely manner.
Callahan’s office spearheaded a report exploring the advantages and risks of AI, “Reducing the Risks at the Intersection of Artificial Intelligence and Chemical, Biological, Radiological, and Nuclear Threats,” which was released to the public earlier this year.
The report was a multidisciplinary effort that was created in collaboration with the White House Office of Science and Technology Policy, Department of Energy, academic institutions, private industries, think tanks, and third-party evaluators.
During his introduction of the assistant secretary, SCP Chair Michael Bailey told those seated in the Coda Atrium that Callahan’s career is an incredible example of the interdisciplinary approach he hopes the school’s students and faculty can use as a roadmap.
“Important, impactful, and interdisciplinary research can be inspired by everyday problems,” he said. "We believe that building a secure future requires revolutionizing security education and being vigilant, and together, we can achieve this goal."
While on campus Tuesday, Callahan gave a special guest lecture to the students in “CS 3237: Human Dimension of Cybersecurity: People, Organizations, Societies” and “CS 4267: Critical Infrastructures.” Following the lecture, she gave a prepared speech to students, faculty, and staff.
Lastly, she participated in a panel discussion, moderated by Tien, with SCP J.Z. Liang Chair Peter Swire and Jerry Perullo, SCP professor of practice and former CISO of Intercontinental Exchange as well as the New York Stock Exchange.
News Contact
John Popham, Communications Officer II
School of Cybersecurity and Privacy | Georgia Institute of Technology
scp.cc.gatech.edu | in/jp-popham on LinkedIn
Oct. 01, 2024
The Institute for Robotics and Intelligent Machines (IRIM) has launched a new initiatives program, beginning with several winning proposals whose initiative leads will broaden the scope of IRIM’s research beyond its traditional core strengths. A major goal is to stimulate collaboration across areas not typically considered technical robotics, such as policy, education, and the humanities, and to open new inter-university and inter-agency collaboration routes. In addition to guiding their specific initiatives, these leads will serve as an informal internal advisory body for IRIM. Initiative leads will be announced annually, with existing leads considered for renewal based on their progress toward community-building and research goals. We hope that initiative leads will act as the “faculty face” of IRIM and communicate IRIM’s vision and activities to audiences both within and outside of Georgia Tech.
Meet 2024 IRIM Initiative Leads
Stephen Balakirsky; Regents' Researcher, Georgia Tech Research Institute & Panagiotis Tsiotras; David & Andrew Lewis Endowed Chair, Daniel Guggenheim School of Aerospace Engineering | Proximity Operations for Autonomous Servicing
Why It Matters: Proximity operations in space refer to the intricate and precise maneuvers and activities that spacecraft or satellites perform when they are in close proximity to each other, such as docking, rendezvous, or station-keeping. These operations are essential for a variety of space missions, including crewed spaceflights, satellite servicing, space exploration, and maintaining satellite constellations. While this is a very broad field, this initiative will concentrate on robotic servicing and associated challenges. In this context, robotic servicing is composed of proximity operations that are used for servicing and repairing satellites in space. In robotic servicing, robotic arms and tools perform maintenance tasks such as refueling, replacing components, or providing operation enhancements to extend a satellite's operational life or increase a satellite’s capabilities.
Our Approach: By forming an initiative in this important area, IRIM will open opportunities within the rapidly evolving space community. This will allow us to create proposals for organizations ranging from NASA and the Defense Advanced Research Projects Agency to the U.S. Air Force and U.S. Space Force. This will also position us to become national leaders in this area. While several universities have a robust robotics program and quite a few have a strong space engineering program, there are only a handful of academic units with the breadth of expertise to tackle this problem. Also, even fewer universities have the benefit of an experienced applied research partner, such as the Georgia Tech Research Institute (GTRI), to undertake large-scale demonstrations. Georgia Tech, having world-renowned programs in aerospace engineering and robotics, is uniquely positioned to be a leader in this field. In addition, creating a workshop in proximity operations for autonomous servicing will allow the GTRI and Georgia Tech space robotics communities to come together and better understand strengths and opportunities for improvement in our abilities.
Matthew Gombolay; Assistant Professor, Interactive Computing | Human-Robot Society in 2125: IRIM Leading the Way
Why It Matters: The coming robot “apocalypse” and foundation models captured the zeitgeist in 2023, with ChatGPT becoming a topic at the dinner table and the probability of various AI-driven technological doom scenarios becoming a hotly debated topic on social media. Futuristic visions of ubiquitous embodied artificial intelligence (AI) and robotics have become tangible. The proliferation and effectiveness of first-person view drones in the Russo-Ukrainian War, autonomous taxi services along with their failures, and inexpensive robots (e.g., Tesla’s Optimus and Unitree’s G1) have made it seem like children alive today may have robots embedded in their everyday lives. Yet there is a lack of trusted public leadership to bring us into this future and ensure that robots are developed and deployed with beneficence.
Our Approach: This proposal seeks to assemble a team of bright, savvy operators across academia, government, media, nonprofits, industry, and community stakeholders to develop a roadmap for how IRIM can be the most trusted voice guiding the public through the next 100 years of innovation in robotics. We propose to carry out the activities necessary to develop a roadmap for Robots in 2125: Altruistic and Integrated Human-Robot Society, and we aim to build partnerships to promulgate these outcomes across Georgia Tech’s campus and internationally.
Gregory Sawicki; Joseph Anderer Faculty Fellow, School of Mechanical Engineering & Aaron Young; Associate Professor, Mechanical Engineering | Wearable Robotic Augmentation for Human Resilience
Why It Matters: The field of robotics continues to evolve beyond rigid, precision-controlled machines for amplifying production on manufacturing assembly lines toward soft, wearable systems that can mediate the interface between human users and their natural and built environments. Recent advances in materials science have made it possible to construct flexible garments with embedded sensors and actuators (e.g., exosuits). In parallel, computers continue to get smaller and more powerful, and state-of-the-art machine learning algorithms can extract useful information from more extensive volumes of input data in real time. Now is the time to embed lean, powerful, sensorimotor elements alongside high-speed and efficient data processing systems in a continuous wearable device.
Our Approach: The mission of the Wearable Robotic Augmentation for Human Resilience (WeRoAHR) initiative is to merge modern advances in sensing, actuation, and computing technology to imagine and create adaptive, wearable augmentation technology that can improve human resilience and longevity across the physiological spectrum — from behavioral to cellular scales. The near-term effort (~2-3 years) will draw on Georgia Tech’s existing ecosystem of basic scientists and engineers to develop WeRoAHR systems that will focus on key targets of opportunity to increase human resilience (e.g., improved balance, dexterity, and stamina). These initial efforts will establish seeds for growth intended to help launch larger-scale, center-level efforts (>5 years).
Panagiotis Tsiotras; David & Andrew Lewis Endowed Chair, Daniel Guggenheim School of Aerospace Engineering & Sam Coogan; Demetrius T. Paris Junior Professor, School of Electrical and Computer Engineering | Initiative on Reliable, Safe, and Secure Autonomous Robotics
Why It Matters: The design and operation of reliable systems is primarily an integration issue that involves not only each component (software, hardware) being safe and reliable but also the whole system being reliable (including the human operator). The necessity for reliable autonomous systems (including AI agents) is more pronounced for “safety-critical” applications, where the result of a wrong decision can be catastrophic. This is quite a different landscape from many other autonomous decision systems (e.g., recommender systems) where a wrong or imprecise decision is inconsequential.
Our Approach: This new initiative will investigate the development of protocols, techniques, methodologies, theories, and practices for designing, building, and operating safe and reliable AI and autonomous engineering systems. It will also contribute toward promoting a culture of safety and accountability, grounded in rigorous objective metrics and methodologies, for designers and operators of AI, autonomous, and intelligent machines, allowing the widespread adoption of such systems in safety-critical areas with confidence. The proposed new initiative aims to establish Tech as the leader in the design of autonomous, reliable engineering robotic systems and to investigate the opportunity for a federally funded or industry-funded research center (such as National Science Foundation (NSF) Science and Technology Centers or Engineering Research Centers) in this area.
Colin Usher; Robotics Systems and Technology Branch Head, GTRI | Opportunities for Agricultural Robotics and New Collaborations
Why It Matters: The concepts for how robotics might be incorporated more broadly in agriculture vary widely, ranging from large-scale systems to teams of small systems operating in farms, enabling new possibilities. In addition, there are several application areas in agriculture, ranging from planting, weeding, crop scouting, and general growing through harvesting. Georgia Tech is not a land-grant university, making our ability to capture some of the opportunities in agricultural research more challenging. By partnering with a land-grant university such as the University of Georgia (UGA), we can leverage this relationship to go after these opportunities that, historically, were not available.
Our Approach: We plan to build collaborations first by leveraging relationships we have already formed within GTRI, Georgia Tech, and UGA. We will achieve this through a significant level of networking, supported by workshops and/or seminars with which to recruit faculty and form a roadmap for research within the respective universities. Our goal is to identify and pursue multiple opportunities for robotics-related research in both row-crop and animal-based agriculture. We believe that we have a strong opportunity, starting with formalizing a program with the partners we have worked with before, with the potential to improve and grow the research area by incorporating new faculty and staff with a unified vision of ubiquitous robotics systems in agriculture. We plan to achieve this through scheduled visits with interested faculty, attendance at relevant conferences, and ultimately hosting a workshop to formalize and define a research roadmap.
Ye Zhao; Assistant Professor, School of Mechanical Engineering | Safe, Social, & Scalable Human-Robot Teaming: Interaction, Synergy, & Augmentation
Why It Matters: Collaborative robots in unstructured environments such as construction and warehouse sites show great promise in working with humans on repetitive and dangerous tasks to improve efficiency and productivity. However, the pre-programmed, inflexible interaction behaviors of existing robots limit the naturalness and flexibility of the collaboration process. Therefore, it is crucial to improve the physical interaction behaviors of collaborative human-robot teams.
Our Approach: This proposal will advance the understanding of the bi-directional influence and interaction of human-robot teaming for complex physical activities in dynamic environments by developing new methods to predict worker intention via multi-modal wearable sensing, reason about complex human-robot-workspace interaction, and adaptively plan the robot’s motion considering both human teaming dynamics and physiological and cognitive states. More importantly, our team plans to prioritize efforts to (i) broaden the scope of IRIM’s autonomy research by incorporating psychology, cognitive science, and manufacturing research not typically considered technical robotics research areas; (ii) initiate new IRIM education, training, and outreach programs through collaboration with team members from various Georgia Tech educational and outreach programs (including Project ENGAGES, VIP, and CEISMC) as well as the AUCC, the world’s largest consortium of African American private institutions of higher education, which comprises Clark Atlanta University, Morehouse College, and Spelman College; and (iii) aim for large governmental grants such as DOD MURI, NSF NRT, and NSF Future of Work programs.
-Christa M. Ernst
Sep. 26, 2024
Is it a building or a street? How tall is the building? Are there powerlines nearby?
These are details autonomous flying vehicles would need to know to function safely. However, few aerial image datasets exist that can adequately train the computer vision algorithms that would pilot these vehicles.
That’s why Georgia Tech researchers created a new benchmark dataset of computer-generated aerial images.
Judy Hoffman, an assistant professor in Georgia Tech’s School of Interactive Computing, worked with students in her lab to create SKYSCENES. The dataset contains over 33,000 aerial images of cities curated from a computer simulation program.
Hoffman said sufficient training datasets could unlock the potential of autonomous flying vehicles. Constructing those datasets is a challenge the computer vision research community has been working for years to overcome.
“You can’t crowdsource it the same way you would standard internet images,” Hoffman said. “Trying to collect it manually would be very slow and expensive — akin to what the self-driving industry is doing driving around vehicles, but now you’re talking about drones flying around.
“We must fix those problems to have models that work reliably and safely for flying vehicles.”
Many existing datasets aren’t annotated well enough for algorithms to distinguish objects in an image. For example, an algorithm may not be able to tell the surface of a building from the surface of a street.
Working with Hoffman, Ph.D. student Sahil Khose tried a new approach — constructing a synthetic image data set from a ground-view, open-source simulator known as CARLA.
CARLA was originally designed to provide ground-view simulation for self-driving vehicles. It creates an open-world virtual reality that allows users to drive around in computer-generated cities.
Khose and his collaborators adjusted CARLA’s interface to support aerial views that mimic views one might get from unmanned aerial vehicles (UAVs).
What's the Forecast?
The team also created new virtual scenarios to mimic the real world by accounting for changes in weather, times of day, various altitudes, and population per city. The algorithms will struggle to recognize the objects in the frame consistently unless those details are incorporated into the training data.
“CARLA’s flexibility offers a wide range of environmental configurations, and we take several important considerations into account while curating SKYSCENES images from CARLA,” Khose said. “Those include strategies for obtaining diverse synthetic data, embedding real-world irregularities, avoiding correlated images, addressing skewed class representations, and reproducing precise viewpoints.”
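As an illustration of the kind of environmental configuration Khose describes, the sketch below uses CARLA’s Python API to set the weather and capture a single top-down image from a UAV-like altitude. It is a minimal sketch, not the SKYSCENES curation pipeline; the image size, weather values, altitude, and output path are illustrative assumptions.

```python
import carla  # CARLA's Python client library

# Minimal sketch (not the SKYSCENES pipeline): set one environmental
# configuration, then capture a nadir image from a UAV-like altitude.
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Illustrative weather settings; a benchmark would vary these systematically.
world.set_weather(carla.WeatherParameters(
    cloudiness=30.0, precipitation=0.0, sun_altitude_angle=45.0))

blueprint_library = world.get_blueprint_library()
cam_bp = blueprint_library.find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "1024")
cam_bp.set_attribute("image_size_y", "1024")

# Place the camera 100 m above the map origin, pointing straight down.
transform = carla.Transform(carla.Location(x=0.0, y=0.0, z=100.0),
                            carla.Rotation(pitch=-90.0))
camera = world.spawn_actor(cam_bp, transform)
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))
```

A second camera of type sensor.camera.semantic_segmentation placed at the same transform would supply the per-pixel labels a benchmark like this one needs.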
SKYSCENES is not the largest aerial image dataset to be released, but a paper co-authored by Khose shows that models trained on it outperform those trained on existing datasets.
Khose said models trained on this dataset exhibit strong generalization to real-world scenarios, and integrating real-world data further enhances their performance. The dataset also offers controlled variability, which is essential for a range of downstream tasks.
“This dataset drives advancements in multi-view learning, domain adaptation, and multimodal approaches, with major implications for applications like urban planning, disaster response, and autonomous drone navigation,” Khose said. “We hope to bridge the gap for synthetic-to-real adaptation and generalization for aerial images.”
Seeing the Whole Picture
For algorithms, generalization is the ability to perform tasks based on new data that expands beyond the specific examples on which they were trained.
“If you have 200 images, and you train a model on those images, they’ll do well at recognizing what you want them to recognize in that closed-world initial setting,” Hoffman said. “But if we were to take aerial vehicles and fly them around cities at various times of the day or in other weather conditions, they would start to fail.”
That’s why Khose designed algorithms to enhance the quality of the curated images.
“These images are captured from 100 meters above ground, which means the objects appear small and are challenging to recognize,” he said. “We focused on developing algorithms specifically designed to address this.”
Those algorithms elevate the ability of machine learning (ML) models to recognize small objects, improving their performance in navigating new environments.
“Our annotations help the models capture a more comprehensive understanding of the entire scene — where the roads are, where the buildings are, and know they are buildings and not just an obstacle in the way,” Hoffman said. “It gives a richer set of information when planning a flight.
“To work safely, many autonomous flight plans might require a map given to them beforehand. If you have successful vision systems that understand exactly what the obstacles in the real world are, you could navigate in previously unseen environments.”
For more information about Georgia Tech Research at ECCV 2024, click here.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
Sep. 24, 2024
A year ago, Ray Hung, a master’s student in computer science, assisted Professor Thad Starner in constructing an artificial intelligence (AI)-powered anti-plagiarism tool for Starner’s 900-student Intro to Artificial Intelligence (CS3600) course.
While the tool proved effective, Hung began considering ways to deter plagiarism and improve the education system.
Plagiarism can be prevalent in online exams, so Hung looked at oral examinations commonly used in European education systems and rooted in the Socratic method.
One of the advantages of oral assessments is they naturally hinder cheating. Consulting ChatGPT wouldn’t benefit a student unless the student memorizes the entire answer. Even then, follow-up questions would reveal a lack of genuine understanding.
Hung drew inspiration from the 2009 reboot of Star Trek, particularly the opening scene in which a young Spock provides oral answers to questions prompted by AI.
“I think we can do something similar,” Hung said. “Research has shown that oral assessment improves people’s material understanding, critical thinking, and communication skills.
“The problem is that it’s not scalable with human teachers. A professor may have 600 students. Even with teaching assistants, it’s not practical to conduct oral assessments. But with AI, it’s now possible.”
Hung developed The Socratic Mind with Starner, Scheller College of Business Assistant Professor Eunhee Sohn, and researchers from the Georgia Tech Center for 21st Century Universities (C21U).
The Socratic Mind is a scalable, AI-powered oral assessment platform leveraging Socratic questioning to challenge students to explain, justify, and defend their answers to showcase their understanding.
“We believe that if you truly understand something, you should be able to explain it,” Hung said.
“There is a deeper need for fostering genuine understanding and cultivating high-order thinking skills. I wanted to promote an education paradigm in which critical thinking, material understanding, and communication skills play integral roles and are at the forefront of our education.”
Hung entered his project into the Learning Engineering Tools Competition, one of the largest education technology competitions in the world. Hung and his collaborators were among five teams that won a Catalyst Award and received a $50,000 prize.
Benefits for Students
The Socratic Mind will be piloted in several classes this semester with about 2,000 students participating. One of those classes is the Intro to Computing (CS1301) class taught by College of Computing Professor David Joyner.
Hung said The Socratic Mind will be a resource students can use to prepare to defend their dissertation or to teach a class if they choose to pursue a Ph.D. Anyone struggling with public speaking or preparing for job interviews will find the tool helpful.
“Many users are interested in AI roleplay to practice real-world conversations,” he said. “The AI can roleplay a manager if you want to discuss a promotion. It can roleplay as an interviewer if you have a job interview. There are a lot of uses for oral assessment platforms where you can practice talking with an AI.
“I hope this tool helps students find their education more valuable and help them become better citizens, workers, entrepreneurs, or whoever they want to be in the future.”
Hung said the chatbot is not only conversational but also resistant to human persuasion because it follows the Socratic method of asking follow-up questions.
“ChatGPT and most other large language models are trained as helpful, harmless assistants,” he said. “If you argue with it and hold your position strong enough, you can coerce it to agree. We don’t want that.
“The Socratic Mind AI will follow up with you in real-time about what you just said, so it’s not a one-way conversation. It’s interactive and engaging and mimics human communication well.”
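As a rough illustration of that follow-up behavior (not The Socratic Mind’s actual implementation), the sketch below assumes an OpenAI-style chat API and a hypothetical model choice; the system prompt asks for one probing question per turn and tells the model not to concede to unsupported insistence.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; not The Socratic Mind's stack

SOCRATIC_PROMPT = (
    "You are an oral examiner using the Socratic method. After each student answer, "
    "ask exactly one probing follow-up question about what the student just said. "
    "Never agree with a claim simply because the student insists on it; acknowledge a "
    "point only when it is backed by a correct explanation."
)

client = OpenAI()
history = [{"role": "system", "content": SOCRATIC_PROMPT}]

def examine(student_answer: str) -> str:
    """Record the student's answer and return the examiner's follow-up question."""
    history.append({"role": "user", "content": student_answer})
    reply = client.chat.completions.create(model="gpt-4o-mini",  # hypothetical model choice
                                           messages=history)
    question = reply.choices[0].message.content
    history.append({"role": "assistant", "content": question})
    return question

print(examine("Quicksort always runs in O(n log n), no matter the input."))
```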
Educational Overhaul
C21U Director of Research in Education Innovation Jonna Lee and C21U Research Scientist Meryem Soylu will measure The Socratic Mind’s effectiveness during the pilot and determine its scalability.
“I thought it would be interesting to develop this further from a learning engineering perspective because it’s about systematic problem solving, and we want to create scalable solutions with technologies,” Lee said.
“I hope we can find actionable insights about how this AI tool can help transform classroom learning and assessment practices compared to traditional methods. We see the potential for personalized learning for various student populations, including non-traditional lifetime learners."
Hung said The Socratic Mind has the potential to revolutionize the U.S. education system depending on how the system chooses to incorporate AI.
Recognizing that the advancement of AI is likely an unstoppable trend, Hung advocates leveraging AI to enhance learning and unlock human potential rather than focusing on restrictions.
“We are in an era in which information is abundant, but wisdom is scarce,” Hung said. “Shallow and rapid interactions drive social media, for example. We think it’s a golden time to elevate people’s critical thinking and communication skills.”
For more information about The Socratic Mind and to try a demo, visit the project's website.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
Sep. 24, 2024
In a major step forward for deploying artificial intelligence (AI) in industry, Georgia Tech’s newly established AI hub, Tech AI, has partnered with the Center for Scientific Software Engineering (CSSE). This collaboration aims to bridge the gap between academia and industry by advancing scalable AI solutions in sectors such as energy, mobility, supply chains, healthcare, and services.
Building on the Foundation of Success
CSSE, founded in late 2021 and supported by Schmidt Sciences as part of its Virtual Institute for Scientific Software (VISS) initiative, was created to advance and support scientific research by applying modern software engineering practices, cutting-edge technologies, and tools to the development of scientific software within and outside Georgia Tech. CSSE is led by Alex Orso, professor and associate dean in the College of Computing, and Jeff Young, a principal scientist at Georgia Tech. The Center's team boasts over 60 years of combined experience, with engineers from companies such as Microsoft, Amazon, and various startups working under the supervision of the Center’s Head of Engineering, Dave Brownell. Their focus is on turning cutting-edge research into real-world products.
“Software engineering is about much more than just writing code,” Orso explained. “It’s also about specifying, designing, testing, deploying, and maintaining these systems.”
A Partnership to Support AI Research and Innovation
Through this collaboration, CSSE’s expertise will be integrated into Tech AI to create a software engineering division that can support AI engineering and also create new career opportunities for students and researchers.
Pascal Van Hentenryck, the A. Russell Chandler III Chair and professor in the H. Milton Stewart School of Industrial Engineering (ISyE) and director of both the NSF AI Research Institute for Advances in Optimization (AI4OPT) and Tech AI, highlighted the potential of this partnership.
“We are impressed with the technology and talent within CSSE,” Van Hentenryck said. “This partnership allows us to leverage an existing, highly skilled engineering team rather than building one from scratch. It’s a unique opportunity to build the engineering pillar of Tech AI and push our AI initiatives forward, moving from pilots to products.”
“Joining our forces and having a professional engineering resource within Tech AI will give Georgia Tech a great competitive advantage over other AI initiatives,” Orso added.
One of the first projects under this collaboration focuses on AI in energy, particularly in developing new-generation, AI-driven market-clearing optimization and real-time risk assessment. Plans are also in place to pursue several additional projects, including the creation of an AI-powered search engine assistant, demonstrating the center’s ability to tackle complex, real-world problems.
This partnership is positioned to make a significant impact on applied AI research and innovation at Georgia Tech. By integrating modern software engineering practices, the collaboration will address key challenges in AI deployment, scalability, and sustainability, and translate AI research innovations into products with real societal impact.
“This is a match made in heaven,” Orso noted, reflecting on the collaboration’s alignment with Georgia Tech’s strategic goals to advance technology and improve human lives. Van Hentenryck added that “the collaboration is as much about creating new technologies as it is about educating the next generation of engineers.”
Promoting Open Source at Tech AI
A crucial element supporting the new Tech AI and CSSE venture is Georgia Tech’s Open Source Program Office (OSPO), a joint effort with the College of Computing, PACE, and the Georgia Tech Library. As an important hub of open-source knowledge, OSPO will provide education, training, and guidance on best practices for using and contributing to open-source AI frameworks.
“A large majority of the software driving our current accomplishments in AI research and development is built on a long history of open-source software and data sets, including frameworks like PyTorch and models like Meta’s LLaMA,” said Jeff Young, principal investigator at OSPO. “Understanding how we can best use and contribute to open-source AI is critical to our future success with Tech AI, and OSPO is well-suited to provide guidance, training, and expertise around these open-source tools, frameworks, and pipelines.”
Looking Ahead
As the partnership between Tech AI and CSSE evolves, both groups anticipate a future in which interdisciplinary research drives innovation. By integrating AI with real-world software engineering, the collaboration promises to create new opportunities for students, researchers, and Georgia Tech as a whole.
With a strong foundation, a talented team, and a clear vision, Tech AI and CSSE together are set to break new ground in AI and scientific research, propelling Georgia Tech to the forefront of technological advancement in the AI field.
About the Center for Scientific Software Engineering (CSSE)
The CSSE at Georgia Tech, supported by an $11 million grant from Schmidt Sciences, is one of four scientific software engineering centers within the Virtual Institute for Scientific Software (VISS). Its mission is to develop scalable, reliable, open-source software for scientific research, ensuring maintainability and effectiveness. Learn more at https://ssecenter.cc.gatech.edu.
About Georgia Tech’s Open Source Program Office (OSPO)
Georgia Tech’s OSPO supports the development of open-source research software across campus. Funded by a Sloan Foundation grant, OSPO provides community guidelines, training, and outreach to promote a thriving open-source ecosystem. Learn more at https://ospo.cc.gatech.edu.
About Schmidt Sciences
Schmidt Sciences is a nonprofit organization founded in 2024 by Eric and Wendy Schmidt that works to advance science and technology that deepens human understanding of the natural world and develops solutions to global issues. The organization makes grants in four areas (AI and advanced computing, astrophysics and space, biosciences, and climate) and also supports researchers in a variety of disciplines through its science systems program. Learn more at https://www.schmidtsciences.org/.
About Tech AI
Tech AI is Georgia Tech’s AI hub, advancing AI through research, education, and responsible deployment. The hub focuses on AI solutions for real-world applications, preparing the next generation of AI leaders. Learn more at https://ai.gatech.edu.
News Contact
Breon Martin
AI Marketing Communications Manager
Sep. 19, 2024
A new algorithm tested on NASA’s Perseverance Rover on Mars may lead to better forecasting of hurricanes, wildfires, and other extreme weather events that impact millions globally.
Georgia Tech Ph.D. student Austin P. Wright is first author of a paper that introduces Nested Fusion. The new algorithm improves scientists’ ability to search for past signs of life on the Martian surface.
In addition to supporting NASA’s Mars 2020 mission, scientists from other fields who work with large, overlapping datasets can apply Nested Fusion’s methods to their own studies.
Wright presented Nested Fusion at the 2024 International Conference on Knowledge Discovery and Data Mining (KDD 2024), where it was a runner-up for the best paper award. KDD is widely considered the world's most prestigious conference for knowledge discovery and data mining research.
“Nested Fusion is really useful for researchers in many different domains, not just NASA scientists,” said Wright. “The method visualizes complex datasets that can be difficult to get an overall view of during the initial exploratory stages of analysis.”
Nested Fusion combines datasets with different resolutions to produce a single, high-resolution visual distribution. Using this method, NASA scientists can more easily analyze multiple datasets from various sources at the same time. This can lead to faster studies of Mars’ surface composition to find clues of previous life.
The algorithm demonstrates how data science impacts traditional scientific fields like chemistry, biology, and geology.
Even further, Wright is developing Nested Fusion applications to model shifting climate patterns, plant and animal life, and other concepts in the earth sciences. The same method can combine overlapping datasets from satellite imagery, biomarkers, and climate data.
“Users have extended Nested Fusion and similar algorithms toward earth science contexts, for which we have received very positive feedback,” said Wright, who studies machine learning (ML) at Georgia Tech.
“Cross-correlational analysis takes a long time to do and is not done in the initial stages of research when patterns appear and form new hypotheses. Nested Fusion enables people to discover these patterns much earlier.”
Wright is the data science and ML lead for PIXLISE, the software that NASA JPL scientists use to study data from the Mars Perseverance Rover.
Perseverance uses its Planetary Instrument for X-ray Lithochemistry (PIXL) to collect data on mineral composition of Mars’ surface. PIXL’s two main tools that accomplish this are its X-ray Fluorescence (XRF) Spectrometer and Multi-Context Camera (MCC).
When PIXL scans a target area, it creates two co-aligned datasets from the components. XRF collects a sample's fine-scale elemental composition. MCC produces images of a sample to gather visual and physical details like size and shape.
A single XRF spectrum corresponds to approximately 100 MCC imaging pixels for every scan point. Each tool’s unique resolution makes mapping between overlapping data layers challenging. However, Wright and his collaborators designed Nested Fusion to overcome this hurdle.
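To make the resolution mismatch concrete, the toy sketch below nests each block of fine-resolution imaging pixels under its parent scan point so that every pixel carries a joint feature vector. This is only an illustration of the alignment problem, not the Nested Fusion algorithm; the array shapes and channel counts are assumptions.

```python
import numpy as np

# Toy data standing in for one PIXL scan: coarse XRF scan points and a
# fine-resolution image covering the same area (roughly 100 pixels per point).
xrf = np.random.rand(32, 32, 8)     # 32x32 scan points, 8 elemental channels (made up)
mcc = np.random.rand(320, 320, 3)   # 10x10 image pixels under each scan point

scale = mcc.shape[0] // xrf.shape[0]      # pixels per scan point along each axis

# Broadcast each coarse measurement onto the fine grid it covers, then
# concatenate, giving one joint feature vector per fine-resolution pixel.
rows, cols = np.indices(mcc.shape[:2])
parent = xrf[rows // scale, cols // scale]        # shape (320, 320, 8)
fused = np.concatenate([mcc, parent], axis=-1)    # shape (320, 320, 11)
print(fused.shape)
```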
In addition to progressing data science, Nested Fusion improves NASA scientists' workflow. Using the method, a single scientist can form an initial estimate of a sample’s mineral composition in a matter of hours. Before Nested Fusion, the same task required days of collaboration between teams of experts on each different instrument.
“I think one of the biggest lessons I have taken from this work is that it is valuable to always ground my ML and data science problems in actual, concrete use cases of our collaborators,” Wright said.
“I learn from collaborators what parts of data analysis are important to them and the challenges they face. By understanding these issues, we can discover new ways of formalizing and framing problems in data science.”
Wright presented Nested Fusion at KDD 2024, held Aug. 25-29 in Barcelona, Spain. KDD is an official special interest group of the Association for Computing Machinery. The conference is one of the world’s leading forums for knowledge discovery and data mining research.
Nested Fusion was runner-up for the best paper in the applied data science track, which comprised over 150 papers. Hundreds of other papers were presented at the conference’s research track, workshops, and tutorials.
Wright’s mentors, Scott Davidoff and Polo Chau, co-authored the Nested Fusion paper. Davidoff is a principal research scientist at the NASA Jet Propulsion Laboratory. Chau is a professor at the Georgia Tech School of Computational Science and Engineering (CSE).
“I was extremely happy that this work was recognized with the best paper runner-up award,” Wright said. “This kind of applied work can sometimes be hard to find the right academic home, so finding communities that appreciate this work is very encouraging.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Sep. 19, 2024
Georgia Tech alum Tucker von Holten, who mastered Spanish in high school and minored in German at the Institute, was surprised when his mother struggled to understand basic Spanish after more than a year of playing Duolingo.
So, the 2020 computer science graduate asked his former German professors in the School of Modern Languages if they would take part in the ideation and pilot of a new language learning technology — one that would support classroom language instruction, rather than trying to take its place.
“Language acquisition goes far beyond vocabulary and grammar,” said Associate Professor of German Britta Kallin. “We want students to experience the culture lived through the language.”
Von Holten agreed, and so he got to work, founding the educational technology company Spirant AI. Over the next year, he stayed in contact with Kallin and other faculty members while creating the Spirant Assistant, a language learning tool suite that harnesses the strengths of generative artificial intelligence (AI) to support language students and instructors.
“With the Spirant Assistant, we wanted to provide students with more robust tools for language learning,” said von Holten. “We also wanted to create a ‘digital teaching assistant’ for language instructors, who rarely have the luxury of TAs in their classrooms.”
The Initial Pilot of the Spirant Assistant
In Spring 2024, the Spirant Assistant was ready for its first classroom pilot — and the School of Modern Languages was up to the challenge. Kallin and her colleague, Assistant Professor of German Hyoun-A Joo, used the Spirant Assistant in their upper-level German classes at Georgia Tech.
Kallin and Joo collected feedback from students and consulted with von Holten to share what was going well and suggest ways the assistant could be refined.
“Tucker is uniquely qualified to create a language instruction tool. This project is informed by his experience as an alum, as someone with computer expertise, and as a person who knows what it’s like to excel in learning a language,” said Kallin.
Kallin said that her students and Joo’s found the spring pilot to be a positive experience.
“Students in our spring classes liked the writing and reading tools a lot, as well as the feedback and suggestions they got from the Spirant Assistant,” said Kallin. “We suggested a few adjustments, and Tucker has implemented those for the fall.”
Kallin and John Lyon, professor, school chair, and Charles Smithgall Jr. Institute Chair in the School of Modern Languages, are piloting an updated version of the Spirant Assistant in their advanced German classes this semester.
“Any new tool creates a new way of learning for our students and new ways of teaching for instructors,” said Kallin. “We’re learning how to implement the Spirant Assistant in ways that best support our students and the course design. It will be great to see how it evolves, and how our teaching might make progress as we use it more.”
What Does the Spirant Assistant Do?
When building the Spirant Assistant, von Holten and his team consulted with Modern Languages faculty about their needs. The result is a suite of different tools for instructors and students, all of which leverage the power of generative AI.
The Assistant’s primary tool is “First Pass,” an AI first-draft support function that reviews a student essay, applies the instructor’s rubric (grammar criteria, for example), suggests corrections in English or in the target language, and suggests a grade, which the instructor can either approve or change.
“We were surprised to learn that instructors spend about half of their time grading,” said von Holten. “We wanted to create something that would help with that process.”
Von Holten is sensitive to potential concerns about the accuracy of a virtual AI grading assistant and emphasizes that First Pass is meant only to support the process of grading, not replace it. Instead, he likens the role of generative AI in the classroom to that of the calculator.
“The calculator fundamentally changed the way we teach and learn mathematics. Like the calculator, AI isn’t capable of human insight, reflection, or understanding. It’s a tool.”
The Spirant Assistant also offers support for students, including an AI reading tool and a “storyteller” function that creates stories in the language students are learning. Another of its capabilities is tailoring any given piece of writing to the student’s reading level, making authentic or literary texts more easily understandable for students.
“Instructors or students can use the storyteller to create a story that illustrates a set of vocabulary words, or that repeatedly uses a grammatical concept such as passive voice or subjunctive,” said von Holten.
“Most language learners have had the experience of trying to read text in a new language with three or four reference books open on the side — a dictionary, a verb conjugator, and a grammar reference,” he said. “So, we built the Spirant Assistant so that an instructor can input the text they want students to read, and the Assistant makes all of this reference information clickable, right in the text.”
What’s Next for the Spirant Assistant?
In its current iteration, the Spirant Assistant supports language learning and instruction in German and Spanish, with plans to expand its capabilities — and its influence — on the horizon.
“We’re very proud of our partnership with Georgia Tech. We’re dedicated to enhancing language education nationwide,” said von Holten. “We look forward to working with more universities to bring the Spirant Assistant’s transformative suite of tools to classrooms across the country.”
The School of Modern Languages is a unit of the Ivan Allen College of Liberal Arts.
To find out more about Georgia Tech’s policy on the responsible adoption and use of AI tools, you can visit the Office of Information Technology’s Artificial Intelligence page.
News Contact
Stephanie N. Kadel
Ivan Allen College of Liberal Arts
Sep. 13, 2024
Anna Ivanova, assistant professor in the School of Psychology, was recently named to the MIT Technology Review’s 35 Innovators Under 35 for 2024 for her work on language processing in the human brain and artificial intelligence applications.
A key pillar of Ivanova’s work involves large language models (LLMs) commonly used in artificial intelligence tools like ChatGPT. By approaching the study of LLMs with cognitive science techniques, Ivanova hopes to bring us closer to more functional AIs — and a better understanding of the brain.
“I am happy that, these days, language and human cognition are topics that the world cares deeply about, thanks to recent developments in AI,” says Ivanova, who is also a member of Georgia Tech’s Neuro Next Initiative, a burgeoning interdisciplinary research hub for neuroscience, neurotechnology, and society. “Not only are these topics important, but they are also fun to study.”
Learn more about Ivanova’s research.
News Contact
Audra Davidson
Communications Program Manager
Neuro Next Initiative
Aug. 30, 2024
The Cloud Hub, a key initiative of the Institute for Data Engineering and Science (IDEaS) at Georgia Tech, recently concluded a successful Call for Proposals focused on advancing the field of Generative Artificial Intelligence (GenAI). This initiative, made possible by generous gift funding from Microsoft, aims to push the boundaries of GenAI research by supporting projects that explore both foundational aspects and innovative applications of this cutting-edge technology.
Call for Proposals: A Gateway to Innovation
Launched in early 2024, the Call for Proposals invited researchers from across Georgia Tech to submit their innovative ideas on GenAI. The scope was broad, encouraging proposals that spanned foundational research, system advancements, and novel applications in various disciplines, including arts, sciences, business, and engineering. A special emphasis was placed on projects that addressed responsible and ethical AI use.
The response from the Georgia Tech research community was overwhelming, with 76 proposals submitted by teams eager to explore this transformative technology. After a rigorous selection process, eight projects were selected for support. Each awarded team will also benefit from access to Microsoft’s Azure cloud resources.
Recognizing Microsoft’s Generous Contribution
This successful initiative was made possible through the generous support of Microsoft, whose contribution of research resources has empowered Georgia Tech researchers to explore new frontiers in GenAI. By providing access to Azure’s advanced tools and services, Microsoft has played a pivotal role in accelerating GenAI research at Georgia Tech, enabling researchers to tackle some of the most pressing challenges and opportunities in this rapidly evolving field.
Looking Ahead: Pioneering the Future of GenAI
The awarded projects, set to commence in Fall 2024, represent a diverse array of research directions, from improving the capabilities of large language models to innovative applications in data management and interdisciplinary collaborations. These projects are expected to make significant contributions to the body of knowledge in GenAI and are poised to have a lasting impact on the industry and beyond.
IDEaS and the Cloud Hub are committed to supporting these teams as they embark on their research journeys. The outcomes of these projects will be shared through publications and highlighted on the Cloud Hub web portal, ensuring visibility for the groundbreaking work enabled by this initiative.
Congratulations to the Fall 2024 Winners
- Annalisa Bracco | EAS "Modeling the Dispersal and Connectivity of Marine Larvae with GenAI Agents" [proposal co-funded with support from the Brook Byers Institute for Sustainable Systems]
- Yunan Luo | CSE “Designing New and Diverse Proteins with Generative AI”
- Kartik Goyal | IC “Generative AI for Greco-Roman Architectural Reconstruction: From Partial Unstructured Archaeological Descriptions to Structured Architectural Plans”
- Victor Fung | CSE “Intelligent LLM Agents for Materials Design and Automated Experimentation”
- Noura Howell | LMC “Applying Generative AI for STEM Education: Supporting AI literacy and community engagement with marginalized youth”
- Neha Kumar | IC “Towards Responsible Integration of Generative AI in Creative Game Development”
- Maureen Linden | Design “Best Practices in Generative AI Used in the Creation of Accessible Alternative Formats for People with Disabilities”
- Surya Kalidindi | ME & MSE “Accelerating Materials Development Through Generative AI Based Dimensionality Expansion Techniques”
- Tuo Zhao | ISyE “Adaptive and Robust Alignment of LLMs with Complex Rewards”
News Contact
Christa M. Ernst - Research Communications Program Manager
christa.ernst@research.gatech.edu
Aug. 21, 2024
- Written by Benjamin Wright -
As Georgia Tech establishes itself as a national leader in AI research and education, some researchers on campus are putting AI to work to help meet sustainability goals in a range of areas including climate change adaptation and mitigation, urban farming, food distribution, and life cycle assessments while also focusing on ways to make sure AI is used ethically.
Josiah Hester, interim associate director for Community-Engaged Research in the Brook Byers Institute for Sustainable Systems (BBISS) and associate professor in the School of Interactive Computing, sees these projects as wins from both a research standpoint and for the local, national, and global communities they could affect.
“These faculty exemplify Georgia Tech's commitment to serving and partnering with communities in our research,” he says. “Sustainability is one of the most pressing issues of our time. AI gives us new tools to build more resilient communities, but the complexities and nuances in applying this emerging suite of technologies can only be solved by community members and researchers working closely together to bridge the gap. This approach to AI for sustainability strengthens the bonds between our university and our communities and makes lasting impacts due to community buy-in.”
Flood Monitoring and Carbon Storage
Peng Chen, assistant professor in the School of Computational Science and Engineering in the College of Computing, focuses on computational mathematics, data science, scientific machine learning, and parallel computing. Chen is combining these areas of expertise to develop algorithms to assist in practical applications such as flood monitoring and carbon dioxide capture and storage.
He is currently working on a National Science Foundation (NSF) project with colleagues in Georgia Tech’s School of City and Regional Planning and from the University of South Florida to develop flood models in the St. Petersburg, Florida area. As a low-lying state with more than 8,400 miles of coastline, Florida is one of the states most at risk from sea level rise and flooding caused by extreme weather events sparked by climate change.
Chen’s novel approach to flood monitoring takes existing high-resolution hydrological and hydrographic mapping and uses machine learning to incorporate real-time updates from social media users and existing traffic cameras, running rapid, low-cost simulations with deep neural networks. Current flood monitoring software is resource- and time-intensive. Chen’s goal is to produce live modeling that can be used to warn residents and allocate emergency response resources as conditions change. That information would be available to the general public through a portal his team is working on.
“This project focuses on one particular community in Florida,” Chen says, “but we hope this methodology will be transferable to other locations and situations affected by climate change.”
In addition to the flood-monitoring project in Florida, Chen and his colleagues are developing new methods to improve the reliability and cost-effectiveness of storing carbon dioxide in underground rock formations. The process is plagued with uncertainty about the porosity of the bedrock, the optimal distribution of monitoring wells, and the rate at which carbon dioxide can be injected without over-pressurizing the bedrock, leading to collapse. The new simulations are fast, inexpensive, and minimize the risk of failure, which also decreases the cost of construction.
“Traditional high-fidelity simulation using supercomputers takes hours and lots of resources,” says Chen. “Now we can run these simulations in under one minute using AI models without sacrificing accuracy. Even when you factor in AI training costs, this is a huge savings in time and financial resources.”
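The speedup Chen describes is characteristic of surrogate modeling: train a neural network on a limited set of expensive high-fidelity runs, then query the cheap network in place of the solver. The sketch below is a generic PyTorch illustration with made-up dimensions and random placeholder data, not the group’s actual models.

```python
import torch
import torch.nn as nn

# Generic surrogate: map simulation inputs (e.g., rainfall intensity, tide level,
# terrain descriptors) to simulation outputs (e.g., flood depth on a grid).
# All sizes and data below are placeholders for illustration.
surrogate = nn.Sequential(
    nn.Linear(16, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1024),  # e.g., flood depth on a flattened 32x32 grid
)

inputs = torch.randn(500, 16)     # 500 scenarios from expensive high-fidelity runs
outputs = torch.randn(500, 1024)  # corresponding simulated flood maps

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(surrogate(inputs), outputs)
    loss.backward()
    optimizer.step()

# Once trained, a new scenario is evaluated in milliseconds instead of hours.
prediction = surrogate(torch.randn(1, 16))
```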
Flood monitoring and carbon capture are passion projects for Chen, who sees an opportunity to use artificial intelligence to increase the pace and decrease the cost of problem-solving.
“I’m very excited about the possibility of solving grand challenges in the sustainability area with AI and machine learning models,” he says. “Engineering problems are full of uncertainty, but by using this technology, we can characterize the uncertainty in new ways and propagate it throughout our predictions to optimize designs and maximize performance.”
Urban Farming and Optimization
Yongsheng Chen works at the intersection of food, energy, and water. As the Bonnie W. and Charles W. Moorman Professor in the School of Civil and Environmental Engineering and director of the Nutrients, Energy, and Water Center for Agriculture Technology, Chen is focused on making urban agriculture technologically feasible, financially viable, and, most importantly, sustainable. To do that he’s leveraging AI to speed up the design process and optimize farming and harvesting operations.
Chen’s closed-loop hydroponic system uses anaerobically treated wastewater for fertilization and irrigation by extracting and repurposing nutrients as fertilizer before filtering the water through polymeric membranes with nano-scale pores. Advancing filtration and purification processes depends on finding the right membrane materials to selectively separate contaminants, including antibiotics and per- and polyfluoroalkyl substances (PFAS). Chen and his team are using AI and machine learning to guide membrane material selection and fabrication to make contaminant separation as efficient as possible. Similarly, AI and machine learning are assisting in developing carbon capture materials such as ionic liquids that can retain carbon dioxide generated during wastewater treatment and redirect it to hydroponics systems, boosting food productivity.
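In broad strokes, machine-learning-guided materials selection means fitting a property-prediction model on membranes that have already been characterized and using it to rank untested candidates. The sketch below is a generic scikit-learn illustration with synthetic descriptors and placeholder measurements, not the team’s actual workflow or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: descriptor vectors for characterized membranes
# (e.g., pore size, surface charge, hydrophobicity) and measured PFAS rejection.
X_known = rng.random((200, 6))
y_known = rng.random(200)  # placeholder rejection fractions

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# Rank untested candidate formulations by predicted rejection.
X_candidates = rng.random((50, 6))
predicted = model.predict(X_candidates)
top_five = np.argsort(predicted)[::-1][:5]
print("most promising candidates:", top_five, "predicted rejection:", predicted[top_five])
```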
“A fundamental angle of our research is that we do not see municipal wastewater as waste,” explains Chen. “It is a resource we can treat and recover components from to supply irrigation, fertilizer, and biogas, all while reducing the amount of energy used in conventional wastewater treatment methods.”
In addition to aiding in materials development, which reduces design time and production costs, Chen is using machine learning to optimize the growing cycle of produce, maximizing nutritional value. His USDA-funded vertical farm uses autonomous robots to measure critical cultivation parameters and take pictures without destroying plants. This data helps determine optimum environmental conditions, fertilizer supply, and harvest timing, resulting in a faster-growing, optimally nutritious plant with less fertilizer waste and lower emissions.
Chen’s work has received considerable federal funding. As the Urban Resilience and Sustainability Thrust Leader within the NSF-funded AI Institute for Advances in Optimization (AI4OPT), he has received additional funding to foster international collaboration in digital agriculture with colleagues across the United States and in Japan, Australia, and India.
Optimizing Food Distribution
At the other end of the agricultural spectrum is postdoc Rosemarie Santa González in the H. Milton Stewart School of Industrial and Systems Engineering, who is conducting her research under the supervision of Professor Chelsea White and Professor Pascal Van Hentenryck, the director of Georgia Tech’s AI Hub as well as the director of AI4OPT.
Santa González is working with the Wisconsin Food Hub Cooperative to help traditional farmers get their products into the hands of consumers as efficiently as possible to reduce hunger and food waste. Preventing food waste is a priority for both the EPA and USDA. Current estimates are that 30 to 40% of the food produced in the United States ends up in landfills, wasting the land, water, and chemicals used in production as well as the resources needed for disposal, not to mention the greenhouse gases released as wasted food decays.
To tackle this problem, Santa González and the Wisconsin Food Hub are helping small-scale farmers access refrigeration facilities and distribution chains. As part of her research, she is helping to develop AI tools that can optimize the logistics of the small-scale farmer supply chain while also making local consumers in underserved areas aware of what’s available so food doesn’t end up in landfills.
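One common building block for this kind of decision support is a cost-minimizing assignment of farm pickups to refrigeration or delivery slots. The sketch below is a toy illustration using SciPy's linear_sum_assignment with an invented cost matrix, not the project’s actual tools.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented costs (e.g., travel time plus spoilage risk) for assigning each
# farm's pickup to one cold-storage or delivery slot.
cost = np.array([
    [4.0, 9.0, 3.5],   # farm A -> slot 1, 2, 3
    [7.0, 2.5, 6.0],   # farm B
    [5.5, 8.0, 1.0],   # farm C
])

farms, slots = linear_sum_assignment(cost)  # minimum-cost one-to-one assignment
for f, s in zip(farms, slots):
    print(f"farm {f} -> slot {s} (cost {cost[f, s]})")
print("total cost:", cost[farms, slots].sum())
```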
“This solution has to be accessible,” she says. “Not just in the sense that the food is accessible, but that the tools we are providing to them are accessible. The end users have to understand the tools and be able to use them. It has to be sustainable as a resource.”
Making AI accessible to people in the community is a core goal of the NSF’s AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment (ICICLE), one of the partners involved with the project.
“A large segment of the population we are working with, which includes historically marginalized communities, has a negative reaction to AI. They think of machines taking over, or data being stolen. Our goal is to democratize AI in these decision-support tools as we work toward the UN Sustainable Development Goal of Zero Hunger. There is so much power in these tools to solve complex problems that have very real results. More people will be fed and less food will spoil before it gets to people’s homes.”
Santa González hopes the tools they are building can be packaged and customized for food co-ops everywhere.
AI and Ethics
Like Santa González, Joe Bozeman III is also focused on the ethical and sustainable deployment of AI and machine learning, especially among marginalized communities. The assistant professor in the School of Civil and Environmental Engineering is an industrial ecologist committed to fostering ethical climate change adaptation and mitigation strategies. His SEEEL Lab works to make sure researchers understand the consequences of decisions before they move from academic concepts to policy decisions, particularly those that rely on data sets involving people and communities.
“With the administration of big data, there is a human tendency to assume that more data means everything is being captured, but that's not necessarily true,” he cautions. “More data could mean we're just capturing more of the data that already exists, while new research shows that we’re not including information from marginalized communities that have historically not been brought into the decision-making process. That includes underrepresented minorities, rural populations, people with disabilities, and neurodivergent people who may not interface with data collection tools.”
Bozeman is concerned that overlooking marginalized communities in data sets will result in decisions that at best ignore them and at worst cause them direct harm.
“Our lab doesn't wait for the negative harms to occur before we start talking about them,” explains Bozeman, who holds a courtesy appointment in the School of Public Policy. “Our lab forecasts what those harms will be so decision-makers and engineers can develop technologies that consider these things.”
He focuses on urbanization, the food-energy-water nexus, and the circular economy. He has found that much of the research in those areas is conducted in a vacuum without consideration for human engagement and the impact it could have when implemented.
Bozeman is lobbying for built-in tools and safeguards to mitigate the potential for harm from researchers using AI without appropriate consideration. He already sees a disconnect between the academic world and the public. Bridging that trust gap will require ethical uses of AI.
“We have to start rigorously including their voices in our decision-making to begin gaining trust with the public again. And with that trust, we can all start moving toward sustainable development. If we don't do that, I don't care how good our engineering solutions are, we're going to miss the boat entirely on bringing along the majority of the population.”
BBISS Support
Moving forward, Hester is excited about the impact the Brook Byers Institute for Sustainable Systems can have on AI and sustainability research through a variety of support mechanisms.
“BBISS continues to invest in faculty development and training in community-driven research strategies, including the Community Engagement Faculty Fellows Program (with the Center for Sustainable Communities Research and Education), while empowering multidisciplinary teams to work together to solve grand engineering challenges with AI by supporting the AI+Climate Faculty Interest Group, as well as partnering with and providing administrative support for community-driven research projects.”
News Contact
Brent Verrill, Research Communications Program Manager, BBISS