Sep. 19, 2024
A new algorithm tested on data from NASA’s Perseverance rover on Mars may lead to better forecasting of hurricanes, wildfires, and other extreme weather events that affect millions of people globally.
Georgia Tech Ph.D. student Austin P. Wright is first author of a paper that introduces Nested Fusion. The new algorithm improves scientists’ ability to search for past signs of life on the Martian surface.
In addition to supporting NASA’s Mars 2020 mission, Nested Fusion offers methods that scientists in other fields who work with large, overlapping datasets can apply to their own studies.
Wright presented Nested Fusion at the 2024 International Conference on Knowledge Discovery and Data Mining (KDD 2024) where it was a runner-up for the best paper award. KDD is widely considered the world's most prestigious conference for knowledge discovery and data mining research.
“Nested Fusion is really useful for researchers in many different domains, not just NASA scientists,” said Wright. “The method visualizes complex datasets that can be difficult to get an overall view of during the initial exploratory stages of analysis.”
Nested Fusion combines datasets with different resolutions to produce a single, high-resolution visual distribution. Using this method, NASA scientists can more easily analyze multiple datasets from various sources at the same time. This can lead to faster studies of Mars’ surface composition to find clues of previous life.
The algorithm demonstrates how data science impacts traditional scientific fields like chemistry, biology, and geology.
Wright is also developing Nested Fusion applications to model shifting climate patterns, plant and animal life, and other earth science phenomena. The same method can combine overlapping datasets from satellite imagery, biomarkers, and climate data.
“Users have extended Nested Fusion and similar algorithms to earth science contexts, for which we have received very positive feedback,” said Wright, who studies machine learning (ML) at Georgia Tech.
“Cross-correlational analysis takes a long time to do and is not done in the initial stages of research, when patterns emerge and new hypotheses form. Nested Fusion enables people to discover these patterns much earlier.”
Wright is the data science and ML lead for PIXLISE, the software that NASA JPL scientists use to study data from the Mars Perseverance Rover.
Perseverance uses its Planetary Instrument for X-ray Lithochemistry (PIXL) to collect data on mineral composition of Mars’ surface. PIXL’s two main tools that accomplish this are its X-ray Fluorescence (XRF) Spectrometer and Multi-Context Camera (MCC).
When PIXL scans a target area, it creates two co-aligned datasets from the components. XRF collects a sample's fine-scale elemental composition. MCC produces images of a sample to gather visual and physical details like size and shape.
A single XRF spectrum corresponds to approximately 100 MCC imaging pixels for every scan point. Each tool’s unique resolution makes mapping between overlapping data layers challenging. However, Wright and his collaborators designed Nested Fusion to overcome this hurdle.
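Nested Fusion’s actual mapping is more sophisticated, but the underlying alignment problem is easy to sketch. In this toy illustration (array sizes and the averaging step are invented for clarity, not taken from the paper), each coarse scan point is paired with a summary of its roughly 100 co-aligned image pixels:

```python
import numpy as np

# Hypothetical illustration: co-registering a coarse measurement grid
# (one XRF spectrum per scan point) with a fine image grid
# (~100 MCC pixels per scan point, here a 10x10 footprint).
rng = np.random.default_rng(0)
mcc = rng.random((40, 40))   # fine-grained image values (4x4 scan points)
footprint = 10               # 10x10 pixels per scan point

# Aggregate the fine grid down to the coarse grid so each XRF spectrum
# can be paired with summary statistics of its co-aligned pixels.
coarse = mcc.reshape(4, footprint, 4, footprint).mean(axis=(1, 3))
print(coarse.shape)  # (4, 4): one summary value per scan point
```

Real pipelines would replace the simple mean with richer statistics and account for imperfect alignment between the two instruments.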
In addition to progressing data science, Nested Fusion improves NASA scientists' workflow. Using the method, a single scientist can form an initial estimate of a sample’s mineral composition in a matter of hours. Before Nested Fusion, the same task required days of collaboration between teams of experts on each different instrument.
“I think one of the biggest lessons I have taken from this work is that it is valuable to always ground my ML and data science problems in actual, concrete use cases of our collaborators,” Wright said.
“I learn from collaborators what parts of data analysis are important to them and the challenges they face. By understanding these issues, we can discover new ways of formalizing and framing problems in data science.”
Wright presented Nested Fusion at KDD 2024, held Aug. 25-29 in Barcelona, Spain. KDD is an official special interest group of the Association for Computing Machinery.
Nested Fusion was runner-up for the best paper award in the applied data science track, which comprised more than 150 papers. Hundreds of other papers were presented in the conference’s research track, workshops, and tutorials.
Wright’s mentors, Scott Davidoff and Polo Chau, co-authored the Nested Fusion paper. Davidoff is a principal research scientist at the NASA Jet Propulsion Laboratory. Chau is a professor at the Georgia Tech School of Computational Science and Engineering (CSE).
“I was extremely happy that this work was recognized with the best paper runner-up award,” Wright said. “It can sometimes be hard for this kind of applied work to find the right academic home, so finding communities that appreciate it is very encouraging.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Aug. 30, 2024
The Cloud Hub, a key initiative of the Institute for Data Engineering and Science (IDEaS) at Georgia Tech, recently concluded a successful Call for Proposals focused on advancing the field of Generative Artificial Intelligence (GenAI). This initiative, made possible by a generous gift funding from Microsoft, aims to push the boundaries of GenAI research by supporting projects that explore both foundational aspects and innovative applications of this cutting-edge technology.
Call for Proposals: A Gateway to Innovation
Launched in early 2024, the Call for Proposals invited researchers from across Georgia Tech to submit their innovative ideas on GenAI. The scope was broad, encouraging proposals that spanned foundational research, system advancements, and novel applications in various disciplines, including arts, sciences, business, and engineering. A special emphasis was placed on projects that addressed responsible and ethical AI use.
The response from the Georgia Tech research community was overwhelming, with 76 proposals submitted by teams eager to explore this transformative technology. After a rigorous selection process, eight projects were selected for support. Each awarded team will also benefit from access to Microsoft’s Azure cloud resources.
Recognizing Microsoft’s Generous Contribution
This successful initiative was made possible through the generous support of Microsoft, whose contribution of research resources has empowered Georgia Tech researchers to explore new frontiers in GenAI. By providing access to Azure’s advanced tools and services, Microsoft has played a pivotal role in accelerating GenAI research at Georgia Tech, enabling researchers to tackle some of the most pressing challenges and opportunities in this rapidly evolving field.
Looking Ahead: Pioneering the Future of GenAI
The awarded projects, set to commence in Fall 2024, represent a diverse array of research directions, from improving the capabilities of large language models to innovative applications in data management and interdisciplinary collaborations. These projects are expected to make significant contributions to the body of knowledge in GenAI and are poised to have a lasting impact on the industry and beyond.
IDEaS and the Cloud Hub are committed to supporting these teams as they embark on their research journeys. The outcomes of these projects will be shared through publications and highlighted on the Cloud Hub web portal, ensuring visibility for the groundbreaking work enabled by this initiative.
Congratulations to the Fall 2024 Winners
- Annalisa Bracco | EAS "Modeling the Dispersal and Connectivity of Marine Larvae with GenAI Agents" [proposal co-funded with support from the Brook Byers Institute for Sustainable Systems]
- Yunan Luo | CSE “Designing New and Diverse Proteins with Generative AI”
- Kartik Goyal | IC “Generative AI for Greco-Roman Architectural Reconstruction: From Partial Unstructured Archaeological Descriptions to Structured Architectural Plans”
- Victor Fung | CSE “Intelligent LLM Agents for Materials Design and Automated Experimentation”
- Noura Howell | LMC “Applying Generative AI for STEM Education: Supporting AI literacy and community engagement with marginalized youth”
- Neha Kumar | IC “Towards Responsible Integration of Generative AI in Creative Game Development”
- Maureen Linden | Design “Best Practices in Generative AI Used in the Creation of Accessible Alternative Formats for People with Disabilities”
- Surya Kalidindi | ME & MSE “Accelerating Materials Development Through Generative AI Based Dimensionality Expansion Techniques”
- Tuo Zhao | ISyE “Adaptive and Robust Alignment of LLMs with Complex Rewards”
News Contact
Christa M. Ernst - Research Communications Program Manager
christa.ernst@research.gatech.edu
Jun. 28, 2024
From weather prediction to drug discovery, math powers the models used in computer simulations. To help these vital tools with their calculations, global experts recently met at Georgia Tech to share ways to make math easier for computers.
Tech hosted the 2024 International Conference on Preconditioning Techniques for Scientific and Industrial Applications (Precond 24), June 10-12.
Preconditioning accelerates matrix computations, a kind of math used in most large-scale models. These computer models become faster, more efficient, and more accessible with help from preconditioned equations.
“Preconditioning transforms complex numerical problems into more easily solved ones,” said Edmond Chow, a professor at Georgia Tech and co-chair of Precond 24’s local organization and program committees.
“The new problem wields a better condition number, giving rise to the name preconditioning.”
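As a minimal numerical illustration of that idea (a toy matrix, not an example from the conference), even the simplest Jacobi-style diagonal scaling can shrink a matrix’s condition number dramatically:

```python
import numpy as np

# Toy sketch: symmetric diagonal scaling, the simplest Jacobi-style
# preconditioner, transforms a badly scaled symmetric matrix into one
# with a far better condition number.
A = np.array([[1000.0, 1.0],
              [1.0, 0.01]])            # symmetric, badly scaled
d = 1.0 / np.sqrt(np.diag(A))          # D^{-1/2}
A_pre = d[:, None] * A * d[None, :]    # D^{-1/2} A D^{-1/2}

print(np.linalg.cond(A))      # ~1.1e5: ill-conditioned
print(np.linalg.cond(A_pre))  # ~1.9: well-conditioned
```

Iterative solvers such as conjugate gradient converge far faster on the preconditioned system, which is why preconditioning matters for large-scale simulation.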
Researchers from 13 countries presented their work through 20 mini-symposia and seven invited talks at Precond 24. Their work showcased the practicality of preconditioners.
Vandana Dwarka, an assistant professor at Delft University of Technology, shared newly developed preconditioners for electromagnetic simulations. This technology can be used in further applications ranging from imaging to designing nuclear fusion devices.
Xiaozhe Hu presented a physics-based preconditioner that simulates biophysical processes in the brain, such as blood flow and metabolic waste clearance. Hu brought this research from Tufts University, where he is an associate professor.
Tucker Hartland, a postdoctoral researcher at Lawrence Livermore National Laboratory, discussed preconditioning in contact mechanics. This work improves the modeling of interactions between physical objects that touch each other. Many fields stand to benefit from Hartland’s study, including mechanical engineering, civil engineering, and materials science.
A unique aspect of this year’s conference was an emphasis on machine learning (ML). Between a panel discussion, tutorial, and several talks, experts detailed how to employ ML for preconditioning and how preconditioning can train ML models.
Precond 24 invited seven speakers from institutions around the world to share their research with conference attendees. The presenters were:
- Monica Dessole, CERN, Switzerland
- Selime Gurol, CERFACS, France
- Alexander Heinlein, Delft University of Technology, Netherlands
- Rui Peng Li, Lawrence Livermore National Laboratory, USA
- Will Pazner, Portland State University, USA
- Tyrone Rees, Science and Technology Facilities Council, UK
- Jacob B. Schroder, University of New Mexico, USA
Along with hosting Precond 24, several Georgia Tech researchers participated in the conference through presentations.
Ph.D. students Hua Huang and Shikhar Shah each presented a paper on the conference’s first day. Alumnus Srinivas Eswar (Ph.D. CS 2022) returned to Atlanta to share research from his current role at Argonne National Laboratory. Chow chaired the ML panel and a symposium on preconditioners for matrices.
“It was an engaging and rewarding experience meeting so many people from this very tight-knit community,” said Shah, who studies computational science and engineering (CSE). “Getting to see talks close to my research provided me with a lot of inspiration and direction for future work.”
Precond 2024 was the thirteenth meeting of the conference, which occurs every two years.
The conference returned to Atlanta this year for the first time since 2005. Atlanta joins Minneapolis as one of only two cities in the world to host Precond more than once. Precond 24 marked the sixth time the conference met in the U.S.
Georgia Tech and Emory University’s Department of Mathematics organized and sponsored Precond 24. The U.S. Department of Energy Office of Science co-sponsored the conference with Tech and Emory.
Georgia Tech entities swarmed together in support of Precond 24. The Office of the Associate Vice President for Research Operations and Infrastructure, College of Computing, and School of CSE co-sponsored the conference.
“The enthusiasm at the conference has been very gratifying. So many people organized sessions at the conference and contributed to the very strong attendance,” Chow said.
“This is a testament to the continued importance of preconditioning and related numerical methods in a rapidly changing technological world.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
May. 15, 2024
Georgia Tech researchers say non-English speakers shouldn’t rely on chatbots like ChatGPT to provide valuable healthcare advice.
A team of researchers from the College of Computing at Georgia Tech has developed a framework for assessing the capabilities of large language models (LLMs).
Ph.D. students Mohit Chandra and Yiqiao (Ahren) Jin are the co-lead authors of the paper Better to Ask in English: Cross-Lingual Evaluation of Large Language Models for Healthcare Queries.
Their paper’s findings reveal a gap in LLMs’ ability to answer health-related questions across languages. Chandra and Jin point out the limitations of LLMs for users and developers but also highlight their potential.
Their XLingEval framework cautions non-English speakers against using chatbots as alternatives to doctors for medical advice. However, models can improve by deepening the data pool with multilingual source material, such as their proposed XLingHealth benchmark.
“For users, our research supports what ChatGPT’s website already states: chatbots make a lot of mistakes, so we should not rely on them for critical decision-making or for information that requires high accuracy,” Jin said.
“Since we observed this language disparity in their performance, LLM developers should focus on improving accuracy, correctness, consistency, and reliability in other languages,” Jin said.
Using XLingEval, the researchers found chatbots are less accurate in Spanish, Chinese, and Hindi compared to English. By focusing on correctness, consistency, and verifiability, they discovered:
- Correctness decreased by 18% when the same questions were asked in Spanish, Chinese, and Hindi.
- Answers in non-English were 29% less consistent than their English counterparts.
- Overall, non-English responses were 13% less verifiable.
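XLingEval’s actual metrics are defined in the paper; as a purely illustrative stand-in (the token-set similarity used here is not the authors’ measure), consistency across a model’s repeated answers can be approximated by mean pairwise similarity:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two answers."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def consistency(answers: list[str]) -> float:
    """Mean pairwise similarity across repeated model answers."""
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical repeated answers to the same health question.
english = ["take the tablet with food", "take the tablet with food",
           "take this tablet with food"]
print(round(consistency(english), 2))  # 0.78
```

A model that gives near-identical answers on repeated queries scores close to 1.0; the study found non-English answers scored markedly lower than English ones.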
XLingHealth contains question-answer pairs that chatbots can reference, which the group hopes will spark improvement within LLMs.
The HealthQA dataset uses specialized healthcare articles from the popular healthcare website Patient. It includes 1,134 health-related question-answer pairs as excerpts from original articles.
LiveQA is a second dataset containing 246 question-answer pairs constructed from frequently asked question (FAQ) platforms associated with the U.S. National Institutes of Health (NIH).
For drug-related questions, the group built a MedicationQA component. This dataset contains 690 questions extracted from anonymous consumer queries submitted to MedlinePlus. The answers are sourced from medical references, such as MedlinePlus and DailyMed.
In their tests, the researchers asked over 2,000 medical-related questions to ChatGPT-3.5 and MedAlpaca. MedAlpaca is a healthcare question-answer chatbot trained on medical literature. Yet, more than 67% of its responses to non-English questions were irrelevant or contradictory.
“We see far worse performance in the case of MedAlpaca than ChatGPT,” Chandra said.
“The majority of the data for MedAlpaca is in English, so it struggled to answer queries in non-English languages. GPT also struggled, but it performed much better than MedAlpaca because it had some sort of training data in other languages.”
Ph.D. student Gaurav Verma and postdoctoral researcher Yibo Hu co-authored the paper.
Jin and Verma study under Srijan Kumar, an assistant professor in the School of Computational Science and Engineering, and Hu is a postdoc in Kumar’s lab. Chandra is advised by Munmun De Choudhury, an associate professor in the School of Interactive Computing.
The team will present their paper at The Web Conference, occurring May 13-17 in Singapore. The annual conference focuses on the future direction of the internet. The group’s presentation is a fitting match for the conference’s location.
English and Chinese are the most common languages in Singapore. The group tested Spanish, Chinese, and Hindi because they are the world’s most spoken languages after English. Personal curiosity and background played a part in inspiring the study.
“ChatGPT was very popular when it launched in 2022, especially for us computer science students who are always exploring new technology,” said Jin. “Non-native English speakers, like Mohit and I, noticed early on that chatbots underperformed in our native languages.”
School of Interactive Computing communications officer Nathan Deen and School of Computational Science and Engineering communications officer Bryant Wine contributed to this report.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Nathan Deen, Communications Officer
ndeen6@cc.gatech.edu
May. 06, 2024
Cardiologists and surgeons could soon have a new mobile augmented reality (AR) tool to improve collaboration in surgical planning.
ARCollab is an iOS AR application designed for doctors to interact with patient-specific 3D heart models in a shared environment. It is the first surgical planning tool that uses multi-user mobile AR in iOS.
The application’s collaborative feature overcomes limitations in traditional surgical modeling and planning methods. This offers patients better, personalized care from doctors who plan and collaborate with the tool.
Georgia Tech researchers partnered with Children’s Healthcare of Atlanta (CHOA) in ARCollab’s development. Pratham Mehta, a computer science major, led the group’s research.
“We have conducted two trips to CHOA for usability evaluations with cardiologists and surgeons. The overall feedback from ARCollab users has been positive,” Mehta said.
“They all enjoyed experimenting with it and collaborating with other users. They also felt like it had the potential to be useful in surgical planning.”
ARCollab’s collaborative environment is the tool’s most novel feature. It allows surgical teams to study and plan together in a virtual workspace, regardless of location.
ARCollab supports a toolbox of features for doctors to inspect and interact with their patients' AR heart models. With a few finger gestures, users can scale and rotate, “slice” into the model, and modify a slicing plane to view omnidirectional cross-sections of the heart.
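ARCollab itself is an iOS app, but the geometry behind a slicing plane is simple to sketch (a hypothetical illustration, not ARCollab’s code): a plane is defined by a point and a normal, and each model vertex is kept or culled by its signed distance to that plane:

```python
import numpy as np

# Hypothetical slicing-plane sketch: classify 3D model vertices by
# which side of a cutting plane they fall on.
def signed_distance(points, plane_point, normal):
    n = np.asarray(normal, float) / np.linalg.norm(normal)
    return (np.asarray(points, float) - plane_point) @ n

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, -1]], float)
d = signed_distance(verts, plane_point=[0, 0, 0], normal=[0, 0, 1])
visible = verts[d >= 0]   # the half of the model on the viewing side
print(visible.shape[0])   # 3 vertices remain
```

Rotating or translating the plane and re-running the classification is what lets users sweep omnidirectional cross-sections through the heart model.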
Developing ARCollab for iOS pays off in two ways. Distribution through the iOS App Store and Apple devices streamlines deployment and accessibility. Building ARCollab on Apple’s peer-to-peer network framework ensures the functionality of the AR components and lessens the learning curve, especially for experienced AR users.
ARCollab improves on traditional surgical planning practices that rely on physical heart models. Producing physical models is time-consuming, resource-intensive, and irreversible compared to digital models. It is also difficult for surgical teams to plan together since they are limited to studying a single physical model.
Digital and AR modeling is growing as an alternative to physical models. CardiacAR is one such tool the group has already created.
However, digital platforms lack the multi-user features essential for surgical teams to collaborate during planning. ARCollab’s multi-user workspace advances the technology’s potential as a replacement for physical modeling.
“Over the past year and a half, we have been working on incorporating collaboration into our prior work with CardiacAR,” Mehta said.
“This involved completely changing the codebase, rebuilding the entire app and its features from the ground up in a newer AR framework that was better suited for collaboration and future development.”
Its interactive and visualization features, along with its novelty and innovation, led the Conference on Human Factors in Computing Systems (CHI 2024) to accept ARCollab for presentation. The conference occurs May 11-16 in Honolulu.
CHI is considered the most prestigious conference for human-computer interaction and one of the top-ranked conferences in computer science.
M.S. student Harsha Karanth and alumnus Alex Yang (CS 2022, M.S. CS 2023) co-authored the paper with Mehta. They study under Polo Chau, an associate professor in the School of Computational Science and Engineering.
The Georgia Tech group partnered with Timothy Slesnick and Fawwaz Shaw from CHOA on ARCollab’s development.
“Working with the doctors and having them test out versions of our application and give us feedback has been the most important part of the collaboration with CHOA,” Mehta said.
“These medical professionals are experts in their field. We want to make sure to have features that they want and need, and that would make their job easier.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
May. 06, 2024
Thanks to a Georgia Tech researcher's new tool, application developers can now spot potentially harmful aspects of their prototypes.
Farsight is a tool designed for developers who use large language models (LLMs) to create applications powered by artificial intelligence (AI). Farsight alerts prototypers when they write LLM prompts that could cause harm or be misused.
Downstream users can expect to benefit from better quality and safer products made with Farsight’s assistance. The tool’s lasting impact, though, is that it fosters responsible AI awareness by coaching developers on the proper use of LLMs.
Machine Learning Ph.D. candidate Zijie (Jay) Wang is Farsight’s lead architect. He will present the paper at the upcoming Conference on Human Factors in Computing Systems (CHI 2024). Farsight ranked in the top 5% of papers accepted to CHI 2024, earning it an honorable mention for the conference’s best paper award.
“LLMs have empowered millions of people with diverse backgrounds, including writers, doctors, and educators, to build and prototype powerful AI apps through prompting. However, many of these AI prototypers don’t have training in computer science, let alone responsible AI practices,” said Wang.
“With a growing number of AI incidents related to LLMs, it is critical to make developers aware of the potential harms associated with their AI applications.”
Wang referenced an example when two lawyers used ChatGPT to write a legal brief. A U.S. judge sanctioned the lawyers because their submitted brief contained six fictitious case citations that the LLM fabricated.
With Farsight, the group aims to improve developers’ awareness of responsible AI use. It achieves this by highlighting potential use cases, affected stakeholders, and possible harm associated with an application in the early prototyping stage.
A user study involving 42 prototypers showed that developers could better identify potential harms associated with their prompts after using Farsight. The users also found the tool more helpful and usable than existing resources.
Feedback from the study showed Farsight encouraged developers to focus on end-users and think beyond immediate harmful outcomes.
“While resources, like workshops and online videos, exist to help AI prototypers, they are often seen as tedious, and most people lack the incentive and time to use them,” said Wang.
“Our approach was to consolidate and display responsible AI resources in the same space where AI prototypers write prompts. In addition, we leverage AI to highlight relevant real-life incidents and guide users to potential harms based on their prompts.”
Farsight employs an in-situ user interface to show developers the potential negative consequences of their applications during prototyping.
Alert symbols for “neutral,” “caution,” and “warning” notify users when prompts require more attention. When a user clicks the alert symbol, an awareness sidebar expands from one side of the screen.
The sidebar shows an incident panel with actual news headlines from incidents relevant to the harmful prompt. The sidebar also has a use-case panel that helps developers imagine how different groups of people can use their applications in varying contexts.
Another key feature is the harm envisioner. This functionality takes a user’s prompt as input and assists them in envisioning potential harmful outcomes. The prompt branches into an interactive node tree that lists use cases, stakeholders, and harms, like “societal harm,” “allocative harm,” “interpersonal harm,” and more.
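As a hypothetical sketch of the structure the harm envisioner renders (the labels and code are invented for illustration, not Farsight’s implementation), the branching prompt tree can be modeled as nested nodes:

```python
from dataclasses import dataclass, field

# Hypothetical node tree: a prompt branches into use cases, each with
# stakeholders, each with potential harms.
@dataclass
class Node:
    label: str
    children: list["Node"] = field(default_factory=list)

tree = Node("prompt: summarize patient records", [
    Node("use case: clinical handoff notes", [
        Node("stakeholder: patients", [
            Node("harm: interpersonal harm (privacy leak)"),
            Node("harm: allocative harm (biased triage)"),
        ]),
    ]),
])

def count_harms(node: Node) -> int:
    """Count leaf labels that name a potential harm."""
    return node.label.startswith("harm") + sum(map(count_harms, node.children))

print(count_harms(tree))  # 2
```

Walking such a tree interactively is what lets a prototyper expand one prompt into many envisioned stakeholders and harms.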
The novel design and insightful findings from the user study resulted in Farsight’s acceptance for presentation at CHI 2024.
CHI is considered the most prestigious conference for human-computer interaction and one of the top-ranked conferences in computer science.
CHI is affiliated with the Association for Computing Machinery. The conference takes place May 11-16 in Honolulu.
Wang worked on Farsight in Summer 2023 while interning at Google + AI Research group (PAIR).
Farsight’s co-authors from Google PAIR include Chinmay Kulkarni, Lauren Wilcox, Michael Terry, and Michael Madaio. The group’s ties to Georgia Tech extend beyond Wang.
Terry, the current co-leader of Google PAIR, earned his Ph.D. in human-computer interaction from Georgia Tech in 2005. Madaio graduated from Tech in 2015 with a M.S. in digital media. Wilcox was a full-time faculty member in the School of Interactive Computing from 2013 to 2021 and serves in an adjunct capacity today.
Though not an author, one of Wang’s influences is his advisor, Polo Chau. Chau is an associate professor in the School of Computational Science and Engineering. His group specializes in data science, human-centered AI, and visualization research for social good.
“I think what makes Farsight interesting is its unique in-workflow and human-AI collaborative approach,” said Wang.
“Furthermore, Farsight leverages LLMs to expand prototypers’ creativity and brainstorm a wide range of use cases, stakeholders, and potential harms.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Apr. 23, 2024
The College of Computing’s countdown to commencement began on April 11 when students, faculty, and staff converged at the 33rd Annual Awards Celebration.
The banquet celebrated the college community for an exemplary academic year and recognized the most distinguished individuals of 2023-2024. For Alex Orso, the reception was a high-water mark in his role as interim dean.
“I always say that the best part about my job is to brag about the achievements and accolades of my colleagues,” said Orso.
“It is my distinct honor and privilege to recognize these award winners and the collective success of the College of Computing.”
Orso’s colleagues from the School of Computational Science and Engineering (CSE) were among the celebration’s honorees. School of CSE students, faculty, and alumni earning awards this year include:
- Grace Driskill, M.S. CSE student - The Donald V. Jackson Fellowship
- Harshvardhan Baldwa, M.S. CSE student - The Marshal D. Williamson Fellowship
- Mansi Phute, M.S. CS student - The Marshal D. Williamson Fellowship
- Assistant Professor Chao Zhang - Outstanding Junior Faculty Research Award
- Nazanin Tabatbaei, teaching assistant in Associate Professor Polo Chau’s CSE 6242 Data & Visual Analytics course - Outstanding Instructional Associate Teaching Award
- Rodrigo Borela (Ph.D. CSE-CEE 2021), School of Computing Instruction lecturer and CSE program alumnus - William D. "Bill" Leahy Jr. Outstanding Instructor Award
- Pratham Mehta, undergraduate student in Chau’s research group - Outstanding Legacy Leadership Award
- Alexander Rodriguez (Ph.D. CS 2023), School of CSE alumnus - Outstanding Doctoral Dissertation Award
At the Institute level, Georgia Tech recognized Driskill, Baldwa, and Phute for their awards on April 10 at the annual Student Honors Celebration.
Driskill’s classroom achievement earned her a spot on the 2024 All-ACC Indoor Track and Field Academic Team. This follows her selection for the 2023 All-ACC Academic Team for cross country.
In summer 2023, Georgia Tech’s Center for Teaching and Learning released the Class of 1934 Honor Roll for spring semester courses. School of CSE awardees included Assistant Professor Srijan Kumar (CSE 6240: Web Search & Text Mining), Lecturer Max Mahdi Roozbahani (CS 4641: Machine Learning), and alumnus Mengmeng Liu (CSE 6242: Data & Visual Analytics).
School of CSE researchers also received recognition off campus throughout 2023-2024, a testament to the reach and impact of their work.
School of CSE Ph.D. student Gaurav Verma kicked off the year by receiving the J.P. Morgan Chase AI Research Ph.D. Fellowship. Verma was one of only 13 awardees from around the world selected for the 2023 class.
Along with seeing many of his students receive awards this year, Polo Chau attained a 2023 Google Award for Inclusion Research. Later in the year, the Institute promoted Chau to professor, which takes effect in the 2024-2025 academic year.
Schmidt Sciences selected School of CSE Assistant Professor Kai Wang as an AI2050 Early Career Fellow to advance artificial intelligence research for social good. As part of the fellowship’s second cohort, Wang is the first Georgia Tech faculty member to receive the award.
School of CSE Assistant Professor Yunan Luo received two significant awards to advance his work in computational biology. First, Luo received the Maximizing Investigators’ Research Award (MIRA) from the National Institutes of Health, which provides $1.8 million in funding over five years. Next, he received the 2023 Molecule Make Lab Institute (MMLI) seed grant.
Regents’ Professor Surya Kalidindi, jointly appointed with the George W. Woodruff School of Mechanical Engineering and School of CSE, was named a fellow to the 2023 class of the Department of Defense’s Laboratory-University Collaboration Initiative (LUCI).
2023-2024 was a monumental year for Assistant Professor Elizabeth Qian, jointly appointed with the Daniel Guggenheim School of Aerospace Engineering and the School of CSE.
The Air Force Office of Scientific Research selected Qian for the 2024 class of their Young Investigator Program. Earlier in the year, she received a grant under the Department of Energy’s Energy Earthshots Initiative.
Qian began the year by joining 81 other early-career engineers at the National Academy of Engineering’s Grainger Foundation Frontiers of Engineering 2023 Symposium. She also received the Hans Fischer Fellowship from the Institute for Advanced Study at the Technical University of Munich.
It was a big academic year for Associate Professor Elizabeth Cherry. Cherry was reelected to a three-year term as a council member-at-large of the Society for Industrial and Applied Mathematics (SIAM). Cherry is also co-chair of the SIAM organizing committee for next year’s Conference on Computational Science and Engineering (CSE25).
Cherry continues to serve as the School of CSE’s associate chair for academic affairs. These leadership contributions led to her being named to the 2024 ACC Academic Leaders Network (ACC ALN) Fellows program.
School of CSE Professor and Associate Chair Edmond Chow was co-author of a paper that received the Test of Time Award at Supercomputing 2023 (SC23). Right before SC23, Chow’s Ph.D. student Hua Huang was selected as an honorable mention for the 2023 ACM-IEEE CS George Michael Memorial HPC Fellowship.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Apr. 17, 2024
Computing research at Georgia Tech is getting faster thanks to a new state-of-the-art processing chip named after a female computer programming pioneer.
Tech is one of the first research universities in the country to receive the GH200 Grace Hopper Superchip from NVIDIA for testing, study, and research.
Designed for large-scale artificial intelligence (AI) and high-performance computing applications, the GH200 is intended for large language model (LLM) training, recommender systems, graph neural networks, and other tasks.
Alexey Tumanov and Tushar Krishna procured Georgia Tech’s first pair of Grace Hopper chips. Spencer Bryngelson acquired four more GH200s, which will arrive later this month.
“We are excited about this new design, which puts everything onto one chip and makes it accessible to both processors,” said Will Powell, a College of Computing research technologist.
“The Superchip’s design increases computation efficiency where data doesn’t have to move as much and all the memory is on the chip.”
A key feature of the new processing chip is that the central processing unit (CPU) and graphics processing unit (GPU) are on the same board.
NVIDIA’s NVLink Chip-2-Chip (C2C) interconnect joins the two units together. C2C delivers up to 900 gigabytes per second of total bandwidth, seven times faster than PCIe Gen5 connections used in newer accelerated systems.
As a result, the two components share memory and process data with more speed and better power efficiency. This feature is one that the Georgia Tech researchers want to explore most.
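To see why the bandwidth figure matters in practice, here is a back-of-envelope comparison of moving a large model's weights across the CPU-GPU link. The NVLink-C2C figure comes from the article; the PCIe Gen5 figure (roughly 128 GB/s for an x16 link, bidirectional) and the 60 GB example workload are illustrative assumptions, not measurements.

```python
# Idealized transfer-time comparison: NVLink-C2C vs. PCIe Gen5 x16.
# Bandwidth numbers are public spec-sheet figures, not benchmarks.
NVLINK_C2C_GBPS = 900      # GH200 NVLink-C2C total bandwidth, GB/s
PCIE_GEN5_X16_GBPS = 128   # approx. PCIe Gen5 x16 bidirectional, GB/s

def transfer_seconds(gigabytes: float, bandwidth_gbps: float) -> float:
    """Idealized transfer time, ignoring latency and protocol overhead."""
    return gigabytes / bandwidth_gbps

weights_gb = 60.0  # e.g., a 30-billion-parameter model at 16-bit precision
print(f"NVLink-C2C: {transfer_seconds(weights_gb, NVLINK_C2C_GBPS):.3f} s")
print(f"PCIe Gen5:  {transfer_seconds(weights_gb, PCIE_GEN5_X16_GBPS):.3f} s")
print(f"Speedup:    {NVLINK_C2C_GBPS / PCIE_GEN5_X16_GBPS:.1f}x")
```

The ratio of the two bandwidths recovers the roughly seven-fold speedup cited above; real workloads would also benefit from the shared memory eliminating redundant copies entirely.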
Tumanov, an assistant professor in the School of Computer Science, and his Ph.D. student Amey Agrawal, are testing machine learning (ML) and LLM workloads on the chip. Their work with the GH200 could lead to more sustainable computing methods that keep up with the exponential growth of LLMs.
The advent of household-name LLMs, like ChatGPT and Gemini, pushes the limits of current GPU-based architectures. The chip’s design overcomes known CPU-GPU bandwidth limitations. Tumanov’s group will put that design to the test through their studies.
Krishna is an associate professor in the School of Electrical and Computer Engineering and associate director of the Center for Research into Novel Computing Hierarchies (CRNCH).
His research focuses on optimizing data movement in modern computing platforms, including AI/ML accelerator systems. Ph.D. student Hao Kang uses the GH200 to analyze LLMs exceeding 30 billion parameters. This study will enable labs to explore deep learning optimizations with the new chip.
Bryngelson, an assistant professor in the School of Computational Science and Engineering, will use the chip to compute and simulate fluid and solid mechanics phenomena. His lab can use the CPU to reorder memory and perform disk writes while the GPU does parallel work. This capability is expected to significantly reduce the computational burden for some applications.
“Traditional CPU to GPU communication is slower and introduces latency issues because data passes back and forth over a PCIe bus,” Powell said. “Since they can access each other’s memory and share in one hop, the Superchip’s architecture boosts speed and efficiency.”
Grace Hopper is the inspirational namesake for the chip. She pioneered many developments in computer science that formed the foundation of the field today.
Hopper invented the first compiler, a program that translates computer source code into a target language. She also helped develop some of the earliest programming languages; her FLOW-MATIC was a direct ancestor of COBOL, which is still used today in data processing.
Hopper joined the U.S. Navy Reserve during World War II, tasked with programming the Mark I computer. She retired as a rear admiral in August 1986 after 42 years of military service.
Georgia Tech researchers hope to honor Hopper’s legacy by using the technology that bears her name, and her spirit of innovation, to make new discoveries.
“NVIDIA and other vendors show no sign of slowing down refinement of this kind of design, so it is important that our students understand how to get the most out of this architecture,” said Powell.
“Just having all these technologies isn’t enough. People must know how to build applications that actually benefit from these new architectures. That is the skill.”
Mar. 19, 2024
Computer science educators will soon gain valuable insights from computational epidemiology courses, like one offered at Georgia Tech.
B. Aditya Prakash is part of a research group that will host a workshop on how topics from computational epidemiology can enhance computer science classes.
These lessons would produce computer science graduates with improved skills in data science, modeling, simulation, artificial intelligence (AI), and machine learning (ML).
Because epidemics transcend the sphere of public health, these topics would train computer scientists versed in issues from social, financial, and political domains.
The group’s virtual workshop takes place on March 20 at the technical symposium for the Special Interest Group on Computer Science Education (SIGCSE). SIGCSE is one of 38 special interest groups of the Association for Computing Machinery (ACM). ACM is the world’s largest scientific and educational computing society.
“We decided to do a tutorial at SIGCSE because we believe that computational epidemiology concepts would be very useful in general computer science courses,” said Prakash, an associate professor in the School of Computational Science and Engineering (CSE).
“We want to give an introduction to concepts, like what computational epidemiology is, and how topics, such as algorithms and simulations, can be integrated into computer science courses.”
Prakash kicks off the workshop with an overview of computational epidemiology. He will use examples from his CSE 8803: Data Science for Epidemiology course to introduce basic concepts.
This overview includes a survey of models used to describe behavior of diseases. Models serve as foundations that run simulations, ultimately testing hypotheses and making predictions regarding disease spread and impact.
Prakash will explain the different kinds of models used in epidemiology, such as traditional mechanistic models and more recent ML and AI based models.
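As a concrete illustration of the mechanistic models described above, the sketch below simulates the classic SIR (Susceptible-Infected-Recovered) compartmental model, the standard starting point in computational epidemiology. The parameter values and population size are illustrative choices, not taken from any specific course material.

```python
# Minimal mechanistic SIR model, integrated with forward Euler steps.
# beta: transmission rate, gamma: recovery rate (illustrative values).
def sir_step(s, i, r, beta=0.3, gamma=0.1, dt=1.0):
    """Advance the SIR equations by one time step."""
    n = s + i + r
    new_infections = beta * s * i / n * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(s0=990.0, i0=10.0, r0=0.0, days=160):
    """Run the outbreak and record (S, I, R) for each day."""
    s, i, r = s0, i0, r0
    history = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r)
        history.append((s, i, r))
    return history

history = simulate()
peak_infected = max(i for _, i, _ in history)
```

Running the simulation and varying beta or gamma is exactly the kind of hypothesis-testing exercise simulations enable: with beta/gamma = 3, the infection curve rises, peaks, and declines as the susceptible pool is depleted.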
Prakash’s discussion includes modeling used in recent epidemics like Covid-19, Zika, H1N1 swine flu, and Ebola. He will also cover examples from the 19th and 20th centuries to illustrate how epidemiology has advanced using data science and computation.
“I strongly believe that data and computation have a very important role to play, and that the future of epidemiology and public health is computational,” Prakash said.
“My course and these workshops give that viewpoint, and provide a broad framework of data science and computational thinking that can be useful.”
While humankind has studied disease transmission for millennia, computational epidemiology is a new approach to understanding how diseases can spread throughout communities.
The Covid-19 pandemic helped bring computational epidemiology to the forefront of public awareness. This exposure has led to greater demand for further application from computer science education.
Prakash joins Baltazar Espinoza and Natarajan Meghanathan in the workshop presentation. Espinoza is a research assistant professor at the University of Virginia. Meghanathan is a professor at Jackson State University.
The group is connected through Global Pervasive Computational Epidemiology (GPCE). GPCE is a partnership of 13 institutions aimed at advancing computational foundations, engineering principles, and technologies of computational epidemiology.
The National Science Foundation (NSF) supports GPCE through the Expeditions in Computing program. Prakash is also principal investigator on other NSF-funded grants, and material from those projects appears in his workshop presentation.
[Related: Researchers to Lead Paradigm Shift in Pandemic Prevention with NSF Grant]
Outreach and broadening participation in computing are tenets of Prakash’s work and of GPCE because of how widely epidemics can reach. The SIGCSE workshop is one way the group employs educational programs to train the next generation of scientists around the globe.
“Algorithms, machine learning, and other topics are fundamental graduate and undergraduate computer science courses nowadays,” Prakash said.
“Using examples like projects, homework questions, and data sets, we want to show that the topics and ideas from computational epidemiology help students see a future where they apply their computer science education to pressing, real world challenges.”
Mar. 14, 2024
Schmidt Sciences has selected Kai Wang as one of 19 researchers to receive this year’s AI2050 Early Career Fellowship. In doing so, Wang becomes the first AI2050 fellow to represent Georgia Tech.
“I am excited about this fellowship because there are so many people at Georgia Tech using AI to create social impact,” said Wang, an assistant professor in the School of Computational Science and Engineering (CSE).
“I feel so fortunate to be part of this community and to help Georgia Tech bring more impact on society.”
AI2050 has allocated up to $5.5 million to support the cohort. Fellows receive up to $300,000 over two years and will join the Schmidt Sciences network of experts to advance their research in artificial intelligence (AI).
Wang’s AI2050 project centers on leveraging decision-focused AI to address challenges facing health and environmental sustainability. His goal is to strengthen and deploy decision-focused AI in collaboration with stakeholders to solve broad societal problems.
Wang’s method to decision-focused AI integrates machine learning with optimization to train models based on decision quality. These models borrow knowledge from decision-making processes in high-stakes domains to improve overall performance.
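The core idea of training on decision quality rather than prediction accuracy can be shown with a toy example. The newsvendor-style setup below, where understocking a resource costs more than overstocking it, is a stand-in illustration chosen for clarity; it is not the formulation used in Wang's research, and all numbers are made up.

```python
# Toy contrast between prediction-focused and decision-focused learning.
# Scenario: choose how much of a resource to stock given noisy demand.
import statistics

demand = [4, 5, 5, 6, 20]   # historical demand with one rare spike

def decision_cost(order, d, under=5.0, over=1.0):
    """Shortfalls cost 5x as much per unit as surplus (an assumption)."""
    return under * max(d - order, 0) + over * max(order - d, 0)

def avg_cost(order):
    """Average decision cost of an order quantity over the history."""
    return sum(decision_cost(order, d) for d in demand) / len(demand)

# Prediction-focused: minimize squared prediction error -> mean demand.
pred_focused = statistics.mean(demand)

# Decision-focused: choose the order that minimizes the decision cost itself.
dec_focused = min(range(0, 31), key=avg_cost)

print("prediction-focused order:", pred_focused, "cost:", avg_cost(pred_focused))
print("decision-focused order:  ", dec_focused, "cost:", avg_cost(dec_focused))
```

The prediction-focused forecast fits the average and is blindsided by the costly spike, while optimizing the decision objective directly hedges against it; in real decision-focused AI, that downstream objective is differentiated through an optimization layer rather than grid-searched.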
Part of Wang’s approach is to work closely with non-profit and non-governmental organizations. This collaboration helps Wang better understand problems at the point-of-need and gain knowledge from domain experts to custom-build AI models.
“It is very important to me to see my research impacting human lives and society,” Wang said. “That reinforces my interest and motivation in using AI for social impact.”
[Related: Wang, New Faculty Bolster School’s Machine Learning Expertise]
This year’s cohort is only the second in the fellowship’s history. Wang joins a class that spans four countries, six disciplines, and seventeen institutions.
AI2050 commits $125 million over five years to identify and support talented individuals seeking solutions to ensure society benefits from AI. Last year’s AI2050 inaugural class of 15 early career fellows received $4 million.
The namesake of AI2050 comes from the central motivating question that fellows answer through their projects:
It’s 2050. AI has turned out to be hugely beneficial to society. What happened? What are the most important problems we solved and the opportunities and possibilities we realized to ensure this outcome?
AI2050 encourages young researchers to pursue bold and ambitious work on difficult challenges and promising opportunities in AI. These projects involve research that is multidisciplinary, risky, and hard to fund through traditional means.
Schmidt Sciences, LLC is a 501(c)(3) non-profit organization supported by philanthropists Eric and Wendy Schmidt. Schmidt Sciences aims to accelerate and deepen understanding of the natural world and develop solutions to real-world challenges for public benefit.
Schmidt Sciences identifies under-supported or unconventional areas of exploration and discovery with potential for high impact. Focus areas include AI and advanced computing, astrophysics and space, biosciences, climate, and cross-science.
“I am most grateful for the advice from my mentors, colleagues, and collaborators, and of course AI2050 for choosing me for this prestigious fellowship,” Wang said. “The School of CSE has given me so much support, including career advice from junior and senior level faculty.”