Nov. 11, 2024
A first-of-its-kind algorithm developed at Georgia Tech is helping scientists study interactions between electrons. This innovation in modeling technology can lead to discoveries in physics, chemistry, materials science, and other fields.
The new algorithm is faster than existing methods while remaining highly accurate. The solver surpasses the limits of current models by scaling efficiently across chemical systems ranging from small to large.
Computer scientists and engineers benefit from the algorithm’s ability to balance processor loads. This work allows researchers to tackle larger, more complex problems without the prohibitive costs associated with previous methods.
The algorithm’s ingenuity lies in its ability to solve block linear systems. According to the researchers, their approach is the first known use of a block linear system solver to calculate electronic correlation energy.
The Georgia Tech team won’t need to travel far to share their findings with the broader high-performance computing community. They will present their work in Atlanta at the 2024 International Conference for High Performance Computing, Networking, Storage and Analysis (SC24).
“The combination of solving large problems with high accuracy can enable density functional theory simulation to tackle new problems in science and engineering,” said Edmond Chow, professor and associate chair of Georgia Tech’s School of Computational Science and Engineering (CSE).
Density functional theory (DFT) is a modeling method for studying electronic structure in many-body systems, such as atoms and molecules.
An important concept DFT models is electronic correlation, the interaction between electrons in a quantum system. Electron correlation energy measures how much the movement of one electron is influenced by the presence of all other electrons.
Random phase approximation (RPA) is used to calculate electron correlation energy. While RPA is very accurate, it becomes computationally more expensive as the size of the system being calculated increases.
Georgia Tech’s algorithm enhances electronic correlation energy computations within the RPA framework. The approach circumvents inefficiencies and achieves faster solution times, even for small-scale chemical systems.
The group integrated the algorithm into existing work on SPARC, a real-space electronic structure software package for accurate, efficient, and scalable solutions of DFT equations. School of Civil and Environmental Engineering Professor Phanish Suryanarayana is SPARC’s lead researcher.
The group tested the algorithm on small chemical systems of silicon crystals numbering as few as eight atoms. The method achieved faster calculation times and scaled to larger system sizes than direct approaches.
“This algorithm will enable SPARC to perform electronic structure calculations for realistic systems with a level of accuracy that is the gold standard in chemical and materials science research,” said Suryanarayana.
RPA is expensive because its computational cost scales quartically with system size: when a chemical system is doubled in size, the cost increases by a factor of 16.
Instead, Georgia Tech’s algorithm scales cubically by solving block linear systems. This capability makes it feasible to solve larger problems at less expense.
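To get a feel for the difference between quartic and cubic scaling, the arithmetic can be sketched directly (an illustration of the scaling exponents only, not of the SPARC code):

```python
# Illustrative arithmetic only: relative cost growth when a chemical system doubles in size,
# comparing quartic scaling (conventional RPA) with cubic scaling (the block-solver approach).
for exponent, label in [(4, "quartic O(N^4)"), (3, "cubic O(N^3)")]:
    print(f"{label}: doubling the system size multiplies the cost by {2 ** exponent}x")
# quartic O(N^4): 16x per doubling; cubic O(N^3): 8x per doubling.
# The gap widens quickly: quadrupling the system costs 256x under quartic scaling but only 64x under cubic.
```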
Solving block linear systems presents a challenging trade-off in choosing block sizes. Larger blocks reduce the number of solver steps, but they demand a higher computational cost per step on computer processors.
Tech’s solution is a solver with dynamic block size selection, which lets each processor independently choose the block sizes it calculates. This further assists in scaling and improves processor load balancing and parallel efficiency.
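The trade-off the solver navigates can be illustrated with a toy cost model (a sketch of the general idea only, not the team’s algorithm; the cost constants below are assumptions):

```python
import math

def toy_cost(num_rhs, block_size, step_overhead=5.0, per_column=1.0, coupling=0.05):
    """Toy model of a block solver's cost (illustration only; the constants are made up).

    Bigger blocks mean fewer solver steps, but each step does more per-column work
    plus extra coupling work that grows with the square of the block size.
    """
    steps = math.ceil(num_rhs / block_size)
    cost_per_step = step_overhead + per_column * block_size + coupling * block_size ** 2
    return steps * cost_per_step

# Each processor could pick the block size that minimizes its own estimate, which is the
# spirit of letting processors choose block sizes independently to balance their loads.
best = min(range(1, 65), key=lambda b: toy_cost(num_rhs=256, block_size=b))
print("block size minimizing the toy cost:", best)  # an interior optimum, neither 1 nor 64
```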
“The new algorithm has many forms of parallelism, making it suitable for immense numbers of processors,” Chow said. “The algorithm works in a real-space, finite-difference DFT code. Such a code can scale efficiently on the largest supercomputers.”
Georgia Tech alumni Shikhar Shah (Ph.D. CSE 2024), Hua Huang (Ph.D. CSE 2024), and Ph.D. student Boqin Zhang led the algorithm’s development. The project was the culmination of work for Shah and Huang, who completed their degrees this summer. John E. Pask, a physicist at Lawrence Livermore National Laboratory, joined the Tech researchers on the work.
Shah, Huang, Zhang, Suryanarayana, and Chow are among more than 50 students, faculty, research scientists, and alumni affiliated with Georgia Tech who are scheduled to give more than 30 presentations at SC24. The experts will present their research through papers, posters, panels, and workshops.
SC24 takes place Nov. 17-22 at the Georgia World Congress Center in Atlanta.
“The project’s success came from combining expertise from people with diverse backgrounds ranging from numerical methods to chemistry and materials science to high-performance computing,” Chow said.
“We could not have achieved this as individual teams working alone.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Oct. 24, 2024
The U.S. Department of Energy (DOE) has awarded Georgia Tech researchers a $4.6 million grant to develop improved cybersecurity protection for renewable energy technologies.
Associate Professor Saman Zonouz will lead the project and leverage the latest artificial intelligence (AI) technology to create Phorensics. The new tool will anticipate cyberattacks on critical infrastructure and provide analysts with an accurate reading of what vulnerabilities were exploited.
“This grant enables us to tackle one of the crucial challenges facing national security today: our critical infrastructure resilience and post-incident diagnostics to restore normal operations in a timely manner,” said Zonouz.
“Together with our amazing team, we will focus on cyber-physical data recovery and post-mortem forensics analysis after cybersecurity incidents in emerging renewable energy systems.”
As the integration of renewable energy technology into national power grids increases, so does their vulnerability to cyberattacks. These threats put energy infrastructure at risk and pose a significant danger to public safety and economic stability. The AI behind Phorensics will allow analysts and technicians to scale security efforts to keep up with a growing power grid that is becoming more complex.
This effort is part of the Security of Engineering Systems (SES) initiative at Georgia Tech’s School of Cybersecurity and Privacy (SCP). SES has three pillars: research, education, and testbeds, with multiple ongoing large, sponsored efforts.
“We had a successful hiring season for SES last year and will continue filling several open tenure-track faculty positions this upcoming cycle,” said Zonouz.
“With top-notch cybersecurity and engineering schools at Georgia Tech, we have begun the SES journey with a dedicated passion to pursue building real-world solutions to protect our critical infrastructures, national security, and public safety.”
Zonouz is the director of the Cyber-Physical Systems Security Laboratory (CPSec) and is jointly appointed by Georgia Tech’s School of Cybersecurity and Privacy (SCP) and the School of Electrical and Computer Engineering (ECE).
The three Georgia Tech researchers joining him on this project are Brendan Saltaformaggio, associate professor in SCP and ECE; Taesoo Kim, jointly appointed professor in SCP and the School of Computer Science; and Animesh Chhotaray, research scientist in SCP.
Katherine Davis, associate professor in the Texas A&M University Department of Electrical and Computer Engineering, has partnered with the team to develop Phorensics. The team will also collaborate with the National Renewable Energy Laboratory (NREL) and industry partners on technology transfer and commercialization initiatives.
The Energy Department defines renewable energy as energy from unlimited, naturally replenished resources, such as the sun, tides, and wind. Renewable energy can be used for electricity generation, space and water heating and cooling, and transportation.
News Contact
John Popham
Communications Officer II
College of Computing | School of Cybersecurity and Privacy
Oct. 17, 2024
Two new assistant professors joined the School of Computational Science and Engineering (CSE) faculty this fall. Lu Mi comes to Georgia Tech from the Allen Institute for Brain Science in Seattle, where she was a Shanahan Foundation Fellow.
We sat down with Mi to learn more about her background and to introduce her to the Georgia Tech and College of Computing communities.
Faculty: Lu Mi, assistant professor, School of CSE
Research Interests: Computational Neuroscience, Machine Learning
Education: Ph.D. in Computer Science from the Massachusetts Institute of Technology; B.S. in Measurement, Control, and Instruments from Tsinghua University
Hometown: Sichuan, China (home of the giant pandas)
How have your first few months at Georgia Tech gone so far?
I’ve really enjoyed my time at Georgia Tech. Developing a new course has been both challenging and rewarding. I’ve learned a lot from the process and conversations with students. My colleagues have been incredibly welcoming, and I’ve had the opportunity to work with some very smart and motivated students here at Georgia Tech.
You hit the ground running this year by teaching your CSE 8803 course on brain-inspired machine intelligence. What important concepts do you teach in this class?
This course focuses on comparing biological neural networks with artificial neural networks. We explore questions like: How does the brain encode information, perform computations, and learn? What can neuroscience and artificial intelligence (AI) learn from each other? Key topics include spiking neural networks, neural coding, and biologically plausible learning rules. By the end of the course, I expect students to have a solid understanding of neural algorithms and the emerging NeuroAI field.
When and how did you become interested in computational neuroscience in the first place?
I’ve been fascinated by how the brain works since I was young. My formal engagement with the field began during my Ph.D. research, where we developed algorithms to help neuroscientists map large-scale synaptic wiring diagrams in the brain. Since then, I’ve had the opportunity to collaborate with researchers at institutions like Harvard, the Janelia Research Campus, the Allen Institute for Brain Science, and the University of Washington on various exciting projects in this field.
What about your experience and research are you currently most proud of?
I’m particularly proud of the framework we developed to integrate black-box machine learning models with biologically realistic mechanistic models. We use advanced deep-learning techniques to infer unobserved information and combine this with prior knowledge from mechanistic models. This allows us to test hypotheses by applying different model variants. I believe this framework holds great potential to address a wide range of scientific questions, leveraging the power of AI.
What about Georgia Tech convinced you to accept a faculty position?
Georgia Tech CSE felt like a perfect fit for my background and research interests, particularly within the AI4Science initiative and the development of computational tools for biology and neuroscience. My work overlaps with several colleagues here, and I’m excited to collaborate with them. Georgia Tech also has a vibrant and impactful Neuro Next Initiative community, which is another great attraction.
What are your hobbies and interests when not researching and teaching?
I enjoy photography and love spending time with my two corgi dogs, especially taking them for walks.
What have you enjoyed most so far about living in Atlanta?
I’ve really appreciated the peaceful, green environment with so many trees. I’m also looking forward to exploring more outdoor activities, like fishing and golfing.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Oct. 16, 2024
A new surgery planning tool powered by augmented reality (AR) is in development for doctors who need closer collaboration when planning heart operations. Promising results from a recent usability test have moved the platform one step closer to everyday use in hospitals worldwide.
Georgia Tech researchers partnered with medical experts from Children’s Healthcare of Atlanta (CHOA) to develop and test ARCollab. The iOS-based app leverages advanced AR technologies to let doctors collaborate and interact with a patient’s 3D heart model when planning surgeries.
The usability evaluation demonstrates the app’s effectiveness, finding that ARCollab is easy to use and understand, fosters collaboration, and improves surgical planning.
“This tool is a step toward easier collaborative surgical planning. ARCollab could reduce the reliance on physical heart models, saving hours and even days of time while maintaining the collaborative nature of surgical planning,” said M.S. student Pratham Mehta, the app’s lead researcher.
“Not only can it benefit doctors when planning for surgery, it may also serve as a teaching tool to explain heart deformities and problems to patients.”
Two cardiologists and three cardiothoracic surgeons from CHOA tested ARCollab. The two-day study ended with the doctors taking a 14-question survey assessing the app’s usability. The survey also solicited general feedback and top features.
The Georgia Tech group determined from the open-ended feedback that:
- ARCollab enables new collaboration capabilities that are easy to use and facilitate surgical planning.
- Anchoring the model to a physical space is important for better interaction.
- Portability and real-time interaction are crucial for collaborative surgical planning.
Users rated each of the 14 questions on a 7-point Likert scale, with one being “strongly disagree” and seven being “strongly agree.” The 14 questions were organized into five categories: overall, multi-user, model viewing, model slicing, and saving and loading models.
The multi-user category attained the highest rating with an average of 6.65. This included a unanimous 7.0 rating that it was easy to identify who was controlling the heart model in ARCollab. The scores also showed it was easy for users to connect with devices, switch between viewing and slicing, and view other users’ interactions.
The model slicing category received the lowest, though still strong, average of 5.5. These questions assessed how easy the finger gestures were to use and understand and how useful it was to toggle the slice direction.
Based on feedback, the researchers will explore adding support for remote collaboration. This would assist doctors in collaborating when not in a shared physical space. Another improvement is extending the save feature to support multiple states.
“The surgeons and cardiologists found it extremely beneficial for multiple people to be able to view the model and collaboratively interact with it in real-time,” Mehta said.
The user study took place in a CHOA classroom. CHOA also provided a 3D heart model for the test using anonymous medical imaging data. Georgia Tech’s Institutional Review Board (IRB) approved the study and the group collected data in accordance with Institute policies.
The five test participants regularly perform cardiovascular surgical procedures and are employed by CHOA.
The Georgia Tech group provided each participant with an iPad Pro with the latest iOS version and the ARCollab app installed. Using commercial devices and software aligns with the group’s intention to make the tool universally available and deployable.
“We plan to continue iterating ARCollab based on the feedback from the users,” Mehta said.
“The participants suggested the addition of a ‘distance collaboration’ mode, enabling doctors to collaborate even if they are not in the same physical environment. This allows them to facilitate surgical planning sessions from home or otherwise.”
The Georgia Tech researchers are presenting ARCollab and the user study results at IEEE VIS 2024, the Institute of Electrical and Electronics Engineers (IEEE) visualization conference.
IEEE VIS is the world’s most prestigious conference for visualization research and the second-highest rated conference for computer graphics. It takes place virtually Oct. 13-18, moved from its venue in St. Pete Beach, Florida, due to Hurricane Milton.
The ARCollab research group's presentation at IEEE VIS comes months after they shared their work at the Conference on Human Factors in Computing Systems (CHI 2024).
Undergraduate student Rahul Narayanan and alumni Harsha Karanth (M.S. CS 2024) and Haoyang (Alex) Yang (CS 2022, M.S. CS 2023) co-authored the paper with Mehta. They study under Polo Chau, a professor in the School of Computational Science and Engineering.
The Georgia Tech group partnered with Dr. Timothy Slesnick and Dr. Fawwaz Shaw from CHOA on ARCollab’s development and user testing.
"I'm grateful for these opportunities since I get to showcase the team's hard work," Mehta said.
“I can meet other like-minded researchers and students who share these interests in visualization and human-computer interaction. There is no better form of learning.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Oct. 09, 2024
New cybersecurity research initiatives into generative artificial intelligence (AI) tools will soon be underway at Georgia Tech, thanks to the efforts of a new assistant professor in the School of Cybersecurity and Privacy (SCP).
While some researchers seek ways to integrate AI into security practices, Teodora Baluta studies the algorithms and datasets used to train new AI tools to assess their security in theory and practice.
Specifically, she investigates whether the outputs from generative AI tools are abusing data or producing text based on stolen data. As one of Georgia Tech’s newest faculty, Baluta is determined to build on the research she completed during her Ph.D. at the National University of Singapore.
She plans to expand her past works by continuing to analyze existing AI technologies and researching ways to build better machine learning systems with security measures already in place.
“One thing that excites me about joining SCP is its network of experts that can weigh in on aspects that are outside of my field,” said Baluta. “I am really looking forward to building on my past works by studying the bigger security picture of AI and machine learning.”
As a new faculty member, Baluta is looking for Ph.D. students interested in joining her in these new research initiatives.
“We’re going to be looking at topics such as the mathematical possibility of detecting deep fakes, uncovering the malicious intent behind AI use, and how to build better AI models with security and privacy safeguards,” she said.
Baluta’s research has been recognized by Google’s Ph.D. fellowship program and Georgia Tech’s EECS Rising Stars Workshop in 2023. As a Ph.D. student, she earned the Dean’s Graduate Research Excellence Award and the President’s Graduate Fellowship at the National University of Singapore. She was also selected as a finalist for the Microsoft Research Ph.D. Fellowship, Asia-Pacific.
News Contact
John Popham
Communications Officer II
School of Cybersecurity and Privacy
Oct. 01, 2024
Even though artificial intelligence (AI) is not advanced enough to help the average person build weapons of mass destruction, federal agencies know it could be possible and are keeping pace with next generation technologies through rigorous research and strategic partnerships.
It is a delicate balance, but as the leader of the Department of Homeland Security’s (DHS) Countering Weapons of Mass Destruction Office (CWMD) told a room full of Georgia Tech students, faculty, and staff, there is no room for error.
“You have to be right all the time, the bad guys only have to be right once,” said Mary Ellen Callahan, assistant secretary for CWMD.
As a guest of John Tien, former DHS deputy secretary and professor of practice in the School of Cybersecurity and Privacy as well as the Sam Nunn School of International Affairs, Callahan was at Georgia Tech for three separate speaking engagements in late September.
"Assistant Secretary Callahan's contributions were remarkable in so many ways,” said Tien. “Most importantly, I love how she demonstrated to our students that the work in the fields of cybersecurity, privacy, and homeland security is an honorable, interesting, and substantive way to serve the greater good of keeping the American people safe and secure. As her former colleague at the U.S. Department of Homeland Security, I was proud to see her represent her CWMD team, DHS, and the Biden-Harris Administration in the way she did, with humility, personality, and leadership."
While AI-assisted WMDs are terrifying to think about, they are just a glimpse into what Callahan’s office handles on a regular basis. The assistant secretary walked her listeners through how CWMD works with federal and local law enforcement to identify and detect the signs of potential chemical, biological, radiological, or nuclear (CBRN) weapons.
“There's a whole cadre of professionals who spend every day preparing for the worst day in U.S. history,” said Callahan. “They are doing everything in their power to make sure that that does not happen.”
CWMD is also researching ways to implement AI technologies into current surveillance systems to help identify and respond to threats faster. For example, an AI-backed bio-hazard surveillance system would allow analysts to characterize and contextualize the risk of potential bio-hazard threats in a timely manner.
Callahan’s office spearheaded a report exploring the advantages and risks of AI in, “Reducing the Risks at the Intersection of Artificial Intelligence and Chemical, Biological, Radiological, and Nuclear Threats,” which was released to the public earlier this year.
The report was a multidisciplinary effort that was created in collaboration with the White House Office of Science and Technology Policy, Department of Energy, academic institutions, private industries, think tanks, and third-party evaluators.
During his introduction of the assistant secretary, SCP Chair Michael Bailey told those seated in the Coda Atrium that Callahan’s career is an incredible example of the interdisciplinary path he hopes the school’s students and faculty can use as a roadmap.
“Important, impactful, and interdisciplinary research can be inspired by everyday problems,” he said. "We believe that building a secure future requires revolutionizing security education and being vigilant, and together, we can achieve this goal."
While on campus Tuesday, Callahan gave a special guest lecture to the students in “CS 3237 Human Dimension of Cybersecurity: People, Organizations, Societies,” and “CS 4267 - Critical Infrastructures.” Following the lecture, she gave a prepared speech to students, faculty, and staff.
Lastly, she participated in a moderated panel discussion with SCP J.Z. Liang Chair Peter Swire and Jerry Perullo, SCP professor of practice and former CISO of International Continental Exchange as well as the New York Stock Exchange. The panel was moderated by Tien.
News Contact
John Popham, Communications Officer II
School of Cybersecurity and Privacy | Georgia Institute of Technology
scp.cc.gatech.edu | in/jp-popham on LinkedIn
Sep. 26, 2024
Is it a building or a street? How tall is the building? Are there powerlines nearby?
These are details autonomous flying vehicles would need to know to function safely. However, few aerial image datasets exist that can adequately train the computer vision algorithms that would pilot these vehicles.
That’s why Georgia Tech researchers created a new benchmark dataset of computer-generated aerial images.
Judy Hoffman, an assistant professor in Georgia Tech’s School of Interactive Computing, worked with students in her lab to create SKYSCENES. The dataset contains over 33,000 aerial images of cities curated from a computer simulation program.
Hoffman said sufficient training datasets could unlock the potential of autonomous flying vehicles. Constructing those datasets is a challenge the computer vision research community has been working for years to overcome.
“You can’t crowdsource it the same way you would standard internet images,” Hoffman said. “Trying to collect it manually would be very slow and expensive — akin to what the self-driving industry is doing driving around vehicles, but now you’re talking about drones flying around.
“We must fix those problems to have models that work reliably and safely for flying vehicles.”
Many existing datasets aren’t annotated well enough for algorithms to distinguish objects in the image. For example, the algorithms may not be able to tell the surface of a building from the surface of a street.
Working with Hoffman, Ph.D. student Sahil Khose tried a new approach — constructing a synthetic image data set from a ground-view, open-source simulator known as CARLA.
CARLA was originally designed to provide ground-view simulation for self-driving vehicles. It creates an open-world virtual reality that allows users to drive around in computer-generated cities.
Khose and his collaborators adjusted CARLA’s interface to support aerial views that mimic views one might get from unmanned aerial vehicles (UAVs).
What's the Forecast?
The team also created new virtual scenarios to mimic the real world by accounting for changes in weather, times of day, various altitudes, and population per city. The algorithms will struggle to recognize the objects in the frame consistently unless those details are incorporated into the training data.
“CARLA’s flexibility offers a wide range of environmental configurations, and we take several important considerations into account while curating SKYSCENES images from CARLA,” Khose said. “Those include strategies for obtaining diverse synthetic data, embedding real-world irregularities, avoiding correlated images, addressing skewed class representations, and reproducing precise viewpoints.”
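As a rough illustration of the kind of configuration this involves, here is a minimal sketch using CARLA’s public Python API (not the SKYSCENES curation pipeline itself; the host, port, resolution, altitude, and weather values are assumptions):

```python
import carla  # CARLA's Python API (https://carla.org)

# Connect to a running CARLA server (host and port are the defaults; values are assumptions).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Vary the environment: weather and sun angle change the appearance of every capture.
world.set_weather(carla.WeatherParameters(
    cloudiness=80.0, precipitation=30.0, fog_density=10.0, sun_altitude_angle=15.0))

# Spawn an RGB camera 100 m above the ground, pointing straight down to mimic a UAV view.
blueprints = world.get_blueprint_library()
cam_bp = blueprints.find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "2048")
cam_bp.set_attribute("image_size_y", "2048")
aerial_pose = carla.Transform(carla.Location(x=0.0, y=0.0, z=100.0),
                              carla.Rotation(pitch=-90.0))
camera = world.spawn_actor(cam_bp, aerial_pose)

# Save each rendered frame to disk as it arrives.
camera.listen(lambda image: image.save_to_disk(f"skyscenes_like/{image.frame:06d}.png"))
```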
SKYSCENES is not the largest dataset of aerial images to be released, but a paper co-authored by Khose shows that models trained on it outperform models trained on existing datasets.
Khose said models trained on this dataset exhibit strong generalization to real-world scenarios, and integration with real-world data enhances their performance. The dataset also offers controlled variability, which is essential for performing various tasks.
“This dataset drives advancements in multi-view learning, domain adaptation, and multimodal approaches, with major implications for applications like urban planning, disaster response, and autonomous drone navigation,” Khose said. “We hope to bridge the gap for synthetic-to-real adaptation and generalization for aerial images.”
Seeing the Whole Picture
For algorithms, generalization is the ability to perform tasks based on new data that expands beyond the specific examples on which they were trained.
“If you have 200 images, and you train a model on those images, they’ll do well at recognizing what you want them to recognize in that closed-world initial setting,” Hoffman said. “But if we were to take aerial vehicles and fly them around cities at various times of the day or in other weather conditions, they would start to fail.”
That’s why Khose designed algorithms to enhance the quality of the curated images.
“These images are captured from 100 meters above ground, which means the objects appear small and are challenging to recognize,” he said. “We focused on developing algorithms specifically designed to address this.”
Those algorithms elevate the ability of ML models to recognize small objects, improving their performance in navigating new environments.
“Our annotations help the models capture a more comprehensive understanding of the entire scene — where the roads are, where the buildings are, and know they are buildings and not just an obstacle in the way,” Hoffman said. “It gives a richer set of information when planning a flight.
“To work safely, many autonomous flight plans might require a map given to them beforehand. If you have successful vision systems that understand exactly what the obstacles in the real world are, you could navigate in previously unseen environments.”
For more information about Georgia Tech Research at ECCV 2024, click here.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
Sep. 24, 2024
A year ago, Ray Hung, a master’s student in computer science, assisted Professor Thad Starner in constructing an artificial intelligence (AI)-powered anti-plagiarism tool for Starner’s 900-student Intro to Artificial Intelligence (CS3600) course.
While the tool proved effective, Hung began considering ways to deter plagiarism and improve the education system.
Plagiarism can be prevalent in online exams, so Hung looked at oral examinations commonly used in European education systems and rooted in the Socratic method.
One of the advantages of oral assessments is they naturally hinder cheating. Consulting ChatGPT wouldn’t benefit a student unless the student memorizes the entire answer. Even then, follow-up questions would reveal a lack of genuine understanding.
Hung drew inspiration from the 2009 reboot of Star Trek, particularly the opening scene in which a young Spock provides oral answers to questions prompted by AI.
“I think we can do something similar,” Hung said. “Research has shown that oral assessment improves people’s material understanding, critical thinking, and communication skills.
“The problem is that it’s not scalable with human teachers. A professor may have 600 students. Even with teaching assistants, it’s not practical to conduct oral assessments. But with AI, it’s now possible.”
Hung developed The Socratic Mind with Starner, Scheller College of Business Assistant Professor Eunhee Sohn, and researchers from the Georgia Tech Center for 21st Century Universities (C21U).
The Socratic Mind is a scalable, AI-powered oral assessment platform leveraging Socratic questioning to challenge students to explain, justify, and defend their answers to showcase their understanding.
“We believe that if you truly understand something, you should be able to explain it,” Hung said.
“There is a deeper need for fostering genuine understanding and cultivating high-order thinking skills. I wanted to promote an education paradigm in which critical thinking, material understanding, and communication skills play integral roles and are at the forefront of our education.”
Hung entered his project into the Learning Engineering Tools Competition, one of the largest education technology competitions in the world. Hung and his collaborators were among five teams that won a Catalyst Award and received a $50,000 prize.
Benefits for Students
The Socratic Mind will be piloted in several classes this semester with about 2,000 students participating. One of those classes is the Intro to Computing (CS1301) class taught by College of Computing Professor David Joyner.
Hung said The Socratic Mind will be a resource students can use to prepare to defend their dissertation or to teach a class if they choose to pursue a Ph.D. Anyone struggling with public speaking or preparing for job interviews will find the tool helpful.
“Many users are interested in AI roleplay to practice real-world conversations,” he said. “The AI can roleplay a manager if you want to discuss a promotion. It can roleplay as an interviewer if you have a job interview. There are a lot of uses for oral assessment platforms where you can practice talking with an AI.
“I hope this tool helps students find their education more valuable and help them become better citizens, workers, entrepreneurs, or whoever they want to be in the future.”
Hung said the chatbot is not only conversational but also adverse to human persuasion because it follows the Socratic method of asking follow-up questions.
“ChatGPT and most other large language models are trained as helpful, harmless assistants,” he said. “If you argue with it and hold your position strong enough, you can coerce it to agree. We don’t want that.
“The Socratic Mind AI will follow up with you in real-time about what you just said, so it’s not a one-way conversation. It’s interactive and engaging and mimics human communication well.”
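A minimal sketch of how such a follow-up loop could be structured appears below (a generic illustration, not The Socratic Mind’s implementation; ask_llm is a hypothetical placeholder for whatever language model backend is used):

```python
# Hypothetical sketch of a Socratic follow-up loop; ask_llm() is a stand-in, not a real API.
SYSTEM_PROMPT = (
    "You are a Socratic examiner. After each student answer, ask one probing follow-up "
    "question that tests whether the student truly understands the material. Do not give "
    "away answers, and do not change your assessment just because the student insists."
)

def ask_llm(messages):
    """Placeholder for a call to a language model backend (an assumption, not a real library)."""
    raise NotImplementedError("plug in an LLM client here")

def oral_exam(first_question, num_followups=3):
    # Keep the full conversation so each follow-up reacts to what the student just said.
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "assistant", "content": first_question}]
    print("Examiner:", first_question)
    for _ in range(num_followups):
        student_answer = input("Student: ")
        messages.append({"role": "user", "content": student_answer})
        followup = ask_llm(messages)  # the model replies with a probing follow-up question
        messages.append({"role": "assistant", "content": followup})
        print("Examiner:", followup)
```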
Educational Overhaul
C21U Director of Research in Education Innovation Jonna Lee and C21U Research Scientist Meryem Soylu will measure The Socratic Mind’s effectiveness during the pilot and determine its scalability.
“I thought it would be interesting to develop this further from a learning engineering perspective because it’s about systematic problem solving, and we want to create scalable solutions with technologies,” Lee said.
“I hope we can find actionable insights about how this AI tool can help transform classroom learning and assessment practices compared to traditional methods. We see the potential for personalized learning for various student populations, including non-traditional lifetime learners."
Hung said The Socratic Mind has the potential to revolutionize the U.S. education system depending on how the system chooses to incorporate AI.
Recognizing that the advancement of AI is likely an unstoppable trend, Hung advocates leveraging AI to enhance learning and unlock human potential rather than focusing on restrictions.
“We are in an era in which information is abundant, but wisdom is scarce,” Hung said. “Shallow and rapid interactions drive social media, for example. We think it’s a golden time to elevate people’s critical thinking and communication skills.”
For more information about The Socratic Mind and to try a demo, visit the project's website.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
Sep. 19, 2024
A new algorithm tested on NASA’s Perseverance Rover on Mars may lead to better forecasting of hurricanes, wildfires, and other extreme weather events that impact millions globally.
Georgia Tech Ph.D. student Austin P. Wright is first author of a paper that introduces Nested Fusion. The new algorithm improves scientists’ ability to search for past signs of life on the Martian surface.
In addition to supporting NASA’s Mars 2020 mission, Nested Fusion offers methods that scientists in other fields working with large, overlapping datasets can apply to their studies.
Wright presented Nested Fusion at the 2024 International Conference on Knowledge Discovery and Data Mining (KDD 2024) where it was a runner-up for the best paper award. KDD is widely considered the world's most prestigious conference for knowledge discovery and data mining research.
“Nested Fusion is really useful for researchers in many different domains, not just NASA scientists,” said Wright. “The method visualizes complex datasets that can be difficult to get an overall view of during the initial exploratory stages of analysis.”
Nested Fusion combines datasets with different resolutions to produce a single, high-resolution visual distribution. Using this method, NASA scientists can more easily analyze multiple datasets from various sources at the same time. This can lead to faster studies of Mars’ surface composition to find clues of previous life.
The algorithm demonstrates how data science impacts traditional scientific fields like chemistry, biology, and geology.
Even further, Wright is developing Nested Fusion applications to model shifting climate patterns, plant and animal life, and other concepts in the earth sciences. The same method can combine overlapping datasets from satellite imagery, biomarkers, and climate data.
“Users have extended Nested Fusion and similar algorithms to earth science contexts, and we have received very positive feedback,” said Wright, who studies machine learning (ML) at Georgia Tech.
“Cross-correlational analysis takes a long time and is typically not done in the initial stages of research, when patterns emerge and new hypotheses form. Nested Fusion enables people to discover these patterns much earlier.”
Wright is the data science and ML lead for PIXLISE, the software that NASA JPL scientists use to study data from the Mars Perseverance Rover.
Perseverance uses its Planetary Instrument for X-ray Lithochemistry (PIXL) to collect data on mineral composition of Mars’ surface. PIXL’s two main tools that accomplish this are its X-ray Fluorescence (XRF) Spectrometer and Multi-Context Camera (MCC).
When PIXL scans a target area, it creates two co-aligned datasets from the components. XRF collects a sample's fine-scale elemental composition. MCC produces images of a sample to gather visual and physical details like size and shape.
A single XRF spectrum corresponds to approximately 100 MCC imaging pixels for every scan point. Each tool’s unique resolution makes mapping between overlapping data layers challenging. However, Wright and his collaborators designed Nested Fusion to overcome this hurdle.
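The resolution mismatch can be pictured with a small sketch (a generic nearest-neighbor alignment of coarse scan points to fine image pixels, shown only to illustrate the problem; it is not the Nested Fusion algorithm, and the array sizes and channel counts are made up):

```python
import numpy as np

# Toy setup: a 10x10 grid of coarse XRF scan points overlaying a 100x100 MCC image,
# so each scan point covers roughly 100 image pixels, as described for PIXL above.
xrf = np.random.rand(10, 10, 5)    # 5 made-up elemental-abundance channels per scan point
mcc = np.random.rand(100, 100, 3)  # RGB-like image pixels

# Map every fine pixel to its nearest coarse scan point (simple nearest-neighbor alignment).
rows = np.arange(100) // 10                        # fine index -> coarse index
upsampled_xrf = xrf[rows[:, None], rows[None, :]]  # shape (100, 100, 5)

# Stack the upsampled coarse channels with the fine image channels into one array
# that joint analysis or visualization could consume.
fused = np.concatenate([mcc, upsampled_xrf], axis=-1)  # shape (100, 100, 8)
print(fused.shape)
```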
In addition to advancing data science, Nested Fusion improves NASA scientists' workflow. Using the method, a single scientist can form an initial estimate of a sample’s mineral composition in a matter of hours. Before Nested Fusion, the same task required days of collaboration between teams of experts on each different instrument.
“I think one of the biggest lessons I have taken from this work is that it is valuable to always ground my ML and data science problems in actual, concrete use cases of our collaborators,” Wright said.
“I learn from collaborators what parts of data analysis are important to them and the challenges they face. By understanding these issues, we can discover new ways of formalizing and framing problems in data science.”
Wright presented Nested Fusion at KDD 2024, held Aug. 25-29 in Barcelona, Spain. The conference is organized by SIGKDD, an official special interest group of the Association for Computing Machinery.
Nested Fusion won runner-up for best paper in the applied data science track, which comprised more than 150 papers. Hundreds of other papers were presented in the conference’s research track, workshops, and tutorials.
Wright’s mentors, Scott Davidoff and Polo Chau, co-authored the Nested Fusion paper. Davidoff is a principal research scientist at the NASA Jet Propulsion Laboratory. Chau is a professor at the Georgia Tech School of Computational Science and Engineering (CSE).
“I was extremely happy that this work was recognized with the best paper runner-up award,” Wright said. “This kind of applied work can sometimes be hard to find the right academic home, so finding communities that appreciate this work is very encouraging.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Aug. 20, 2024
For three days, a cybercriminal unleashed a crippling ransomware attack on the futuristic city of Northbridge. The attack shut down the city’s infrastructure and severely impacted public services, until Georgia Tech cybersecurity experts stepped in to stop it.
This scenario played out this weekend at the DARPA AI Cyber Challenge (AIxCC) semi-final competition held at DEF CON 32 in Las Vegas. Team Atlanta, which included the Georgia Tech experts, was among the contest’s winners.
Team Atlanta will now compete against six other teams in the final round that takes place at DEF CON 33 in August 2025. The finalists will keep their AI system and improve it over the next 12 months using the $2 million semi-final prize.
The AI systems in the finals must be open sourced and ready for immediate, real-world launch. The AIxCC final competition will award a $4 million grand prize to the ultimate champion.
Team Atlanta is made up of past and present Georgia Tech students and was put together with the help of SCP Professor Taesoo Kim. Not only did the team secure a spot in the final competition, but they also found a zero-day vulnerability in the contest itself.
“I am incredibly proud to announce that Team Atlanta has qualified for the finals in the DARPA AIxCC competition,” said Taesoo Kim, professor in the School of Cybersecurity and Privacy and a vice president of Samsung Research.
“This achievement is the result of exceptional collaboration across various organizations, including the Georgia Tech Research Institute (GTRI), industry partners like Samsung, and international academic institutions such as KAIST and POSTECH.”
After noticing discrepancies in the competition score board, the team discovered and reported a bug in the competition itself. The type of vulnerability they discovered is known as a zero-day vulnerability, because vendors have zero days to fix the issue.
While this didn’t earn Team Atlanta additional points, the competition organizer acknowledged the team and their finding during the closing ceremony.
“Our team, deeply rooted in Atlanta and largely composed of Georgia Tech alumni, embodies the innovative spirit and community values that define our city,” said Kim.
“With over 30 dedicated students and researchers, we have demonstrated the power of cross-disciplinary teamwork in the semi-final event. As we advance to the finals, we are committed to pushing the boundaries of cybersecurity and artificial intelligence, and I firmly believe the resulting systems from this competition will transform the security landscape in the coming year!”
The team tested their cyber reasoning system (CRS), dubbed Atlantis, on software used for data management, website support, healthcare systems, supply chains, electrical grids, transportation, and other critical infrastructures.
Atlantis is a next-generation, bug-finding and fixing system that can hunt bugs in multiple coding languages. The system immediately issues accurate software patches without any human intervention.
AIxCC is a Pentagon-backed initiative that was announced in August 2023 and will award up to $20 million in prize money throughout the competition. Team Atlanta was among the 42 teams that qualified for the semi-final competition earlier this year.
News Contact
John Popham
Communications Officer II at the School of Cybersecurity and Privacy