Mar. 04, 2024
Hyundai Motor Group Innovation Center Singapore hosted the Meta-Factory Conference Jan. 23 – 24. It brought together academic leaders, industry experts, and manufacturing companies to discuss technology and the next generation of integrated manufacturing facilities.
Seth Hutchinson, executive director of the Institute for Robotics and Intelligent Machines at Georgia Tech, delivered a keynote lecture on “The Impacts of Today’s Robotics Innovation on the Relationship Between Robots and Their Human Co-Workers in Manufacturing Applications,” an overview of current state-of-the-art robotic technologies and future research trends in robots designed to interact with human workers in manufacturing.
In addition to the keynote, Hutchinson also participated in the Hyundai Motor Group's Smart Factory Executive Technology Advisory Committee (E-TAC) panel on comprehensive future manufacturing directions and toured the new Hyundai Meta-Factory to observe how digital-twin technology is being applied in their human-robot collaborative manufacturing environment.
Hutchinson is a professor in the School of Interactive Computing. He received his Ph.D. from Purdue University in 1988, and in 1990 joined the University of Illinois Urbana-Champaign, where he was professor of electrical and computer engineering until 2017 and is currently professor emeritus. He has served on the Hyundai Motor Group's Smart Factory E-TAC since 2022.
Hyundai Motor Group Innovation Center Singapore is Hyundai Motor Group’s open innovation hub to support research and development of human-centered smart manufacturing processes using advanced technologies such as artificial intelligence, the Internet of Things, and robotics.
- Christa M. Ernst
Related Links
- Hyundai Newsroom Article: Link
- Event Link: https://mfc2024.com/
- Keynote Speakers: https://mfc2024.com/keynotes/
News Contact
Christa M. Ernst - Research Communications Program Manager
christa.ernst@research.gatech.edu
Feb. 05, 2024
Scientists are always looking for better computer models that simulate the complex systems that define our world. To meet this need, a Georgia Tech workshop held Jan. 16 illustrated how new artificial intelligence (AI) research could usher in the next generation of scientific computing.
The workshop focused on applying AI to the optimization of complex systems. Presentations on climatological and electromagnetic simulations showed how these techniques yield more efficient and accurate computer models. The workshop also advanced AI research itself, since AI models are typically not well suited to optimization tasks.
The School of Computational Science and Engineering (CSE) and Institute for Data Engineering and Science jointly sponsored the workshop.
School of CSE Assistant Professors Peng Chen and Raphaël Pestourie led the workshop’s organizing committee and moderated the workshop’s two panel discussions. The duo also presented their own research, highlighting the potential of scientific AI.
Chen shared his work on derivative-informed neural operators (DINOs). DINOs are a class of neural networks that use derivative information to approximate solutions of partial differential equations. The derivative enhancement results in neural operators that are more accurate and efficient.
During his talk, Chen showed how DINOs make better predictions with reliable derivatives. These have the potential to solve data assimilation problems in weather and flood prediction. Other applications include allocating sensors for early tsunami warnings and designing new self-assembly materials.
All these models contain elements of uncertainty where data is unknown, noisy, or changes over time. Not only are DINOs a powerful tool for quantifying uncertainty, but they also require little training data to become functional.
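For readers curious what “derivative-informed” can look like in code, below is a minimal sketch of the idea only, not the authors’ implementation: a surrogate network is trained to match both a solver’s outputs and its Jacobian-vector products, obtained here via automatic differentiation. All data and names are random placeholders.

```python
import torch
import torch.nn as nn

# Sketch of derivative-informed training: the surrogate maps a discretized
# parameter field m to a solution field u, and the loss penalizes error in
# both the outputs and the Jacobian-vector products du/dm in direction v.
surrogate = nn.Sequential(nn.Linear(64, 128), nn.Tanh(), nn.Linear(128, 64))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

m = torch.randn(32, 64)          # hypothetical parameter samples
v = torch.randn(32, 64)          # directions for derivative matching
u_true = torch.randn(32, 64)     # placeholder high-fidelity solutions
jvp_true = torch.randn(32, 64)   # placeholder solver derivatives (JVPs)

for _ in range(100):
    opt.zero_grad()
    # jvp returns the surrogate's output and its Jacobian-vector product.
    u_pred, jvp_pred = torch.autograd.functional.jvp(
        surrogate, m, v, create_graph=True)
    loss = (nn.functional.mse_loss(u_pred, u_true)
            + nn.functional.mse_loss(jvp_pred, jvp_true))
    loss.backward()
    opt.step()
```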
“Recent advances in AI tools have become critical in enhancing societal resilience and quality, particularly through their scientific uses in environmental, climatic, material, and energy domains,” Chen said.
“These tools are instrumental in driving innovation and efficiency in these and many other vital sectors.”
[Related: Machine Learning Key to Proposed App that Could Help Flood-prone Communities]
One challenge in studying complex systems is that many simulations are needed to generate enough data to learn from and make better predictions. But each simulation is expensive to run, making new data costly to produce.
At the workshop, Pestourie presented his physics-enhanced deep surrogates (PEDS) as a solution to this optimization problem.
PEDS employs scientific AI to make efficient use of available data while demanding fewer computational resources. In tests, PEDS proved up to three times more accurate than models using neural networks alone while needing at least 100 times less training data.
PEDS yielded these results on diffusion, reaction-diffusion, and electromagnetic scattering models. It performed well in these physics-based experiments because it combines a physics simulator with a neural network generator.
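The combination Pestourie describes can be sketched in a few lines: a neural network generates a coarse input that is fed through a cheap, differentiable, low-fidelity solver, and training backpropagates through both. The toy “solver” and every name below are illustrative stand-ins, not the published PEDS code.

```python
import torch
import torch.nn as nn

def lowfi_solver(coarse_field):
    # Stand-in for a cheap, differentiable low-fidelity physics solver
    # (e.g., a coarse-grid solve). Here: a fixed smoothing operation.
    kernel = torch.ones(1, 1, 3) / 3.0
    return torch.conv1d(coarse_field.unsqueeze(1), kernel,
                        padding=1).squeeze(1)

# The generator maps the fine-scale problem description to a coarse input
# whose low-fidelity solution matches high-fidelity data.
generator = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))

def peds_predict(fine_input):
    return lowfi_solver(generator(fine_input))

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
x = torch.randn(16, 64)   # hypothetical fine-scale inputs
y = torch.randn(16, 64)   # placeholder high-fidelity targets

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(peds_predict(x), y)
    loss.backward()        # gradients flow through the solver
    opt.step()
```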
“Scientific AI makes it possible to systematically leverage models and data simultaneously,” Pestourie said. “The more adoption of scientific AI there will be by domain scientists, the more knowledge will be created for society.”
[Related: Technique Could Efficiently Solve Partial Differential Equations for Numerous Applications]
Study and development of AI applications at these scales require use of the most powerful computers available. The workshop invited speakers from national laboratories who showcased supercomputing capabilities available at their facilities. These included Oak Ridge National Laboratory, Sandia National Laboratories, and Pacific Northwest National Laboratory.
The workshop hosted Georgia Tech faculty representing the Colleges of Computing, Design, Engineering, and Sciences. Among them were workshop co-organizers Yan Wang and Ebenezer Fanijo. Wang is a professor in the George W. Woodruff School of Mechanical Engineering and Fanijo is an assistant professor in the School of Building Construction.
The workshop welcomed academics outside of Georgia Tech to share research occurring at their institutions. These speakers hailed from Emory University, Clemson University, and the University of California, Berkeley.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Dec. 20, 2023
A new machine learning method could help engineers detect leaks in underground reservoirs earlier, mitigating risks associated with geological carbon storage (GCS). Further study could advance machine learning capabilities while improving safety and efficiency of GCS.
The feasibility study by Georgia Tech researchers explores using conditional normalizing flows (CNFs) to convert seismic data points into usable information and observable images. This potential ability could make monitoring underground storage sites more practical and studying the behavior of carbon dioxide plumes easier.
The 2023 Conference on Neural Information Processing Systems (NeurIPS 2023) accepted the group’s paper for presentation. They presented their study on Dec. 16 at the conference’s workshop on Tackling Climate Change with Machine Learning.
“One area where our group excels is that we care about realism in our simulations,” said Professor Felix Herrmann. “We worked on a real-sized setting with the complexities one would experience when working in real-life scenarios to understand the dynamics of carbon dioxide plumes.”
CNFs are generative models that use data to produce images. They can also fill in the blanks, making predictions that complete an image despite missing or noisy data. This functionality is well suited to the application because data streaming from GCS reservoirs is often noisy: incomplete, outdated, or unstructured.
In 36 test samples, the group found that CNFs could infer scenarios with and without leakage from seismic data. In simulations with leakage, the models generated images that were 96% similar to the ground truth; in cases without leakage, the images were 97% similar.
The CNF-based method also improves on current techniques, which struggle to provide accurate information about the spatial extent of leakage. Conditioning CNFs on samples that change over time allows them to describe and predict the behavior of carbon dioxide plumes.
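To make the conditioning idea concrete, here is a minimal sketch of one conditional affine coupling layer, a standard building block of such flows: the transformation applied to half of the image depends on the other half and on a conditioning vector. Real systems stack many layers with richer conditioning networks; the data below are random placeholders, not seismic data.

```python
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    # One affine coupling layer: the scale/shift applied to half the image
    # depends on the other half AND on a conditioning vector (e.g., a
    # summary of seismic observations).
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim))            # outputs scale and shift

    def forward(self, x, cond):                # image -> latent
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(torch.cat([x1, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                      # bound scales for stability
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=-1), s.sum(dim=-1)

    def inverse(self, z, cond):                # latent -> image (sampling)
        z1, z2 = z.chunk(2, dim=-1)
        s, t = self.net(torch.cat([z1, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], dim=-1)

flow = ConditionalCoupling(dim=256, cond_dim=32)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
images = torch.randn(8, 256)   # placeholder plume images, flattened
seismic = torch.randn(8, 32)   # placeholder seismic summaries

for _ in range(100):           # maximize conditional log-likelihood
    z, log_det = flow(images, seismic)
    loss = (0.5 * z.pow(2).sum(dim=-1) - log_det).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: draw latent noise, invert the flow given new seismic data.
sample = flow.inverse(torch.randn(1, 256), seismic[:1])
```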
This study is part of the group’s broader effort to produce digital twins for seismic monitoring of underground storage. A digital twin is a virtual model of a physical object. Digital twins are commonplace in manufacturing, healthcare, environmental monitoring, and other industries.
“There are very few digital twins in earth sciences, especially based on machine learning,” Herrmann explained. “This paper is just a prelude to building an uncertainty-aware digital twin for geological carbon storage.”
Herrmann holds joint appointments in the Schools of Earth and Atmospheric Sciences (EAS), Electrical and Computer Engineering, and Computational Science and Engineering (CSE).
School of EAS Ph.D. student Abhinov Prakash Gahlot is the paper’s first author. Ting-Ying (Rosen) Yu (B.S. ECE 2023) started the research as an undergraduate group member. School of CSE Ph.D. students Huseyin Tuna Erdinc, Rafael Orozco, and Ziyi (Francis) Yin co-authored with Gahlot and Herrmann.
NeurIPS 2023 took place Dec. 10-16 in New Orleans. Occurring annually, it is one of the largest conferences in the world dedicated to machine learning.
Over 130 Georgia Tech researchers presented more than 60 papers and posters at NeurIPS 2023. One-third of CSE’s faculty represented the School at the conference. Along with Herrmann, these faculty included Ümit Çatalyürek, Polo Chau, Bo Dai, Srijan Kumar, Yunan Luo, Anqi Wu, and Chao Zhang.
“In the field of geophysics, inverse problems and statistical solutions of these problems are known, but no one has been able to characterize these statistics in a realistic way,” Herrmann said.
“That’s where these machine learning techniques come into play, and we can do things now that you could never do before.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Nov. 29, 2023
The National Institutes of Health (NIH) has awarded Yunan Luo a grant of more than $1.8 million to use artificial intelligence (AI) to advance protein research.
New AI models produced through the grant will lead to new methods for the design and discovery of functional proteins. This could yield novel drugs and vaccines, personalized treatments against diseases, and other advances in biomedicine.
“This project provides a new paradigm to analyze proteins’ sequence-structure-function relationships using machine learning approaches,” said Luo, an assistant professor in Georgia Tech’s School of Computational Science and Engineering (CSE).
“We will develop new, ready-to-use computational models for domain scientists, like biologists and chemists. They can use our machine learning tools to guide scientific discovery in their research.”
Luo’s proposal builds on datasets generated by AlphaFold and other recent breakthroughs. His AI algorithms would integrate these datasets and craft new models for practical application.
One of Luo’s goals is to develop machine learning methods that learn statistical representations from the data. These representations reveal the relationships between a protein’s sequence, structure, and function, letting scientists characterize how sequence and structure determine what a protein does.
Next, Luo wants to make accurate and interpretable predictions about protein functions. His plan is to create biology-informed deep learning frameworks. These frameworks could predict a protein’s function from knowledge of its sequence and structure, while also accounting for variables like mutations.
In the end, Luo would have the data and tools to assist in the discovery of functional proteins. He will use these to build a computational platform of AI models, algorithms, and frameworks that ‘invent’ proteins. The platform determines the sequence and structure necessary to achieve a designed protein’s desired functions and characteristics.
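As a deliberately toy illustration of the sequence-to-function piece of such a pipeline, the sketch below integer-encodes a protein sequence and regresses a functional score. The encoding, architecture, and sequences are hypothetical and are not Luo’s models.

```python
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues
aa_index = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode(seq, max_len=128):
    # Integer-encode a protein sequence, padded/truncated to max_len.
    idx = [aa_index[a] for a in seq[:max_len]]
    return torch.tensor(idx + [20] * (max_len - len(idx)))  # 20 = pad token

class FunctionPredictor(nn.Module):
    # Toy sequence -> function model: embed residues, pool, regress a
    # functional score (e.g., activity). Real models would also ingest
    # structure and use far richer architectures.
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(21, 32, padding_idx=20)
        self.head = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, tokens):
        return self.head(self.embed(tokens).mean(dim=1)).squeeze(-1)

model = FunctionPredictor()
batch = torch.stack([encode("MKTAYIAKQR"), encode("MSILVTRPSP")])
scores = model(batch)   # one predicted functional score per sequence
```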
“My students play a very important part in this research because they are the driving force behind various aspects of this project at the intersection of computational science and protein biology,” Luo said.
“I think this project provides a unique opportunity to train our students in CSE to learn the real-world challenges facing scientific and engineering problems, and how to integrate computational methods to solve those problems.”
The $1.8 million grant is funded through the Maximizing Investigators’ Research Award (MIRA). The National Institute of General Medical Sciences (NIGMS) manages the MIRA program. NIGMS is one of 27 institutes and centers under NIH.
MIRA is oriented toward launching the research endeavors of young career faculty. The grant provides researchers with more stability and flexibility through five years of funding. This enhances scientific productivity and improves the chances for important breakthroughs.
Luo is the second School of CSE faculty member to receive a MIRA grant. NIH awarded the grant to Xiuwei Zhang in 2021. Zhang is the J.Z. Liang Early-Career Assistant Professor in the School of CSE.
[Related: Award-winning Computer Models Propel Research in Cellular Differentiation]
“After NIH, of course, I first thanked my students because they laid the groundwork for what we seek to achieve in our grant proposal,” said Luo.
“I would like to thank my colleague, Xiuwei Zhang, for her mentorship in preparing the proposal. I also thank our school chair, Haesun Park, for her help and support while starting my career.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Nov. 14, 2023
Georgia Tech researchers have created a machine learning (ML) visualization tool that must be seen to be believed.
Ph.D. student Alec Helbling is the creator of ManimML, a tool that renders common ML concepts into animation. This development will enable new ML technologies by allowing designers to see and share their work in action.
Helbling presented ManimML at IEEE VIS, the world’s highest-rated conference for visualization research and the second-highest rated for computer graphics. The tool drew so much praise at the conference that it won the venue’s prize for best poster.
“I was quite surprised and honored to have received this award,” said Helbling, who is advised by School of Computational Science and Engineering Associate Professor Polo Chau.
“I didn't start ManimML with the intention of it becoming a research project, but because I felt like a tool for communicating ML architectures through animation needed to exist.”
ManimML uses animation to show ML developers how their algorithms work. Not only does the tool allow designers to watch their projects come to life, but they can also explain existing and new ML techniques to broad audiences, including non-experts.
ManimML is an extension of the Manim Community library, a Python tool for animating mathematical concepts. ManimML connects to the library to offer a new capability that animates ML algorithms and architectures.
Helbling chose familiar platforms like Python and Manim to make the tool accessible to users of all skill and experience levels. Enthusiasts and experts alike can find practical uses for ManimML, given today’s widespread interest in and application of ML.
“We know that animation is an effective means of instruction and learning,” Helbling said. “ManimML offers that ability for ML practitioners to easily communicate how their systems work, improving public trust and awareness of machine learning.”
ManimML overcomes a longstanding obstacle to visualizing ML algorithms: existing techniques require developers to create custom animations for each algorithm, often demanding specialized software and expertise.
ManimML streamlines this by producing animations of common ML architectures coded in Python, like neural networks.
A user only needs to specify a sequence of neural network layers and their respective hyperparameters. ManimML then constructs an animation of the entire network.
“To use ManimML, you simply need to specify an ML architecture in code, using a syntax familiar to most ML professionals,” Helbling said. “Then it will automatically generate an animation that communicates how the system works.”
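A short example in that spirit, adapted from the project’s public README (layer classes and method names follow the README; exact signatures may vary between versions):

```python
from manim import Scene
from manim_ml.neural_network import FeedForwardLayer, NeuralNetwork

class FeedForwardScene(Scene):
    def construct(self):
        # Specify the architecture as a sequence of layers...
        nn = NeuralNetwork([
            FeedForwardLayer(num_nodes=3),
            FeedForwardLayer(num_nodes=5),
            FeedForwardLayer(num_nodes=3),
        ])
        self.add(nn)
        # ...and ManimML generates the forward-pass animation.
        self.play(nn.make_forward_pass_animation())
```

Rendering the scene with Manim’s command-line tool (e.g., `manim -pql scene.py FeedForwardScene`) produces the animation.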
ManimML ranked as the best poster in a field of 49 presentations. IEEE VIS 2023 took place Oct. 22-27 in Melbourne, Australia, marking the first time IEEE held the conference in the Southern Hemisphere.
ManimML has been downloaded more than 23,000 times, and a demonstration on social media has drawn hundreds of thousands of views.
ManimML is open source and available at: https://github.com/helblazer811/ManimML
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu