May. 02, 2024
Quantum sensors detect the smallest of environmental changes — for example, an atom reacting to a magnetic field. As these sensors “read” the unique behaviors of subatomic particles, they also dramatically improve scientists’ ability to measure and detect changes in our wider environment.
Monitoring these tiny changes enables a wide range of applications — from improved navigation and natural disaster forecasting to smarter medical imaging, detection of disease biomarkers, gravitational wave detection, and even better quantum communication for secure data sharing.
Georgia Tech physicists are pioneering new quantum sensing platforms to aid in these efforts. The research team’s latest study, “Sensing Spin Wave Excitations by Spin Defects in Few-Layer Thick Hexagonal Boron Nitride,” was published in Science Advances this week.
The research team includes School of Physics Assistant Professors Chunhui (Rita) Du and Hailong Wang (corresponding authors) alongside fellow Georgia Tech researchers Jingcheng Zhou, Mengqi Huang, Faris Al-matouq, Jiu Chang, Dziga Djugba, and Professor Zhigang Jiang and their collaborators.
An ultra-sensitive platform
The new research investigates quantum sensing by leveraging color centers — small defects within crystals (Du’s team uses diamonds and other 2D layered materials) that allow light to be absorbed and emitted, which also give the crystal unique electronic properties.
By embedding these color centers into a material called hexagonal boron nitride (hBN), the team hoped to create an extremely sensitive quantum sensor — a new resource for developing next-generation, transformative sensing devices.
For its part, hBN is particularly attractive for quantum sensing and computing because it could contain defects that can be manipulated with light — also known as "optically active spin qubits."
The quantum spin defects in hBN are also very magnetically sensitive, and allow scientists to “see” or “sense” in more detail than other conventional techniques. In addition, the sheet-like structure of hBN is compatible with ultra-sensitive tools like nanodevices, making it a particularly intriguing resource for investigation.
The team’s research has resulted in a critical breakthrough in sensing spin waves, Du says, explaining that “in this study, we were able to detect spin excitations that were simply unattainable in previous studies.”
Detecting spin waves is a fundamental component of quantum sensing, because these phenomena can travel for long distances, making them an ideal candidate for energy-efficient information control, communication, and processing.
The future of quantum
“For the first time, we experimentally demonstrated two-dimensional van der Waals quantum sensing — using few-layer thick hBN in a real-world environment,” Du explains, underscoring the potential the material holds for precise quantum sensing. “Further research could make it possible to sense electromagnetic features at the atomic scale using color centers in thin layers of hBN.”
Du also emphasizes the collaborative nature of the research, highlighting the diverse skill sets and resources of researchers within Georgia Tech.
“Within the School of Physics, Professor Zhigang Jiang's research group provided the team with high-quality hBN crystals. Jingcheng Zhou, who is a member of both Professor Hailong Wang’s and my research teams, performed the cutting-edge quantum sensing measurements,” she says. “Many incredible students also helped with this project.”
Du is a leading scientist in the field of quantum sensing — this year, she received a new grant from the U.S. Department of Energy, along with a Sloan Research Fellowship for her pioneering work on developing state-of-the-art quantum sensing techniques for quantum information technology applications. The prestigious Sloan award recognizes researchers whose “creativity, innovation, and research accomplishments make them stand out as the next-generation of leaders in the fields.”
This work is supported by the U.S. National Science Foundation (NSF) under award No. DMR-2342569, the Air Force Office of Scientific Research under award No. FA9550-20-1-0319 and its Young Investigator Program under award No. FA9550-21-1-0125, the Office of Naval Research (ONR) under grant No. N00014-23-1-2146, NASA-REVEALS SSERVI (CAN No. NNA17BF68A), and NASA-CLEVER SSERVI (CAN No. 80NSSC23M0229).
News Contact
Written by Selena Langner
Contact: Jess Hunt-Ralston
Director of Communications
College of Sciences at Georgia Tech
Apr. 19, 2024
When U.S. Rep. Earl L. “Buddy” Carter from Georgia’s 1st District visited Atlanta recently, one of his top priorities was meeting with the experts at Georgia Tech’s 20,000-square-foot Advanced Manufacturing Pilot Facility (AMPF).
Carter was recently named the House Energy and Commerce Committee’s chair of the Environment, Manufacturing, and Critical Materials Subcommittee, a group that concerns itself primarily with soil, air, water, and noise contamination, as well as emergency environmental response, whether physical or cyber.
Because AMPF’s focus dovetails with subcommittee interests, the facility was a fitting stop for Carter, who was welcomed for an afternoon tour and series of live demonstrations. Programs within Georgia Tech’s Enterprise Innovation Institute — specifically the Georgia Artificial Intelligence in Manufacturing (Georgia AIM) and Georgia Manufacturing Extension Partnership (GaMEP) — were well represented.
“Innovation is extremely important,” Carter said during his April 1 visit. “In order to handle some of our problems, we’ve got to have adaptation, mitigation, and innovation. I’ve always said that the greatest innovators, the greatest scientists in the world, are right here in the United States. I’m so proud of Georgia Tech and what they do for our state and for our nation.”
Carter’s AMPF visit began with an introduction by Thomas Kurfess, Regents' Professor and HUSCO/Ramirez Distinguished Chair in Fluid Power and Motion Control in the George W. Woodruff School of Mechanical Engineering and executive director of the Georgia Tech Manufacturing Institute; Steven Ferguson, principal research scientist and managing director at Georgia AIM; research engineer Kyle Saleeby; and Donna Ennis, the Enterprise Innovation Institute’s director of community engagement and program development, and co-director of Georgia AIM.
Ennis provided an overview of Georgia AIM, while Ferguson spoke on the Manufacturing 4.0 Consortium and Kurfess detailed the AMPF origin story, before introducing four live demonstrations.
The first of these featured Chuck Easley, Professor of the Practice in the Scheller College of Business, who elaborated on supply chain issues. Afterward, Alan Burl of EPICS: Enhanced Preparation for Intelligent Cybermanufacturing Systems and mechanical engineer Melissa Foley led a brief information session on hybrid turbine blade repair.
Finally, GaMEP project manager Michael Barker expounded on GaMEP’s cybersecurity services, and Deryk Stoops of Central Georgia Technical College detailed the Georgia AIM-sponsored AI robotics training program at the Georgia Veterans Education Career Transition Resource (VECTR) Center, which offers training and assistance to those making the transition from military to civilian life.
The topic of artificial intelligence, in all its subtlety and nuance, was of particular interest to Carter.
“AI is the buzz in Washington, D.C.,” he said. “Whether it be healthcare, energy, [or] science, we on the Energy and Commerce Committee look at it from a sense [that there’s] a very delicate balance, and we understand the responsibility. But we want to try to benefit from this as much as we can.”
“I heard something today I haven’t heard before," Carter continued, "and that is instead of calling it artificial intelligence, we refer to it as ‘augmented intelligence.’ I think that’s a great term, and certainly something I’m going to take back to Washington with me.”
“It was a pleasure to host Rep. Carter for a firsthand look at AMPF," shared Ennis, "which is uniquely positioned to offer businesses the opportunity to collaborate with Georgia Tech researchers and students and to hear about Georgia AIM.
“At Georgia AIM, we’re committed to making the state a leader in artificial intelligence-assisted manufacturing, and we’re grateful for Congressman Carter’s interest and support of our efforts."
News Contact
Eve Tolpa
Senior Writer/Editor
Enterprise Innovation Institute (EI2)
Apr. 19, 2024
Metz, France
Three-dimensional (3D) hetero-integration technology is set to transform the field of electronics. Vertically stacking functional layers creates novel 2D-3D circuit architectures with high integration density and unprecedented multifunctionality.
Three researchers at Georgia Tech-CNRS IRL 2958, a joint international research laboratory based at Georgia Tech-Europe in Metz, France, were among a team that demonstrated cutting-edge 2D/single-crystalline 3D/2D (2D/C-3D/2D) integration, using a precise layer-splitting technique to overcome drawbacks in the use of ferroelectric materials in electrostatic capacitors.
Abdallah Ougazzaden, professor of Electrical and Computer Engineering at Georgia Tech and president of Georgia Tech-Europe; Phuong Vuong, Georgia Tech-CNRS IRL 2958 researcher; and Suresh Sundaram, adjunct faculty in Georgia Tech’s School of Electrical and Computer Engineering, are co-authors of an April 19, 2024 research article in the journal Science entitled “High energy density in artificial heterostructures through relaxation time modulation.”
Ferroelectric materials used in electrostatic capacitors have unique advantages such as maximum polarization due to their higher electric susceptibilities related to dielectric constants, or permittivity, a measure of a material’s ability to store electrical energy. However, their high remnant polarization, the amount of polarization that remains in the material after the electric field is removed, limits how well they can store and release energy during the discharging process.
In the Science article, researchers demonstrated an innovative approach that sandwiches a single crystalline BTO (C-BTO) layer with 2D materials in the form of a freestanding membrane and effectively suppresses the remnant polarization of ferroelectric materials while maintaining the maximum polarization.
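The trade-off at stake can be made concrete with a toy calculation (illustrative numbers only, not data from the Science paper): the energy density a capacitor releases on discharge is the integral of E dP along the discharge branch of the polarization loop, from the maximum polarization Pmax down to the remnant polarization Pr, so a large Pr shrinks the usable integral.

```python
import numpy as np

# Illustrative sketch with made-up numbers (not data from the paper).
# The recoverable energy density is U = ∫ E dP over the discharge branch.
# A large remnant polarization Pr leaves polarization "stuck" in the
# material, shrinking the usable integral, which is why suppressing Pr
# while preserving Pmax improves energy storage.

def recoverable_energy_density(e_field, polarization):
    """Trapezoid-rule estimate of ∫ E dP over a discharge branch."""
    e_field = np.asarray(e_field)
    dP = np.diff(np.asarray(polarization))
    e_mid = 0.5 * (e_field[:-1] + e_field[1:])
    return float(np.sum(e_mid * dP))

# Hypothetical linear discharge branches (arbitrary units):
E = np.linspace(0.0, 1.0, 101)       # applied field during discharge
P_low_pr = 0.1 + 0.9 * E             # small remnant polarization, Pr = 0.1
P_high_pr = 0.6 + 0.4 * E            # large remnant polarization, Pr = 0.6

u_low = recoverable_energy_density(E, P_low_pr)    # 0.45
u_high = recoverable_energy_density(E, P_high_pr)  # 0.20
assert u_low > u_high  # lower Pr releases more energy per cycle
```

Real ferroelectric loops are nonlinear and hysteretic; the linear branches here exist only to make the role of Pr visible.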
This ultra-thin vertical stacking technology was achieved using three different two-dimensional materials in combination with single-crystalline BTO (C-BTO). The hexagonal boron nitride semiconductor was developed in the International Research Laboratory at GT-Europe, while the graphene and molybdenum disulfide (MoS2) were developed at MIT.
These 2D materials are held together by a weak type of bonding called van der Waals forces, so the layers can be easily separated to create components without any chemical etching or cutting processes.
“The 2D hexagonal boron nitride (h-BN) material that we are developing at Georgia Tech-CNRS IRL 2958 on large surfaces using the MOCVD epitaxial growth technique has demonstrated its significant potential in emerging technologies across various domains such as future quantum computers, biotechnology, flexible electronics, sensors, energy, and optogenetics,” said Ougazzaden, head of the h-BN project at Georgia Tech-CNRS IRL 2958, adding, “We are currently working on some of these applications, and we hope to produce even more results and demonstrate new achievements.”
Electrostatic capacitors, with their ability to store and release electrical energy quickly, find a wide range of applications across various fields of electronics and electrical engineering, including energy storage and power conditioning.
In a similar collaboration, the same research team from Georgia Tech-CNRS IRL 2958 published a paper in December 2023, showing the first demonstration of the monolithic 3D integration of an artificial intelligence (AI) processor using two-dimensional (2D) materials.
This innovative integration approach combined six layers of transistor and memristor networks into a 3D nano-system. By stacking nanoscale materials made from 2D materials using bottom-up technology, the team created a fully integrated AI system.
The monolithic 3D method significantly improved processing efficiency by reducing time, voltage drops, latency, and footprint. In addition to offering a solution for electronic hetero-integration with 2D materials, the approach broke new ground for advanced multifunctional processors and systems for AI applications and complex computing.
The team’s results on vertical hetero-integration were published in the scientific journal Nature Materials under the title “Monolithic 3D Integration of 2D Materials-Based Electronics towards Ultimate Edge Computing Solutions.”
The researchers who contributed to the Science article discovered that when ferroelectric materials are combined in special structures (like 2D/C-3D/2D layers), it affects how much leftover charge a capacitor has and how well it can store energy. These insights will advance designs of high-energy capacitors using these materials. In the future, this could lead to more efficient and powerful energy storage systems.
Apr. 16, 2024
There is an expectation that implementing new and emerging Generative AI (GenAI) tools enhances the effectiveness and competitiveness of organizations. This belief is evidenced by current and planned investments in GenAI tools, especially by firms in knowledge-intensive industries such as finance, healthcare, and entertainment. According to forecasts, enterprise spending on GenAI will double in 2024 and grow to $151.1 billion by 2027.
However, the path to realizing return on these investments remains somewhat ambiguous. While there is a history of efficiency and productivity gains from using computers to automate large-scale routine and structured tasks across various industries, knowledge and professional jobs have largely resisted automation. This stems from the nature of knowledge work, which often involves tasks that are unstructured and ill-defined. The specific input information, desired outputs, and/or the processes of converting inputs to outputs in such tasks are not known a priori, which consequently has limited computer applications in core knowledge tasks.
GenAI tools are changing the business landscape by expanding the range of tasks that can be performed and supported by computers, including idea generation, software development, and creative writing and content production. With their advanced human-like generative abilities, GenAI tools have the potential to significantly enhance the productivity and creativity of knowledge workers. However, the question of how to integrate GenAI into knowledge work to successfully harness these advantages remains a challenge. Dictating the parameters for GenAI usage via a top-down approach, such as through formal job designs or redesigns, is difficult, as it has been observed that individuals tend to adopt new digital tools in ways that are not fully predictable. This unpredictability is especially pertinent to the use of GenAI in supporting knowledge work for the following reasons.
Continue reading: How Different Fields Are Using GenAI to Redefine Roles
Reprinted from the Harvard Business Review, March 25, 2024
Maryam Alavi is the Elizabeth D. & Thomas M. Holder Chair & Professor of IT Management, Scheller College of Business, Georgia Institute of Technology.
News Contact
Lorrie Burroughs
Mar. 14, 2024
Schmidt Sciences has selected Kai Wang as one of 19 researchers to receive this year’s AI2050 Early Career Fellowship. In doing so, Wang becomes the first AI2050 fellow to represent Georgia Tech.
“I am excited about this fellowship because there are so many people at Georgia Tech using AI to create social impact,” said Wang, an assistant professor in the School of Computational Science and Engineering (CSE).
“I feel so fortunate to be part of this community and to help Georgia Tech bring more impact on society.”
AI2050 has allocated up to $5.5 million to support the cohort. Fellows receive up to $300,000 over two years and will join the Schmidt Sciences network of experts to advance their research in artificial intelligence (AI).
Wang’s AI2050 project centers on leveraging decision-focused AI to address challenges facing health and environmental sustainability. His goal is to strengthen and deploy decision-focused AI in collaboration with stakeholders to solve broad societal problems.
Wang’s decision-focused AI approach integrates machine learning with optimization to train models based on decision quality. These models borrow knowledge from decision-making processes in high-stakes domains to improve overall performance.
Part of Wang’s approach is to work closely with non-profit and non-governmental organizations. This collaboration helps Wang better understand problems at the point-of-need and gain knowledge from domain experts to custom-build AI models.
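A toy illustration of our own (not Wang's actual models) shows why decision quality can diverge from prediction accuracy, which is the gap that decision-focused training targets: two forecasts with identical squared error can lead to decisions of very different quality when costs are asymmetric.

```python
# Hypothetical newsvendor-style example: we forecast demand, then stock
# that amount. Running out (under-stocking) is assumed to cost five times
# as much as holding surplus. Symmetric prediction loss cannot see this
# asymmetry; decision-focused training optimizes the decision cost directly.

TRUE_DEMAND = 10.0

forecast_a = TRUE_DEMAND + 2.0  # overestimates by 2
forecast_b = TRUE_DEMAND - 2.0  # underestimates by 2

def prediction_loss(forecast: float) -> float:
    """Symmetric squared error, the usual prediction-focused objective."""
    return (forecast - TRUE_DEMAND) ** 2

def decision_cost(stock: float) -> float:
    """Asymmetric decision cost: lost sales hurt 5x more than surplus."""
    over = max(stock - TRUE_DEMAND, 0.0)
    under = max(TRUE_DEMAND - stock, 0.0)
    return 1.0 * over + 5.0 * under

assert prediction_loss(forecast_a) == prediction_loss(forecast_b)  # tie: 4.0
print(decision_cost(forecast_a), decision_cost(forecast_b))  # prints: 2.0 10.0
```

Under these assumed costs, the overestimating forecast is the far better decision even though both forecasts are "equally wrong" by the prediction metric.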
“It is very important to me to see my research impacting human lives and society,” Wang said. “That reinforces my interest and motivation in using AI for social impact.”
[Related: Wang, New Faculty Bolster School’s Machine Learning Expertise]
This year’s cohort is only the second in the fellowship’s history. Wang joins a class that spans four countries, six disciplines, and seventeen institutions.
AI2050 commits $125 million over five years to identify and support talented individuals seeking solutions to ensure society benefits from AI. Last year’s AI2050 inaugural class of 15 early career fellows received $4 million.
The namesake of AI2050 comes from the central motivating question that fellows answer through their projects:
It’s 2050. AI has turned out to be hugely beneficial to society. What happened? What are the most important problems we solved and the opportunities and possibilities we realized to ensure this outcome?
AI2050 encourages young researchers to pursue bold and ambitious work on difficult challenges and promising opportunities in AI. These projects involve research that is multidisciplinary, risky, and hard to fund through traditional means.
Schmidt Sciences, LLC is a 501(c)(3) nonprofit organization supported by philanthropists Eric and Wendy Schmidt. Schmidt Sciences aims to accelerate and deepen understanding of the natural world and develop solutions to real-world challenges for public benefit.
Schmidt Sciences identifies under-supported or unconventional areas of exploration and discovery with potential for high impact. Focus areas include AI and advanced computing, astrophysics and space, biosciences, climate, and cross-science.
“I am most grateful for the advice from my mentors, colleagues, and collaborators, and of course AI2050 for choosing me for this prestigious fellowship,” Wang said. “The School of CSE has given me so much support, including career advice from junior and senior level faculty.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Mar. 12, 2024
Cicadas are the soundtrack of summer, but their pee is more special than their music. Rather than sprinkling droplets, they emit jets of urine from their small frames. For years, Georgia Tech researchers have wanted to understand the cicada’s unique urination.
Saad Bhamla, an assistant professor in the School of Chemical and Biochemical Engineering, and his research group hoped for an opportunity to study a cicada’s fluid excretion. However, while cicadas are easily heard, they hide in trees, making them hard to observe. As such, seeing a cicada pee is an event. Bhamla’s team had only watched the process on YouTube.
Then, while doing fieldwork in Peru, the team got lucky: They saw numerous cicadas in a tree, peeing.
Read more about what they discovered at Georgia Tech Research News.
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu
Feb. 29, 2024
One of the hallmarks of humanity is language, but now, powerful new artificial intelligence tools also compose poetry, write songs, and have extensive conversations with human users. Tools like ChatGPT and Gemini are widely available at the tap of a button — but just how smart are these AIs?
A new multidisciplinary research effort co-led by Anna (Anya) Ivanova, assistant professor in the School of Psychology at Georgia Tech, alongside Kyle Mahowald, an assistant professor in the Department of Linguistics at the University of Texas at Austin, is working to uncover just that.
Their results could lead to innovative AIs that are more similar to the human brain than ever before — and also help neuroscientists and psychologists who are unearthing the secrets of our own minds.
The study, “Dissociating Language and Thought in Large Language Models,” is published this week in the scientific journal Trends in Cognitive Sciences. The work is already making waves in the scientific community: an earlier preprint of the paper, released in January 2023, has been cited more than 150 times by fellow researchers, and the team has continued to refine the work for this final journal publication.
“ChatGPT became available while we were finalizing the preprint,” Ivanova explains. “Over the past year, we've had an opportunity to update our arguments in light of this newer generation of models, now including ChatGPT.”
Form versus function
The study focuses on large language models (LLMs), which include AIs like ChatGPT. LLMs are text prediction models: they create writing by predicting which word comes next in a sentence — much as a cell phone keyboard or an email service like Gmail suggests the next word you might want to type. However, while this type of language learning is extremely effective at creating coherent sentences, that doesn’t necessarily signify intelligence.
Ivanova’s team argues that formal competence — creating a well-structured, grammatically correct sentence — should be differentiated from functional competence — answering the right question, communicating the correct information, or appropriately communicating. They also found that while LLMs trained on text prediction are often very good at formal skills, they still struggle with functional skills.
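As a crude sketch of what "text prediction" means here, consider a toy bigram counter of our own (nothing like a real transformer): it produces locally plausible continuations purely from co-occurrence statistics, with no representation of meaning, a caricature of formal competence without functional competence.

```python
from collections import Counter, defaultdict

# A deliberately tiny "language model": a bigram counter. Real LLMs are
# transformers trained on vast corpora, but the objective is analogous:
# pick a likely next word given what came before. The model has no notion
# of truth or reasoning, only co-occurrence counts.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word that follows `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on": locally fluent, but nothing is understood
```

Scaling this idea up to billions of parameters yields fluent, grammatical text, which is exactly why formal competence alone is such a seductive signal of intelligence.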
“We humans have the tendency to conflate language and thought,” Ivanova says. “I think that’s an important thing to keep in mind as we're trying to figure out what these models are capable of, because using that ability to be good at language, to be good at formal competence, leads many people to assume that AIs are also good at thinking — even when that's not the case.
“It’s a heuristic that we developed when interacting with other humans over thousands of years of evolution, but now in some respects, that heuristic is broken,” Ivanova explains.
The distinction between formal and functional competence is also vital in rigorously testing an AI’s capabilities, Ivanova adds. Evaluations often don’t distinguish formal and functional competence, making it difficult to assess what factors are determining a model’s success or failure. The need to develop distinct tests is one of the team’s more widely accepted findings, and one that some researchers in the field have already begun to implement.
Creating a modular system
While the human tendency to conflate functional and formal competence may have hindered understanding of LLMs in the past, our human brains could also be the key to unlocking more powerful AIs.
Leveraging the tools of cognitive neuroscience while a postdoctoral associate at Massachusetts Institute of Technology (MIT), Ivanova and her team studied brain activity in neurotypical individuals via fMRI, and used behavioral assessments of individuals with brain damage to test the causal role of brain regions in language and cognition — both conducting new research and drawing on previous studies. The team’s results showed that human brains use different regions for functional and formal competence, further supporting this distinction in AIs.
“Our research shows that in the brain, there is a language processing module and separate modules for reasoning,” Ivanova says. This modularity could also serve as a blueprint for how to develop future AIs.
“Building on insights from human brains — where the language processing system is sharply distinct from the systems that support our ability to think — we argue that the language-thought distinction is conceptually important for thinking about, evaluating, and improving large language models, especially given recent efforts to imbue these models with human-like intelligence,” says Ivanova’s former advisor and study co-author Evelina Fedorenko, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.
Developing AIs in the pattern of the human brain could help create more powerful systems — while also helping them dovetail more naturally with human users. “Generally, differences in a mechanism’s internal structure affect behavior,” Ivanova says. “Building a system that has a broad macroscopic organization similar to that of the human brain could help ensure that it might be more aligned with humans down the road.”
In the rapidly developing world of AI, these systems are ripe for experimentation. After the team’s preprint was published, OpenAI announced their intention to add plug-ins to their GPT models.
“That plug-in system is actually very similar to what we suggest,” Ivanova adds. “It takes a modularity approach where the language model can be an interface to another specialized module within a system.”
While the OpenAI plug-in system will include features like booking flights and ordering food, rather than cognitively inspired features, it demonstrates that “the approach has a lot of potential,” Ivanova says.
The future of AI — and what it can tell us about ourselves
While our own brains might be the key to unlocking better, more powerful AIs, these AIs might also help us better understand ourselves. “When researchers try to study the brain and cognition, it's often useful to have some smaller system where you can actually go in and poke around and see what's going on before you get to the immense complexity,” Ivanova explains.
However, since human language is unique, findings from animal models and other simpler systems are difficult to relate to it. That’s where LLMs come in.
“There are lots of surprising similarities between how one would approach the study of the brain and the study of an artificial neural network” like a large language model, she adds. “They are both information processing systems that have biological or artificial neurons to perform computations.”
In many ways, the human brain is still a black box, but openly available AIs offer a unique opportunity to see a synthetic system’s inner workings, modify its variables, and explore these corresponding systems like never before.
“It's a really wonderful model that we have a lot of control over,” Ivanova says. “Neural networks — they are amazing.”
Along with Anna (Anya) Ivanova, Kyle Mahowald, and Evelina Fedorenko, the research team also includes Idan Blank (University of California, Los Angeles), as well as Nancy Kanwisher and Joshua Tenenbaum (Massachusetts Institute of Technology).
DOI: https://doi.org/10.1016/j.tics.2024.01.011
Researcher Acknowledgements
For helpful conversations, we thank Jacob Andreas, Alex Warstadt, Dan Roberts, Kanishka Misra, students in the 2023 UT Austin Linguistics 393 seminar, the attendees of the Harvard LangCog journal club, the attendees of the UT Austin Department of Linguistics SynSem seminar, Gary Lupyan, John Krakauer, members of the Intel Deep Learning group, Yejin Choi and her group members, Allyson Ettinger, Nathan Schneider and his group members, the UT NLL Group, attendees of the KUIS AI Talk Series at Koç University in Istanbul, Tom McCoy, attendees of the NYU Philosophy of Deep Learning conference and his group members, Sydney Levine, organizers and attendees of the ILFC seminar, and others who have engaged with our ideas. We also thank Aalok Sathe for help with document formatting and references.
Funding sources
Anna (Anya) Ivanova was supported by funds from the Quest Initiative for Intelligence. Kyle Mahowald acknowledges funding from NSF Grant 2104995. Evelina Fedorenko was supported by NIH awards R01-DC016607, R01-DC016950, and U01-NS121471 and by research funds from the Brain and Cognitive Sciences Department, McGovern Institute for Brain Research, and the Simons Foundation through the Simons Center for the Social Brain.
News Contact
Written by Selena Langner
Editor and Press Contact:
Jess Hunt-Ralston
Director of Communications
College of Sciences
Georgia Tech
Feb. 21, 2024
Energy is everywhere, affecting everything, all the time. And it can be manipulated and converted into the kind of energy that we depend on as a civilization. But transforming this ambient energy (the result of gyrating atoms and molecules) into something we can plug into and use when we need it requires specific materials.
These energy materials — some natural, some manufactured, some a combination — facilitate the conversion or transmission of energy. They also play an essential role in how we store energy, how we reduce power consumption, and how we develop cleaner, efficient energy solutions.
“Advanced materials and clean energy technologies are tightly connected, and at Georgia Tech we’ve been making major investments in people and facilities in batteries, solar energy, and hydrogen, for several decades,” said Tim Lieuwen, the David S. Lewis Jr. Chair and professor of aerospace engineering, and executive director of Georgia Tech’s Strategic Energy Institute (SEI).
That research synergy is the underpinning of Georgia Tech Energy Materials Day (March 27), a gathering of people from academia, government, and industry, co-hosted by SEI, the Institute for Materials (IMat), and the Georgia Tech Advanced Battery Center. This event aims to build on the momentum created by Georgia Tech Battery Day, held in March 2023, which drew more than 230 energy researchers and industry representatives.
“We thought it would be a good idea to expand on the Battery Day idea and showcase a wide range of research and expertise in other areas, such as solar energy and clean fuels, in addition to what we’re doing in batteries and energy storage,” said Matt McDowell, associate professor in the George W. Woodruff School of Mechanical Engineering and the School of Materials Science and Engineering (MSE), and co-director, with Gleb Yushin, of the Advanced Battery Center.
Energy Materials Day will bring together experts from academia, government, and industry to discuss and accelerate research in three key areas: battery materials and technologies, photovoltaics and the grid, and materials for carbon-neutral fuel production, “all of which are crucial for driving the clean energy transition,” noted Eric Vogel, executive director of IMat and the Hightower Professor of Materials Science and Engineering.
“Georgia Tech is leading the charge in research in these three areas,” he said. “And we’re excited to unite so many experts to spark the important discussions that will help us advance our nation’s path to net-zero emissions.”
Building an Energy Hub
Energy Materials Day is part of an ongoing, long-range effort to position Georgia Tech, and Georgia, as a go-to location for modern energy companies. So far, the message seems to be landing. Georgia has had more than $28 billion invested or announced in electric vehicle-related projects since 2020. And Georgia Tech was recently ranked by U.S. News & World Report as the top public university for energy research.
Georgia has also become a major player in solar energy, with last year's announcement of a $2.5 billion plant being developed by Korean solar company Hanwha Qcells, taking advantage of President Biden's climate policies. Qcells' global chief technology officer, Danielle Merfeld, a member of SEI's External Advisory Board, will be the keynote speaker for Energy Materials Day.
“Growing these industry relationships, building trust through collaborations with industry — these have been strong motivations in our efforts to create a hub here in Atlanta,” said Yushin, professor in MSE and co-founder of Sila Nanotechnologies, a battery materials startup valued at more than $3 billion.
McDowell and Yushin are leading the battery initiative for Energy Materials Day, and they'll be among 12 experts presenting on battery materials and technologies, including six from Georgia Tech and four from industry. In addition to the formal sessions and presentations, there will be an opportunity for networking.
“I think Georgia Tech has a responsibility to help grow a manufacturing ecosystem,” McDowell said. “We have the research and educational experience and expertise that companies need, and we’re working to coordinate our efforts with industry.”
Marta Hatzell, associate professor of mechanical engineering and chemical and biomolecular engineering, is leading the carbon-neutral fuel production portion of the event, while Juan-Pablo Correa-Baena, assistant professor in MSE, is leading the photovoltaics initiative.
They’ll be joined by a host of experts from Georgia Tech and institutes across the country, “some of the top thought leaders in their fields,” said Correa-Baena, whose lab has spent years optimizing a semiconductor material for solar energy conversion.
“Over the past decade, we have been working to achieve high efficiencies in solar panels based on a new, low-cost material called halide perovskites,” he said. His lab recently discovered how to prevent the chemical interactions that can degrade it. “It’s kind of a miracle material, and we want to increase its lifespan, make it more robust and commercially relevant.”
While Correa-Baena is working to revolutionize solar energy, Hatzell’s lab is designing materials to clean up the manufacturing of clean fuels.
“We’re interested in decarbonizing the industrial sector through the production of carbon-neutral fuels,” said Hatzell, whose lab is designing new materials to make clean ammonia and hydrogen without using fossil fuels as the feedstock. Both fuels have the potential to play a major role in a carbon-free energy system. “We’re also working on a collaborative project focused on assessing the economics of clean ammonia on a larger, global scale.”
The hope for Energy Materials Day is that further collaborations will be fostered as industry’s needs and the research enterprise converge in one place — Georgia Tech’s Exhibition Hall — over one day. The event is part of what Yushin called “the snowball effect.”
“You attract a new company to the region, and then another,” he said. “If we want to boost domestic production and supply chains, we must roll like a snowball gathering momentum. Education is a significant part of that effect. To build this new technology and new facilities for a new industry, you need trained, talented engineers. And we’ve got plenty of those. Georgia Tech can become the single point of contact, helping companies solve the technical challenges in a new age of clean energy.”
Feb. 15, 2024
Artificial intelligence is beginning to show the capability to improve both financial reporting and auditing. However, companies and audit firms will realize the benefits of AI only if their people are open to the information the technology generates. A new study forthcoming in Review of Accounting Studies seeks to understand how financial executives perceive and respond to the use of AI in both financial reporting and auditing.
In “How do Financial Executives Respond to the Use of Artificial Intelligence in Financial Reporting and Auditing?,” researchers surveyed financial executives (e.g., CFOs, controllers) to assess their perceptions of AI use in their companies’ financial reporting process, as well as the use of AI by their financial statement auditor. The study is authored by Nikki MacKenzie of the Georgia Tech Scheller College of Business, Cassandra Estep from Emory University, and Emily Griffith of the University of Wisconsin.
“We were curious about how financial executives would respond to AI-generated information as we often hear how the financial statements are a joint product of the company and their auditors. While we find that financial executives are rightfully cautious about the use of AI, we do not find that they are averse to its use as has been previously reported. In fact, a number of our survey respondents were excited about AI and see the significant benefits for their companies’ financial reporting process,” says MacKenzie.
Continue reading: The Use of AI by Financial Executives and Their Auditors
Reprinted from Forbes
News Contact
Lorrie Burroughs, Scheller College of Business
Feb. 13, 2024
Mechanical engineering, in the broadest sense of the discipline, touches a vast array of processes and systems, encompassing familiar industries and niche startups. Rapid technology advances mean engineering skills and methods change frequently to adapt to newer materials, tools, or customer needs. At its core, however, the intersection of design and innovation drives engineering, shaping the future of products and manufacturing processes. At the forefront of this intersection is the George W. Woodruff School of Mechanical Engineering at Georgia Tech, well known for its commitment to design education and unique approach to understanding the crucial role design plays in educating future engineers.
News Contact
Ian Sargent