Nov. 22, 2024
Georgia Tech is days away from the Fall 2024 Idea to Prototype (I2P) Showcase, set to take place on Dec. 3 at 5 p.m. in the Exhibition Hall. This event offers students a platform to present solutions built over the semester to tackle real-world problems and compete for rewards, including a golden ticket into the CREATE-X summer startup accelerator, Startup Launch. The program offers optional seed funding, workspace, entrepreneurial education, and continued mentorship to help students turn their prototypes into viable startups. Over 50 teams will present their prototypes at the showcase.
The event is open to all Georgia Tech students, faculty, staff, and the local community. Tickets are available now but are limited, so register for the I2P Showcase today.
Each semester, students in the Idea-to-Prototype course dedicate time outside their regular coursework, much like undergraduate research, to build prototypes. Teams accepted into I2P receive up to $500 in reimbursement for physical expenses, course credit (undergraduate students only), and mentorship from a Georgia Tech faculty member.
During the showcase, participants and judges interact with the projects and give feedback. The criteria for judging are centered on innovation and overall market and impact potential. Judges can include industry professionals, faculty members, and alumni.
Throughout I2P Showcase history, many winning projects have gone on to achieve significant success. One is CaseDocker, which provides an end-to-end workflow management system; the startup now has a user base of over 400 global clients, including Fortune 500 companies. Other winners of the showcase include Radiochain, a blockchain-based music application; Dolfin Solutions, a personal financial management platform; and NeuroChamp, an EEG monitoring device for pediatric seizure detection.
This semester, the I2P cohort includes a digital twin using individual data and AI for health screenings and early detection, an active shooter detection and tracking tool, an AR tool that turns walls into interactive canvases, a device that detects overdosages, 3D-printed circuit boards, an AI detector for digital media, and more.
Whether you're a student with a passion for entrepreneurship, a faculty member interested in the latest student innovations, or a community member looking to support local talent, the I2P Showcase is a perfect opportunity to explore student innovations, mingle, and enjoy refreshments. Register for the I2P Showcase today and join us at the Exhibition Hall for an evening of creativity and community.
Students interested in participating in I2P can do so in the spring, summer, or fall semesters. The registration process involves providing a brief description of the project, the team members involved, and the current stage of development. The deadline for applications is Jan. 6 for Spring 2025 and May 12 for Summer 2025.
News Contact
Breanna Durham
Marketing Strategist
Nov. 21, 2024
Deven Desai and Mark Riedl have seen the signs for a while.
Two years since OpenAI introduced ChatGPT, dozens of lawsuits have been filed alleging technology companies have infringed copyright by using published works to train artificial intelligence (AI) models.
Academic AI research efforts could be significantly hindered if courts rule in the plaintiffs' favor.
Desai and Riedl are Georgia Tech researchers raising awareness about how these court rulings could force academic researchers to construct new AI models with limited training data. The two collaborated on a benchmark academic paper that examines the landscape of the ethical issues surrounding AI and copyright in industry and academic spaces.
“There are scenarios where courts may overreact to having a book corpus on your computer, and you didn’t pay for it,” Riedl said. “If you trained a model for an academic paper, as my students often do, that’s not a problem right now. The courts could deem training is not fair use. That would have huge implications for academia.
“We want academics to be free to do their research without fear of repercussions in the marketplace because they’re not competing in the marketplace,” Riedl said.
Desai is the Sue and John Stanton Professor of Business Law and Ethics at the Scheller College of Business. He researches how business interests and new technology shape privacy, intellectual property, and competition law. Riedl is a professor at the College of Computing’s School of Interactive Computing, researching human-centered AI, generative AI, explainable AI, and gaming AI.
Their paper, “Between Copyright and Computer Science: The Law and Ethics of Generative AI,” was published in the Northwestern Journal of Technology and Intellectual Property on Monday.
Desai and Riedl say they want to offer solutions that balance the interests of various stakeholders. But that requires compromise from all sides.
Researchers should accept they may have to pay for the data they use to train AI models. Content creators, on the other hand, should receive compensation, but they may need to accept less money to ensure data remains affordable for academic researchers to acquire.
Who Benefits?
The doctrine of fair use is at the center of every copyright debate. According to the U.S. Copyright Office, fair use permits the unlicensed use of copyright-protected works in certain circumstances, such as distributing information for the public good, including teaching and research.
Fair use is often challenged when one or more parties profit from published works without compensating the authors.
Any original published content, including a personal website on the internet, is protected by copyright. However, copyrighted material is republished on websites or posted on social media innumerable times every day without the consent of the original authors.
In most cases, it’s unlikely copyright violators gained financially from their infringement.
But Desai said business-to-business cases are different. The New York Times is one of many daily newspapers and media companies that have sued OpenAI for using its content as training data. Microsoft is also a defendant in The New York Times’ suit because it invested billions of dollars into OpenAI’s development of AI tools like ChatGPT.
“You can take a copyrighted photo and put it in your Twitter post or whatever you want,” Desai said. “That’s probably annoying to the owner. Economically, they probably wanted to be paid. But that’s not business to business. What’s happening with Open AI and The New York Times is business to business. That’s big money.”
OpenAI started as a nonprofit dedicated to the safe development of artificial general intelligence (AGI) — AI that, in theory, can rival human thinking and possess autonomy.
These AI models would require massive amounts of data and expensive supercomputers to process that data. OpenAI could not raise enough money to afford such resources, so it created a for-profit arm controlled by its parent nonprofit.
Desai, Riedl, and many others argue that OpenAI ceased its research mission for the public good and began developing consumer products.
“If you’re doing basic research that you’re not releasing to the world, it doesn’t matter if every so often it plagiarizes The New York Times,” Riedl said. “No one is economically benefitting from that. When they became a for-profit and produced a product, now they were making money from plagiarized text.”
OpenAI’s for-profit arm is valued at $80 billion, but content creators have not received a dime even though the company has scraped massive amounts of copyrighted material as training data.
The New York Times has posted warnings on its sites that its content cannot be used to train AI models. Many other websites publish a robots.txt file that tells bots which pages they may and may not access.
Neither measure is legally binding, however, and both are often ignored.
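A robots.txt policy of the kind described above can be read programmatically; Python's standard library includes a parser for the format. The sketch below uses hypothetical rules (the crawler names and paths are illustrative, not taken from any real site):

```python
from urllib import robotparser

# A minimal, hypothetical robots.txt: block one AI crawler, allow everyone else.
rules = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# can_fetch(user_agent, url) reports whether the policy permits the request.
print(rp.can_fetch("GPTBot", "https://example.com/article"))       # blocked
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # allowed
```

As the article notes, nothing forces a crawler to consult this file; compliance is voluntary.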
Solutions
Desai and Riedl offer a few options for companies to show good faith in rectifying the situation.
- Spend the money. Desai says OpenAI and Microsoft could have afforded to buy their training data and avoided the hassle of legal consequences.
“If you do the math on the costs to buy the books and copy them, they could have paid for them,” he said. “It would’ve been a multi-million dollar investment, but they’re a multi-billion dollar company.”
- Be selective. Models can be trained on randomly selected texts from published works, allowing the model to understand the writing style without plagiarizing.
“I don’t need the entire text of War and Peace,” Desai said. “To capture the way authors express themselves, I might only need a hundred pages. I’ve also reduced the chance that my model will cough up entire texts.”
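The selective approach Desai describes can be sketched in a few lines: sample a handful of non-contiguous excerpts from a work rather than ingesting the full text. This is a simplified illustration of the idea, not an actual training pipeline; the function name and parameters are invented for the example:

```python
import random

def sample_excerpts(text, n_excerpts=5, excerpt_words=200, seed=0):
    """Draw a few non-contiguous word windows from a work instead of
    using the full text, per the 'be selective' approach."""
    words = text.split()
    rng = random.Random(seed)
    # Pick distinct starting positions, leaving room for a full window.
    starts = sorted(rng.sample(range(max(1, len(words) - excerpt_words)),
                               n_excerpts))
    return [" ".join(words[s:s + excerpt_words]) for s in starts]

book = "word " * 10_000  # stand-in for a full text such as a novel
excerpts = sample_excerpts(book, n_excerpts=3, excerpt_words=100)
```

A model trained on such windows sees samples of an author's style while never holding the complete work, which also lowers the odds of regurgitating long verbatim passages.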
- Leverage libraries. The authors agree libraries could serve as an ideal middle ground as a place to store published works and compensate authors for access to those works, though the amount may be less than desired.
“Most of the objections you could raise are taken care of,” Desai said. “They are legitimate access copies that are secure. You get access to only as much as you need. Libraries at universities have already become schools of information.”
Desai and Riedl hope the legal action taken by publications like The New York Times will send a message to companies that develop AI tools to pump the brakes. If they don’t, researchers uninterested in profit could pay the steepest price.
The authors say it’s not a new problem but is reaching a boiling point.
“In the history of copyright, there are ways that society has dealt with the problem of compensating creators and technology that copies or reduces your ability to extract money from your creation,” Desai said. “We wanted to point out there’s a way to get there.”
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
Nov. 15, 2024
The Automatic Speech Recognition (ASR) models that power voice assistants like Amazon Alexa may have difficulty transcribing English speakers with minority dialects.
A study by Georgia Tech and Stanford researchers compared the transcribing performance of leading ASR models for people using Standard American English (SAE) and three minority dialects — African American Vernacular English (AAVE), Spanglish, and Chicano English.
Interactive Computing Ph.D. student Camille Harris is the lead author of a paper accepted into the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP) this week in Miami.
Harris recruited people who spoke each dialect and had them read from a Spotify podcast dataset, which includes podcast audio and metadata. Harris then used three ASR models — wav2vec 2.0, HuBERT, and Whisper — to transcribe the audio and compare their performance.
For each model, Harris found SAE transcription significantly outperformed each minority dialect. The models also transcribed men who spoke SAE more accurately than women who spoke SAE. Speakers of Spanglish and Chicano English had the least accurate transcriptions of the test groups.
While the models transcribed SAE-speaking women less accurately than their male counterparts, that did not hold true across minority dialects. Minority men had the most inaccurate transcriptions of all demographics in the study.
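Transcription accuracy in studies like this is typically measured with word error rate (WER): the word-level edit distance between a model's transcript and a reference transcript, divided by the reference length. The sketch below is a minimal illustration of the metric, not the paper's actual evaluation code:

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, normalized by reference length —
    the standard accuracy metric for ASR output."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits needed to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

# One deleted word ("is") and one substitution ("the" -> "da"): 2/6 errors.
print(word_error_rate("she is going to the store", "she going to da store"))
```

A higher WER means a less accurate transcription, so the disparities Harris reports correspond to systematically higher WER for minority-dialect speakers.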
“I think people would expect if women generally perform worse and minority dialects perform worse, then the combination of the two must also perform worse,” Harris said. “That’s not what we observed.
“Sometimes minority dialect women performed better than Standard American English. We found a consistent pattern that men of color, particularly Black and Latino men, could be at the highest risk for these performance errors.”
Addressing underrepresentation
Harris said the cause of that outcome starts with the training data used to build these models. Model performance reflected the underrepresentation of minority dialects in the data sets.
AAVE performed best under the Whisper model, which Harris said had the most inclusive training data of minority dialects.
Harris also looked at whether her findings mirrored existing systems of oppression. Black men have high incarceration rates and are among the groups most targeted by police. Harris said there could be a correlation between that and the low rate of Black men enrolled in universities, which leads to less representation in technology spaces.
“Minority men performing worse than minority women doesn’t necessarily mean minority men are more oppressed,” she said. “They may be less represented than minority women in computing and the professional sector that develops these AI systems.”
Harris also had to be cautious of a few variables among AAVE, including code-switching and various regional subdialects.
Harris noted in her study there were cases of code-switching to SAE. Speakers who code-switched performed better than speakers who did not.
Harris also tried to include different regional speakers.
“It’s interesting from a linguistic and history perspective if you look at migration patterns of Black folks — perhaps people moving from a southern state to a northern state over time creates different linguistic variations,” she said. “There are also generational variations in that older Black Americans may speak differently from younger folks. I think the variation was well represented in our data. We wanted to be sure to include that for robustness.”
TikTok barriers
Harris said she built her study on a paper she authored that examined user-design barriers and biases faced by Black content creators on TikTok. She presented that paper at the Association for Computing Machinery’s (ACM) 2023 Conference on Computer-Supported Cooperative Work and Social Computing.
Those content creators depended on TikTok for a significant portion of their income. When providing captions for videos grew in popularity, those creators noticed the ASR tool built into the app inaccurately transcribed them. That forced the creators to manually input their captions, while SAE speakers could use the ASR feature to their benefit.
“Minority users of these technologies will have to be more aware and keep in mind that they’ll probably have to do a lot more customization because things won’t be tailored to them,” Harris said.
Harris said there are ways that designers of ASR tools could work toward being more inclusive of minority dialects, but cultural challenges could arise.
“It could be difficult to collect more minority speech data, and you have to consider consent with that,” she said. “Developers need to be more community-engaged to think about the implications of their models and whether it’s something the community would find helpful.”
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing
Nov. 13, 2024
Members of the recently victorious cybersecurity group known as Team Atlanta received recognition from one of the top technology companies in the world for their discovery of a zero-day vulnerability in the DARPA AI Cyber Challenge (AIxCC) earlier this year.
On November 1, Google security researchers from Project Zero announced they were inspired by the Georgia Tech students and alumni on the team that discovered a flaw in SQLite, the widely used open-source database that ran the competition’s scoring algorithm.
According to a post on the project’s blog, when Google researchers saw the success of Atlantis, the LLM-based system Team Atlanta used in AIxCC, they deployed their own LLM to check for vulnerabilities in SQLite.
Google’s Big Sleep tool discovered a security flaw in SQLite, an exploitable stack buffer underflow. Project Zero reported the vulnerability and it was patched almost immediately.
“We’re thrilled to see our work on LLM-based bug discovery and remediation inspiring further advancements in security research at Google,” said Hanqing Zhao, a Georgia Tech Ph.D. student. “It’s incredibly rewarding to witness the broader community recognizing and citing our contributions to AI and LLM-driven security efforts.”
Zhao led a group within Team Atlanta focused on tracking their project’s success during the competition, leading to the bug's discovery. He also wrote a technical breakdown of their findings in a blog post cited by Google’s Project Zero.
“This achievement was entirely autonomous, without any human intervention, and we hadn’t even anticipated targeting SQLite3,” he said. “The outcome highlighted the transformative potential of generative AI in security research. Our approach is rooted in a simple yet effective philosophy: mimic the expertise of seasoned security researchers using LLMs.”
The DARPA AI Cyber Challenge (AIxCC) semi-final competition was held at DEF CON 32 in Las Vegas. Team Atlanta, which included Georgia Tech experts, was among the contest’s winners.
Team Atlanta will now compete against six other teams in the final round, which will take place at DEF CON 33 in August 2025. The finalists will use the $2 million semi-final prize to improve their AI system over the next 12 months. Team Atlanta consists of past and present Georgia Tech students and was put together with the help of SCP Professor Taesoo Kim.
The AI systems in the finals must be open-sourced and ready for immediate, real-world launch. The AIxCC final competition will award the champion a $4 million grand prize.
The team tested their cyber reasoning system (CRS), dubbed Atlantis, on software used for data management, website support, healthcare systems, supply chains, electrical grids, transportation, and other critical infrastructures.
Atlantis is a next-generation bug-finding and fixing system that can hunt bugs in multiple coding languages. The system immediately issues accurate software patches without any human intervention.
AIxCC is a Pentagon-backed initiative announced in August 2023 and will award up to $20 million in prize money throughout the competition. Team Atlanta was among the 42 teams that qualified for the semi-final competition earlier this year.
News Contact
John Popham
Communications Officer II | School of Cybersecurity and Privacy
Nov. 05, 2024
Y Combinator, known for launching over 5,000 startups including Airbnb, Coinbase, DoorDash, Dropbox, and Zapier, is coming to Georgia Tech’s campus on Tuesday, Nov. 12, at 5 p.m. in the John Lewis Student Center’s Walter G. Ehmer Theater for a panel event hosted by CREATE-X. The panel will feature Y Combinator Group Partner Brad Flora and the founders of Greptile, all Georgia Tech alumni, who will discuss their experiences with the startup accelerator.
Since tickets are limited, students are encouraged to RSVP for Y Combinator @ Georgia Tech. As a part of the event, students can apply for Office Hours With Flora, which will be held earlier in the day, by answering optional questions in the RSVP form. Y Combinator will notify selected students. The sessions enable students to discuss side projects or startups, startup idea development, finding co-founders, and monetizing products. Confirmed RSVPs are required to attend the event and office hours.
Y Combinator offers an intensive, three-month program designed to help startups succeed. It provides startups with seed funding, mentorship, and access to a network of investors, industry experts, and alumni.
In 2022, Daksh Gupta and SooHoon Choi participated in CREATE-X Startup Launch and developed Tabnam, which became Greptile after several iterations. Initially, the startup was promoted as an AI shopping assistant that scrapes the internet to tell users what people think about their product.
In 2023, after they graduated from Georgia Tech, Choi, Gupta, and Vaishant Kameswaran launched the latest version of the startup. Now the AI platform focuses on entire codebases and allows users to query via an API. Through the platform, users chat with their codebases, generate descriptions for tickets, automate PR reviews, and build custom internal tools and automations on top of the API. Over 800 software teams, including Wombo, Metamask, Warp, Exa AI, Bland, and Leya, use Greptile. In June, it raised a $4 million seed round. Greptile was part of Y Combinator’s Winter 2024 cohort.
For those inspired by Greptile’s success and interested in launching their own startup, CREATE-X is currently accepting applications for Summer 2025 Startup Launch. The priority deadline is Sunday, Nov. 17. Early applicants have a higher chance of acceptance, the opportunity for more feedback, and more opportunities to apply if one idea isn’t accepted.
Startup Launch provides mentorship, $5,000 in optional funding, and $150,000 in services to help Georgia Tech students, alumni, faculty, and researchers launch businesses over 12 weeks in the summer. Teams can be interdisciplinary, can include co-founders from outside Georgia Tech, or can be solopreneurs. CREATE-X as a whole has had more than 34,000 participants, launched 560 startups, and generated a total startup portfolio valuation exceeding $2 billion.
News Contact
Breanna Durham
Marketing Strategist
Oct. 24, 2024
The U.S. Department of Energy (DOE) has awarded Georgia Tech researchers a $4.6 million grant to develop improved cybersecurity protection for renewable energy technologies.
Associate Professor Saman Zonouz will lead the project, leveraging the latest artificial intelligence (AI) to create Phorensics. The new tool will anticipate cyberattacks on critical infrastructure and provide analysts with an accurate reading of which vulnerabilities were exploited.
“This grant enables us to tackle one of the crucial challenges facing national security today: our critical infrastructure resilience and post-incident diagnostics to restore normal operations in a timely manner,” said Zonouz.
“Together with our amazing team, we will focus on cyber-physical data recovery and post-mortem forensics analysis after cybersecurity incidents in emerging renewable energy systems.”
As the integration of renewable energy technology into national power grids increases, so does their vulnerability to cyberattacks. These threats put energy infrastructure at risk and pose a significant danger to public safety and economic stability. The AI behind Phorensics will allow analysts and technicians to scale security efforts to keep up with a growing power grid that is becoming more complex.
This effort is part of the Security of Engineering Systems (SES) initiative at Georgia Tech’s School of Cybersecurity and Privacy (SCP). SES has three pillars: research, education, and testbeds, with multiple ongoing large, sponsored efforts.
“We had a successful hiring season for SES last year and will continue filling several open tenure-track faculty positions this upcoming cycle,” said Zonouz.
“With top-notch cybersecurity and engineering schools at Georgia Tech, we have begun the SES journey with a dedicated passion to pursue building real-world solutions to protect our critical infrastructures, national security, and public safety.”
Zonouz is the director of the Cyber-Physical Systems Security Laboratory (CPSec) and is jointly appointed by Georgia Tech’s School of Cybersecurity and Privacy (SCP) and the School of Electrical and Computer Engineering (ECE).
The three Georgia Tech researchers joining him on this project are Brendan Saltaformaggio, associate professor in SCP and ECE; Taesoo Kim, jointly appointed professor in SCP and the School of Computer Science; and Animesh Chhotaray, research scientist in SCP.
Katherine Davis, associate professor in the Texas A&M University Department of Electrical and Computer Engineering, has partnered with the team to develop Phorensics. The team will also collaborate with the National Renewable Energy Laboratory (NREL) and industry partners on technology transfer and commercialization initiatives.
The Energy Department defines renewable energy as energy from unlimited, naturally replenished resources, such as the sun, tides, and wind. Renewable energy can be used for electricity generation, space and water heating and cooling, and transportation.
News Contact
John Popham
Communications Officer II
College of Computing | School of Cybersecurity and Privacy
Oct. 01, 2024
Even though artificial intelligence (AI) is not yet advanced enough to help the average person build weapons of mass destruction, federal agencies know it could become possible and are keeping pace with next-generation technologies through rigorous research and strategic partnerships.
It is a delicate balance, but as the leader of the Department of Homeland Security’s (DHS) Countering Weapons of Mass Destruction Office (CWMD) told a room full of Georgia Tech students, faculty, and staff, there is no room for error.
“You have to be right all the time, the bad guys only have to be right once,” said Mary Ellen Callahan, assistant secretary for CWMD.
As a guest of John Tien, former DHS deputy secretary and professor of practice in the School of Cybersecurity and Privacy as well as the Sam Nunn School of International Affairs, Callahan was at Georgia Tech for three separate speaking engagements in late September.
"Assistant Secretary Callahan's contributions were remarkable in so many ways,” said Tien. “Most importantly, I love how she demonstrated to our students that the work in the fields of cybersecurity, privacy, and homeland security is an honorable, interesting, and substantive way to serve the greater good of keeping the American people safe and secure. As her former colleague at the U.S. Department of Homeland Security, I was proud to see her represent her CWMD team, DHS, and the Biden-Harris Administration in the way she did, with humility, personality, and leadership."
While the thought of AI-assisted WMDs is terrifying, it is just a glimpse into what Callahan’s office handles on a regular basis. The assistant secretary walked her listeners through how CWMD works with federal and local law enforcement to identify and detect the signs of potential chemical, biological, radiological, or nuclear (CBRN) weapons.
“There's a whole cadre of professionals who spend every day preparing for the worst day in U.S. history,” said Callahan. “They are doing everything in their power to make sure that that does not happen.”
CWMD is also researching ways to implement AI technologies into current surveillance systems to help identify and respond to threats faster. For example, an AI-backed biohazard surveillance system would allow analysts to characterize and contextualize the risk of potential biohazard threats in a timely manner.
Callahan’s office spearheaded “Reducing the Risks at the Intersection of Artificial Intelligence and Chemical, Biological, Radiological, and Nuclear Threats,” a report exploring the advantages and risks of AI that was released to the public earlier this year.
The report was a multidisciplinary effort that was created in collaboration with the White House Office of Science and Technology Policy, Department of Energy, academic institutions, private industries, think tanks, and third-party evaluators.
During his introduction of the assistant secretary, SCP Chair Michael Bailey told those seated in the Coda Atrium that Callahan’s career is an incredible example of the interdisciplinary approach he hopes the school’s students and faculty can use as a roadmap.
“Important, impactful, and interdisciplinary research can be inspired by everyday problems,” he said. "We believe that building a secure future requires revolutionizing security education and being vigilant, and together, we can achieve this goal."
While on campus Tuesday, Callahan gave a special guest lecture to the students in “CS 3237: Human Dimension of Cybersecurity: People, Organizations, Societies” and “CS 4267: Critical Infrastructures.” Following the lecture, she gave a prepared speech to students, faculty, and staff.
Lastly, she participated in a panel discussion with SCP J.Z. Liang Chair Peter Swire and Jerry Perullo, SCP professor of practice and former CISO of Intercontinental Exchange as well as the New York Stock Exchange. The panel was moderated by Tien.
News Contact
John Popham, Communications Officer II
School of Cybersecurity and Privacy | Georgia Institute of Technology
scp.cc.gatech.edu | in/jp-popham on LinkedIn
Get the latest SCP updates by joining our mailing list!
Oct. 01, 2024
The Institute for Robotics and Intelligent Machines (IRIM) has launched a new initiatives program, starting with several winning proposals whose initiative leads will broaden the scope of IRIM’s research beyond its traditional core strengths. A major goal is to stimulate collaboration across areas not typically considered technical robotics, such as policy, education, and the humanities, and to open new inter-university and inter-agency collaboration routes. In addition to guiding their specific initiatives, these leads will serve as an informal internal advisory body for IRIM. Initiative leads will be announced annually, with existing leads considered for renewal based on their progress toward community-building and research goals. We hope that initiative leads will act as the “faculty face” of IRIM and communicate its vision and activities to audiences both within and outside Georgia Tech.
Meet 2024 IRIM Initiative Leads
Stephen Balakirsky; Regents' Researcher, Georgia Tech Research Institute & Panagiotis Tsiotras; David & Andrew Lewis Endowed Chair, Daniel Guggenheim School of Aerospace Engineering | Proximity Operations for Autonomous Servicing
Why It Matters: Proximity operations in space refer to the intricate and precise maneuvers and activities that spacecraft or satellites perform when they are in close proximity to each other, such as docking, rendezvous, or station-keeping. These operations are essential for a variety of space missions, including crewed spaceflights, satellite servicing, space exploration, and maintaining satellite constellations. While this is a very broad field, this initiative will concentrate on robotic servicing and associated challenges. In this context, robotic servicing is composed of proximity operations that are used for servicing and repairing satellites in space. In robotic servicing, robotic arms and tools perform maintenance tasks such as refueling, replacing components, or providing operation enhancements to extend a satellite's operational life or increase a satellite’s capabilities.
Our Approach: By forming an initiative in this important area, IRIM will open opportunities within the rapidly evolving space community. This will allow us to create proposals for organizations ranging from NASA and the Defense Advanced Research Projects Agency to the U.S. Air Force and U.S. Space Force. This will also position us to become national leaders in this area. While several universities have a robust robotics program and quite a few have a strong space engineering program, there are only a handful of academic units with the breadth of expertise to tackle this problem. Also, even fewer universities have the benefit of an experienced applied research partner, such as the Georgia Tech Research Institute (GTRI), to undertake large-scale demonstrations. Georgia Tech, having world-renowned programs in aerospace engineering and robotics, is uniquely positioned to be a leader in this field. In addition, creating a workshop in proximity operations for autonomous servicing will allow the GTRI and Georgia Tech space robotics communities to come together and better understand strengths and opportunities for improvement in our abilities.
Matthew Gombolay; Assistant Professor, Interactive Computing | Human-Robot Society in 2125: IRIM Leading the Way
Why It Matters: The coming robot “apocalypse” and foundation models captured the zeitgeist in 2023, with ChatGPT becoming a topic at the dinner table and the probability of various AI-driven technological doom scenarios hotly debated on social media. Futuristic visions of ubiquitous embodied artificial intelligence (AI) and robotics have become tangible. The proliferation and effectiveness of first-person-view drones in the Russo-Ukrainian War, autonomous taxi services along with their failures, and inexpensive robots (e.g., Tesla’s Optimus and Unitree’s G1) have made it seem that children alive today may have robots embedded in their everyday lives. Yet there is a lack of trust in the public leadership bringing us into this future to ensure that robots are developed and deployed with beneficence.
Our Approach: This proposal seeks to assemble a team of bright, savvy operators across academia, government, media, nonprofits, industry, and community stakeholders to make IRIM the most trusted voice guiding the public through the next 100 years of innovation in robotics. We propose specific activities to develop a roadmap, Robots in 2125: Altruistic and Integrated Human-Robot Society, and to build partnerships that promulgate these outcomes across Georgia Tech’s campus and internationally.
Gregory Sawicki; Joseph Anderer Faculty Fellow, School of Mechanical Engineering & Aaron Young; Associate Professor, Mechanical Engineering | Wearable Robotic Augmentation for Human Resilience
Why It Matters: The field of robotics continues to evolve beyond rigid, precision-controlled machines for amplifying production on manufacturing assembly lines toward soft, wearable systems that can mediate the interface between human users and their natural and built environments. Recent advances in materials science have made it possible to construct flexible garments with embedded sensors and actuators (e.g., exosuits). In parallel, computers continue to get smaller and more powerful, and state-of-the-art machine learning algorithms can extract useful information from more extensive volumes of input data in real time. Now is the time to embed lean, powerful, sensorimotor elements alongside high-speed and efficient data processing systems in a continuous wearable device.
Our Approach: The mission of the Wearable Robotic Augmentation for Human Resilience (WeRoAHR) initiative is to merge modern advances in sensing, actuation, and computing technology to imagine and create adaptive, wearable augmentation technology that can improve human resilience and longevity across the physiological spectrum — from behavioral to cellular scales. The near-term effort (~2-3 years) will draw on Georgia Tech’s existing ecosystem of basic scientists and engineers to develop WeRoAHR systems that will focus on key targets of opportunity to increase human resilience (e.g., improved balance, dexterity, and stamina). These initial efforts will establish seeds for growth intended to help launch larger-scale, center-level efforts (>5 years).
Panagiotis Tsiotras; David & Andrew Lewis Endowed Chair, Daniel Guggenheim School of Aerospace Engineering & Sam Coogan; Demetrius T. Paris Junior Professor, School of Electrical and Computer Engineering | Initiative on Reliable, Safe, and Secure Autonomous Robotics
Why It Matters: The design and operation of reliable systems is primarily an integration issue that involves not only each component (software, hardware) being safe and reliable but also the whole system being reliable (including the human operator). The necessity for reliable autonomous systems (including AI agents) is more pronounced for “safety-critical” applications, where the result of a wrong decision can be catastrophic. This is quite a different landscape from many other autonomous decision systems (e.g., recommender systems) where a wrong or imprecise decision is inconsequential.
Our Approach: This new initiative will develop protocols, techniques, methodologies, theories, and practices for designing, building, and operating safe and reliable AI and autonomous engineering systems. It will also promote a culture of safety and accountability among designers and operators of AI, autonomous, and intelligent machines, grounded in rigorous, objective metrics and methodologies, so that such systems can be adopted in safety-critical areas with confidence. The initiative aims to establish Tech as the leader in the design of autonomous, reliable engineering robotic systems and to investigate the opportunity for a federally funded or industry-funded research center (e.g., a National Science Foundation (NSF) Science and Technology Center or Engineering Research Center) in this area.
Colin Usher; Robotics Systems and Technology Branch Head, GTRI | Opportunities for Agricultural Robotics and New Collaborations
Why It Matters: The concepts for how robotics might be incorporated more broadly in agriculture vary widely, ranging from large-scale systems to teams of small systems operating in farms, enabling new possibilities. Application areas in agriculture are equally varied, ranging from planting, weeding, and crop scouting to general growing and harvesting. Georgia Tech is not a land-grant university, which makes capturing some of the opportunities in agricultural research more challenging. By partnering with a land-grant university such as the University of Georgia (UGA), we can pursue opportunities that historically were not available to us.
Our Approach: We plan to build collaborations by leveraging relationships we have already formed within GTRI, Georgia Tech, and UGA, supported by extensive networking, scheduled visits with interested faculty, attendance at relevant conferences, and ultimately a workshop to recruit faculty and formalize a research roadmap within the respective universities. Our goal is to identify and pursue multiple opportunities for robotics-related research in both row-crop and animal-based agriculture. We believe we have a strong opportunity, starting with formalizing a program with the partners we have worked with before, to grow the research area by incorporating new faculty and staff around a unified vision of ubiquitous robotics systems in agriculture.
Ye Zhao; Assistant Professor, School of Mechanical Engineering | Safe, Social, & Scalable Human-Robot Teaming: Interaction, Synergy, & Augmentation
Why It Matters: Collaborative robots in unstructured environments such as construction and warehouse sites show great promise in working with humans on repetitive and dangerous tasks to improve efficiency and productivity. However, the pre-programmed, inflexible interaction behaviors of existing robots limit the naturalness and flexibility of the collaboration process. It is therefore crucial to improve the physical interaction behaviors of collaborative human-robot teams.
Our Approach: This proposal will advance the understanding of the bi-directional influence and interaction of human-robot teaming for complex physical activities in dynamic environments by developing new methods to predict worker intention via multi-modal wearable sensing, reason about complex human-robot-workspace interaction, and adaptively plan the robot’s motion considering both human teaming dynamics and physiological and cognitive states. More importantly, our team plans to prioritize efforts to (i) broaden the scope of IRIM’s autonomy research by incorporating psychology, cognitive, and manufacturing research not typically considered technical robotics research areas; (ii) initiate new IRIM education, training, and outreach programs through collaboration with team members from various Georgia Tech educational and outreach programs (including Project ENGAGES, VIP, and CEISMC) as well as the AUCC (the world’s largest consortium of African American private institutions of higher education), which comprises Clark Atlanta University, Morehouse College, and Spelman College; and (iii) pursue large governmental grants such as DOD MURI, NSF NRT, and NSF Future of Work programs.
-Christa M. Ernst
Sep. 30, 2024
CREATE-X Capstone Design offers students a unique opportunity to blend their technical skills with entrepreneurial ambitions. In this interdisciplinary program, teams of students identify real-world problems and develop innovative solutions through customer discovery and hands-on experience. Below we spotlight Team Sustain, a group of students who participated in the Spring 2024 Capstone Expo. Their project focused on bringing convenience to home-cooked meals, showcasing the practical application of their engineering and entrepreneurial skills. Read on to learn about their journey, their challenges, and how you can get involved in CREATE-X Capstone Design.
Team Sustain
Sustain offers a way to crowdsource meals and provide home cooks with a cash incentive. The system includes software for ordering, reviewing, and collecting data and hardware for meal exchange.
Nirmal Karthik, electrical and computer engineering
Soughtout Olasupo-Ojo, computer science
Nathan Kashani, mechanical engineering
Meghan Janicki, electrical and computer engineering
Joseph Nehme-Haily, mechanical engineering
John Mark Page, electrical engineering
Why did you all choose this project?
“One of the main things CREATE-X Capstone encourages us to do is customer discovery. Through our discussions, we realized that many people enjoy home-cooked meals but find them inconvenient to prepare. While most things in life are just a click away, home-cooked meals still require a personal touch. CREATE-X challenged us to find a problem and create a solution, so we focused on making home-cooked meals more convenient,” Page said.
Why CREATE-X Capstone?
“After graduation, I wanted to try my hand at entrepreneurship later. I thought CREATE-X was a good way for me to try and learn entrepreneurship skills: how to run a business, what it looks like, the timeline, and so on. Either way, if it went well or badly, I could say with my heart that I have an idea of how to do entrepreneurship,” Olasupo-Ojo said.
“You can go into a big city like Atlanta and actually feel like you can do something to help people. It is a great benefit, as opposed to being in the technical weeds of an engineering project. Mixing them together has been a great experience,” Janicki said.
“CREATE-X empowers students to think independently and explore projects they’re passionate about. We get to drive our projects and businesses, learning skills firsthand rather than just in theory,” Kashani said.
What was your biggest struggle?
“As engineers, we’re classically, especially in school, already given the problem. So, the challenge was figuring out what the problem was, and if our solution really solves the root cause of the problem. We figured out how to find the problem,” Page said.
“Figuring out the idea was our biggest struggle. We delved into markets to find opportunities and ways to help people,” Kashani said.
What has been your favorite part of this experience?
“The team. Make sure you surround yourself with good people, and I think each of us has done that. That’s what I’m proudest about — our team,” Page said.
What advice would you give to someone considering entrepreneurship?
“Develop the skill sets to see problems and be able to think about them. At the beginning of the semester, we were thinking about solar design and building solar design for farms, and now we are in a completely different space. But we’re still applying the same skills and building something up from it that matters. The most important skill is adaptability,” Janicki said.
“Be ready to make mistakes. You won’t get it right the first, second, or even third time. Customer discovery is a continuous process — don’t let setbacks discourage you,” Olasupo-Ojo said.
“Don’t be afraid to get started. If you’re feeling nervous or unsure, there’s only one way to find out, so I’d say go full force into it,” Kashani said.
CREATE-X Capstone Design is open to senior undergraduate students in mechanical engineering, electrical and computer engineering, industrial and systems engineering, and computer science. Course registration is available for the fall and spring semesters, and the current sections are ME4723-X/X01, CS4723-X/X01, ECE4853 X/LX, BME4723-X/X01, and ISYE4106.
CREATE-X also offers other programs like Startup Lab and Idea to Prototype, providing students with a foundational entrepreneurial education. For those interested in launching their own ventures, CREATE-X’s 12-week summer accelerator, Startup Launch, offers mentorship, $5,000 in seed funding, and $150,000 of in-kind services. The priority deadline for the accelerator is Nov. 17. Apply for Startup Launch to maximize your chances of acceptance and receive early feedback.
News Contact
Breanna Durham
Marketing Strategist
Sep. 24, 2024
A year ago, Ray Hung, a master’s student in computer science, assisted Professor Thad Starner in constructing an artificial intelligence (AI)-powered anti-plagiarism tool for Starner’s 900-student Intro to Artificial Intelligence (CS3600) course.
While the tool proved effective, Hung began considering ways to deter plagiarism and improve the education system.
Plagiarism can be prevalent in online exams, so Hung looked at oral examinations commonly used in European education systems and rooted in the Socratic method.
One of the advantages of oral assessments is that they naturally hinder cheating. Consulting ChatGPT wouldn’t benefit a student unless the student memorized the entire answer. Even then, follow-up questions would reveal a lack of genuine understanding.
Hung drew inspiration from the 2009 reboot of Star Trek, particularly the opening scene in which a young Spock provides oral answers to questions prompted by AI.
“I think we can do something similar,” Hung said. “Research has shown that oral assessment improves people’s material understanding, critical thinking, and communication skills.
“The problem is that it’s not scalable with human teachers. A professor may have 600 students. Even with teaching assistants, it’s not practical to conduct oral assessments. But with AI, it’s now possible.”
Hung developed The Socratic Mind with Starner, Scheller College of Business Assistant Professor Eunhee Sohn, and researchers from the Georgia Tech Center for 21st Century Universities (C21U).
The Socratic Mind is a scalable, AI-powered oral assessment platform leveraging Socratic questioning to challenge students to explain, justify, and defend their answers to showcase their understanding.
“We believe that if you truly understand something, you should be able to explain it,” Hung said.
“There is a deeper need for fostering genuine understanding and cultivating high-order thinking skills. I wanted to promote an education paradigm in which critical thinking, material understanding, and communication skills play integral roles and are at the forefront of our education.”
Hung entered his project into the Learning Engineering Tools Competition, one of the largest education technology competitions in the world. Hung and his collaborators were among five teams that won a Catalyst Award and received a $50,000 prize.
Benefits for Students
The Socratic Mind will be piloted in several classes this semester with about 2,000 students participating. One of those classes is the Intro to Computing (CS1301) class taught by College of Computing Professor David Joyner.
Hung said The Socratic Mind will be a resource students can use to prepare to defend their dissertation or to teach a class if they choose to pursue a Ph.D. Anyone struggling with public speaking or preparing for job interviews will find the tool helpful.
“Many users are interested in AI roleplay to practice real-world conversations,” he said. “The AI can roleplay a manager if you want to discuss a promotion. It can roleplay as an interviewer if you have a job interview. There are a lot of uses for oral assessment platforms where you can practice talking with an AI.
“I hope this tool helps students find their education more valuable and help them become better citizens, workers, entrepreneurs, or whoever they want to be in the future.”
Hung said the chatbot is not only conversational but also resistant to human persuasion because it follows the Socratic method of asking follow-up questions.
“ChatGPT and most other large language models are trained as helpful, harmless assistants,” he said. “If you argue with it and hold your position strong enough, you can coerce it to agree. We don’t want that.
“The Socratic Mind AI will follow up with you in real-time about what you just said, so it’s not a one-way conversation. It’s interactive and engaging and mimics human communication well.”
Educational Overhaul
C21U Director of Research in Education Innovation Jonna Lee and C21U Research Scientist Meryem Soylu will measure The Socratic Mind’s effectiveness during the pilot and determine its scalability.
“I thought it would be interesting to develop this further from a learning engineering perspective because it’s about systematic problem solving, and we want to create scalable solutions with technologies,” Lee said.
“I hope we can find actionable insights about how this AI tool can help transform classroom learning and assessment practices compared to traditional methods. We see the potential for personalized learning for various student populations, including non-traditional lifetime learners."
Hung said The Socratic Mind has the potential to revolutionize the U.S. education system depending on how the system chooses to incorporate AI.
Recognizing that the advancement of AI is likely an unstoppable trend, Hung advocates leveraging AI to enhance learning and unlock human potential rather than focusing on restrictions.
“We are in an era in which information is abundant, but wisdom is scarce,” Hung said. “Shallow and rapid interactions drive social media, for example. We think it’s a golden time to elevate people’s critical thinking and communication skills.”
For more information about The Socratic Mind and to try a demo, visit the project's website.
News Contact
Nathan Deen
Communications Officer
School of Interactive Computing