Apr. 01, 2026
Manufacturing is undergoing a significant transformation as artificial intelligence reshapes how industrial systems operate, adapt, and scale. The H. Milton Stewart School of Industrial and Systems Engineering (ISyE) has launched its Manufacturing and AI Initiative, which brings together faculty expertise in statistics, optimization, data science, and systems engineering to address emerging challenges and opportunities in modern manufacturing.
ISyE researchers are applying AI to complex manufacturing environments, including multistage production systems, asset management, quality improvement, and human‑centered manufacturing. Faculty leaders emphasize the importance of contextualizing large volumes of manufacturing data so AI can support reliable decision‑making, efficient operations, and sustainable outcomes. At the same time, the initiative acknowledges challenges such as data integration, system complexity, and the need to balance automation with human involvement. Together, these efforts position ISyE at the forefront of shaping AI‑powered manufacturing systems that are innovative, resilient, and socially responsible.
Read the full article in ISyE Magazine
News Contact
Annette Filliat, ISyE Communications Writer
Mar. 25, 2026
Whether it’s a fire or a flood, a ship’s crew can only rely on itself and its training in emergencies at sea. The same is true for crews facing digital threats on oil tankers, cargo ships, and other commercial vessels.
New cybersecurity research from the Georgia Institute of Technology, however, revealed that crews aboard commercial vessels are often not adequately prepared to manage cyberattacks because of systemic training gaps.
The findings are based on researchers’ interviews with more than 20 officer-level mariners, conducted to assess the maritime industry’s readiness to handle cyberattacks at sea.
“Historically, cybersecurity research has focused heavily on cyber-physical systems like cars, factories, and industrial plants, but ships have largely been overlooked,” said Anna Raymaker, Ph.D. student and lead researcher.
“That gap is concerning when more than 90% of the world’s goods travel by sea. Recent incidents, from GPS spoofing to ships linked to subsea cable disruptions, show that maritime systems are increasingly part of the global cyber threat landscape.”
The researchers proposed four practical strategies to strengthen maritime cyber defenses and close the training gaps. Their findings were presented recently at the ACM SIGSAC Conference on Computer and Communications Security (CCS).
1. Make Cybersecurity Training Actually Maritime
Many of those interviewed for the study described current cybersecurity training as “boilerplate” — generic modules that don’t reflect real shipboard risks.
Researchers recommend:
- Role-specific instruction: Navigation officers should learn to detect and identify GPS spoofing. Engineers should focus on vulnerabilities in remotely monitored systems.
- Bridging IT and Operational Technology: Crews need to understand how attacks on IT systems can trigger physical consequences in operational technology — including collisions, groundings, or explosions.
- Hands-on delivery: Replace passive PowerPoints with drills and in-person exercises that build muscle memory.
- Accessible standards: Training must account for the wide range of educational backgrounds across crews and be standardized across ranks.
2. Move Beyond “Call IT”
At sea, crews can’t simply escalate a cyber incident to a shore-based IT department and wait. Operational resilience requires onboard readiness.
Researchers recommend:
- Vessel-specific response plans: Ships need clear, actionable protocols for threats such as AIS jamming or radar manipulation.
- Military-style drills: Adopting EMCON (emission control) exercises — used by the U.S. Military Sealift Command — can train crews to operate safely without electronic systems.
- Stronger connectivity controls: High-bandwidth satellite systems like Starlink introduce new risks. Clear policies and network segregation are essential to prevent new entry points for attackers.
Related Article: When GPS lies at sea: How electronic warfare is threatening ships and their crews by Anna Raymaker
3. Create Unified, Ship-Specific Regulations
Maritime cybersecurity regulations are often reactive and fragmented. Researchers argue the industry needs a cohesive, domain-specific framework.
Key recommendations include:
- A unified global model: Like the energy sector’s NERC CIP standards, a maritime framework could mandate baseline controls such as encryption, network segmentation, and anonymous incident reporting.
- Rules built for real crews: Regulations designed for large naval operations don’t translate well to smaller merchant or research vessels. Standards must reflect actual shipboard conditions.
- Future-proofing requirements: Autonomous ships and remotely operated vessels expand the cyber-physical attack surface. Regulations must proactively address these emerging technologies.
4. Invest in Maritime-Specific Cyber Research
Finally, the researchers stress that long-term resilience requires deeper technical research focused on maritime systems.
Priority areas include:
- Real-time intrusion detection systems tailored to shipboard protocols.
- Proactive security risk assessments of interconnected onboard systems.
- Cyber-physical modeling to better understand cascading failures in complex maritime environments.
The Bottom Line
Cyber threats at sea are no longer hypothetical. Mariners report real-world incidents ranging from GPS spoofing to ransomware that disrupts global trade.
“Through our interviews with mariners, I saw firsthand how much dedication and pride they take in their work,” said Raymaker. “Our goal is for this research to serve as a call to action for researchers, policymakers, and industry to invest more attention in maritime cybersecurity and support the people who risk their lives every day to keep global trade, food, and energy moving.”
“A Sea of Cyber Threats: Maritime Cybersecurity from the Perspective of Mariners” was presented at CCS 2025. It was written by Raymaker and her colleagues: Ph.D. students Akshaya Kumar, Miuyin Yong Wong, and Ryan Pickren; Research Scientist Animesh Chhotaray; Associate Professors Frank Li and Saman Zonouz; and Georgia Tech Provost and Executive Vice President for Academic Affairs Raheem Beyah.
News Contact
John Popham
Communications Officer II, School of Cybersecurity and Privacy
Mar. 25, 2026
Georgia Tech has announced the recipients of the 2026 Institute Research Awards, honoring faculty, staff, and research teams whose work has made significant scientific, technological, and societal impact. Presented by the Office of the Executive Vice President for Research, the awards recognize excellence across six categories spanning innovation, mentorship, collaboration, engagement, and research program development and impact. This year’s honorees reflect the breadth of Georgia Tech’s research enterprise — from foundational discovery to commercialization and community partnerships — and will be recognized at the Faculty and Staff Honors Luncheon on April 24.
Mar. 24, 2026
By Chris Gaffney, Managing Director of the Georgia Tech Supply Chain and Logistics Institute, Supply Chain Advisor, and former executive at Frito‑Lay, AJC International, and Coca‑Cola
We recently wrapped our semi‑annual industry advisory board meeting, where a core element of the agenda is a set of “hot topics” sourced in advance from our member companies, then curated and facilitated to reflect what is most top of mind in the field. This cycle, one of those topics focused on the impact of AI on supply chain technology investment.
What began as a discussion on technology quickly surfaced a broader issue:
AI is not just changing supply chains—it is raising the standard for execution, and in doing so, redefining what it takes to sustain a brand.
When Capability Becomes Cheap
Within that discussion, a simple example sparked debate. Most of us would trust a platform like DocuSign without hesitation. It has earned that trust through reliability, security, and consistent performance.
But what if a new entrant—call it “FredSign”—offered similar functionality, powered by AI, at lower cost and with comparable features? Would you use it?
The room split. Some argued that established brands are durable because of the trust they have built over time. Others pushed back, suggesting that AI‑enabled challengers could close that gap faster than expected, making brand less relevant.
The discussion quickly moved beyond software to a broader question:
In a world where AI lowers the cost of building capability, does trust shift from brand to performance—or does brand become even more important?
Brand as a Promise
From a supply chain perspective, this is no longer theoretical. It is already happening.
At its core, a brand is a promise. For product companies, that promise is built on quality, consistency, and the experience of using the product over time. For supply chain technology and service providers, it is grounded in reliability, security, and confidence in execution.
Historically, brand has been reinforced by performance—but also protected by time, scale, and familiarity.
AI is changing that balance.
Lower Barriers, Higher Expectations
On one hand, AI lowers barriers to entry. New entrants can replicate functionality faster, improve user experiences, and target specific gaps in incumbent offerings.
In supply chain technology, this is particularly relevant. Many organizations have made significant, long‑term investments in systems that have not always delivered as expected. That creates an opening for AI‑enabled providers to enter through narrow use cases, solve specific problems better, and establish a foothold. Over time, they build credibility.
But there is a second dimension that is more immediate—and more consequential.
AI Raises the Execution Standard
One way to frame this is simple: data is a terrible thing to waste.
For years, supply chains have generated vast amounts of data across planning systems, transportation networks, warehouses, and customer interactions. Much of that data has been underutilized—captured, stored, but not fully leveraged to anticipate risk or improve outcomes.
That is changing.
The capability now exists—and is rapidly maturing—to sense, interpret, and act on that data in ways that were not previously practical. Risks can be identified earlier. Disruptions can be predicted. Corrective actions can be taken before the customer ever feels the impact.
From Disruption to Preventability
Over the past week, in the span of just six days and four unrelated conversations with members of my network, I heard a series of examples that all pointed to this shift.
- A global food company managing risk tied to a critical supplier whose quality issues could impact multiple major brands—raising the question of whether AI could have surfaced a near sole‑source dependency earlier.
- An e‑commerce retailer using machine learning to reduce theft and damage in its fulfillment network, improving the customer experience.
- An organization proactively shifting its fulfillment partner mix based on AI‑driven insights into which nodes can and cannot handle surge capacity.
- A high‑end clothing shipment arriving wet due to a fulfillment breakdown—where the loss was not just the product, but a time‑sensitive moment that could not be recovered.
- A consumer receiving an empty box after successfully purchasing a limited‑release product that could not be replaced.
These are not isolated anecdotes. The common thread is not disruption—it is preventability.
As AI enables earlier detection of risk, better prediction of disruptions, and faster response to exceptions, the tolerance for failure is declining. Companies are no longer judged simply on whether something went wrong. They are judged on whether it should have been avoided.
Brand Is the Delivered Experience
From a brand perspective, that is a fundamental shift.
A product brand may invest heavily in innovation and customer engagement. But if the product arrives damaged, late, or not at all, the customer does not distinguish between the brand owner and the supply chain behind it.
There is only one experience—and therefore only one brand.
In an AI‑enabled supply chain, failure is no longer just a risk—it is increasingly a choice.
The Weakest Node Defines the Brand
A brand is now only as strong as its weakest node.
That node may be a supplier, a logistics provider, a fulfillment partner, or a technology platform. Many sit outside the direct control of the brand owner, yet their performance is inseparable from the customer’s perception of the brand.
AI makes it possible to identify and address these weak points—but it also makes it more apparent when companies fail to do so.
Implications for the Supply Chain Ecosystem
This dynamic extends directly to platform and software providers. In an AI‑enabled environment, it is no longer sufficient for supply chain technology to be stable or functionally adequate. It must evolve—continuously—to sense risk earlier, enable better decisions, and improve execution outcomes. If it does not, its limitations will be exposed quickly, and alternatives will emerge.
Technology providers are not insulated by their brand; they are judged by the outcomes they enable. Their brand will strengthen if their platforms improve execution—and erode if they do not.
Product companies must use AI to protect the customer experience end‑to‑end. Logistics providers must adopt AI to remain credible partners. Technology providers must evolve their platforms to meet a higher execution standard.
If one part of the system advances while another does not, the gap will be visible—and acted upon quickly.
Winners and losers are being judged daily.
What This Means for Leaders
None of this suggests that brand is no longer important. In high‑trust, high‑risk environments—contracts, financial transactions, healthcare, and other sensitive use cases—brand remains critical.
Even in this environment, trust must be continuously reinforced through performance. Leaders must clearly understand what underpins their brand. Brand is not an asset to be protected; it is the result of consistently delivering on a promise. Any performance gaps must be addressed before others move in. AI‑enabled challengers will not go after strengths—they will target weaknesses.
Finally, leaders must elevate their ecosystem. Brand performance is now inseparable from partner performance. That requires greater visibility, tighter integration, and higher expectations—not only internally, but across suppliers, logistics providers, and technology partners.
One Question to Answer Now
This execution dimension is only one part of how AI is reshaping brand—but it is already decisive.
A great product can still win. A strong brand can still endure. But in an AI‑driven world, where disruptions can be anticipated and failures mitigated, the margin for error is disappearing.
And in many cases—especially where the purchase is infrequent or the moment is critical—you only get one shot.

At the conclusion of our discussion, one participant framed it simply:
What is our secret sauce—and what are we doing to build on it?
That is the question every supply chain leader should be answering now.
Because in an AI‑enabled world, your brand will be defined by what your system consistently delivers.
Mar. 12, 2026
Since 2020, Georgia Tech has partnered with Sandia National Laboratories, a federally funded research and development center focused on national security. In February, the two institutions renewed their collaboration with a new Memorandum of Understanding (MOU), reaffirming a relationship that has already strengthened research capabilities on both sides.
The partnership has driven progress in areas ranging from hypersonics to bioscience, while also deepening institutional ties beyond research. Joint faculty appointments — such as Anirban Mazumdar, who holds roles at both Sandia and the George W. Woodruff School of Mechanical Engineering — demonstrate how closely the organizations work together. The collaboration has also expanded student talent pipelines, providing more avenues for Georgia Tech students to pursue careers at the national lab.
“At its core, this partnership is about people,” said Tim Lieuwen, executive vice president for Research at Georgia Tech. “Sandia and Georgia Tech share a commitment to discovery and developing the talent, creativity, and collaboration our nation needs.”
The renewed MOU, he said, “strengthens connections between our researchers, opens new doors for our students, and builds meaningful career pathways into national service. When our communities work together to address national priorities, we not only accelerate technological advances — we expand opportunities for the people who will shape the future of our nation’s security.”
Under the new MOU, Sandia and Georgia Tech will focus on integrated research across key national security‑aligned areas, including secure artificial intelligence and computing, quantum technologies, critical minerals, advanced manufacturing, energy and grid resilience, and hypersonics. The partnership emphasizes connecting manufacturing, computation, and systems approaches directly to national security applications.
“Together, we have been solving new and unprecedented challenges in science and engineering, and now we have a great opportunity to develop this partnership,” said Dan Sinars, Sandia’s deputy chief research officer. “Our research benefits both national security and national prosperity, and keeps the country at the forefront of the world.”
With this strengthened connection, the partners aim to grow their shared research footprint through increased funding, publications, and faculty-led startups. Over the long term, Georgia Tech intends to become one of Sandia’s top hiring pipelines, ensuring that talent developed through joint research continues into national security careers.
History of the Partnership
The Institute’s collaboration with Sandia began in the mid‑2010s, when Sandia selected Georgia Tech as one of its partner institutions. The first MOU, signed in 2015, formalized the relationship and outlined initial technical focus areas.
In 2018, George White, executive director of strategic partnerships, and Olof Westerstahl, senior director of strategic initiatives in the Office of Corporate Engagement, helped expand the partnership. They launched “Sandia Day,” an event designed to introduce Georgia Tech faculty to Sandia researchers and spark new collaborations. By 2020, the organizations signed a second MOU that expanded the partnership’s technical focus areas to include energy and grid security, materials and nanotechnology, advanced electronics, advanced manufacturing, advanced computing, cyber and information security, bioscience, hypersonics, quantum information science, and engineering sciences.
The results have been substantial. Since 2018, Sandia has sponsored $35 million in research collaborations with Georgia Tech. Researchers from both institutions have co-authored 450 publications since 2016. Research activity continues to accelerate, with $1.6 million in new contracts in the past year alone. As of August 2025, Sandia employs 325 Georgia Tech alumni — a testament to the impact of the growing talent pipeline.
“We view our work with Sandia as the model for engagement with other national labs,” said White. “With the new MOU, we will continue to grow the Sandia partnership. I would like to see our footprint double in scope in the next five years.”
News Contact
Tess Malone, Senior Research Writer/Editor
tess.malone@gatech.edu
Mar. 05, 2026
More than 6,200 high school students across Georgia tuned in for Engineers Week 2026. Through a series of online talks, Georgia Tech researchers shared a glimpse of the technologies shaping the future.
A national initiative held February 23–27, the event highlighted research spanning cybersecurity, aerospace engineering, robotics, infrastructure, and advanced manufacturing. The program brought engineers virtually into classrooms statewide, offering online learning experiences centered on inquiry, problem solving, and design.
“This is a great collaborative effort between the College of Engineering, the Georgia Tech Manufacturing Institute (GTMI), and the Georgia Tech Research Institute (GTRI),” said Sean Mulvanity, program lead at STEM@GTRI. “We provided students from across the state the opportunity to interact with leaders in a variety of engineering fields.”
Each day featured a different engineer discussing the real-world challenges driving their work. Cybersecurity professor Saman Zonouz began the week with a talk on protecting critical digital systems that power modern life. Aerospace engineering professor Adam Steinberg followed with insights into developing faster, cleaner engines for next-generation supersonic aircraft. Juergen Rauleder, also an aerospace engineering professor, then introduced students to aerodynamics research conducted in Georgia Tech's wind tunnel — one of the largest in the United States.
Later sessions expanded the conversation across disciplines. Civil and environmental engineering professor Lauren Stewart discussed designing buildings and infrastructure capable of withstanding extreme loads, while mechanical engineering professor Aaron Stebner closed the week with his talk, “3D Printing Titanium: Realizing the Superhero Powers of Ironman,” exploring advances in additive manufacturing.
“These talks show engineering isn’t just theory,” said Steven Ferguson, GTMI principal research scientist. “Students are hearing directly about the kinds of problems people are working on right now.”
One session featured Aparna Srinidhi Jagannathan, a third-year biomedical engineering student and undergraduate researcher at Georgia Tech, who spoke about her research in the Exoskeleton and Prosthetic Intelligent Controls (EPIC) Lab. Jagannathan is developing a wearable biofeedback system designed to help patients with gait disorders improve balance and coordination while walking.
“One of the things I value about being an engineer is the ability to turn abstract ideas and theories into tangible devices and technologies through research and design,” Jagannathan said. “Engineers Week empowers students with the knowledge that they, too, can meaningfully contribute to engineering. It reminds them that they can lead projects that benefit the communities around them.”
Engineers Week at Georgia Tech was presented by the College of Engineering, the Georgia Tech Manufacturing Institute, and the Georgia Tech Research Institute.
News Contact
Yanet Chernet
Communications Officer I
Georgia Tech
Feb. 18, 2026
While most people don’t think twice about a cut or scrape, for those with diabetes, every wound is a potential threat that requires vigilant care.
Diabetic foot ulcers, for example, are slow to heal and can increase the risk of infection, hospitalization, and even amputation.
To address this critical challenge, researchers at the Georgia Institute of Technology (Georgia Tech) and the Georgia Tech Research Institute (GTRI) have developed a sensor designed to monitor chronic wounds in real time. Embedded directly into a bandage, this flexible, low-cost device could transform wound management for diabetic patients and other critical applications — such as providing direct treatment to soldiers on the battlefield or managing chronic wounds in elderly populations and patients with limited healthcare access — by reducing invasive bandage changes and ensuring timely medical intervention.
“For diabetic patients with foot ulcers, long-term monitoring and care are essential,” said GTRI Principal Research Engineer and Project Lead Judy Song. “We were inspired by the success of wearable glucose monitors to develop a compact, affordable sensor tailored to wound care.”
This project was supported by GTRI’s Independent Research and Development (IRAD) program from 2022 to 2025 and reflects the strength of interdisciplinary collaboration across Georgia Tech. Researchers from three of GTRI’s eight laboratories developed the sensor with experts from the George W. Woodruff School of Mechanical Engineering, the H. Milton Stewart School of Industrial and Systems Engineering, and the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.
About one in four people with diabetes will develop a foot ulcer at some point in their lives, making it one of the leading causes of foot amputations. For these patients, nerve damage and poor blood flow hinder the body’s natural healing process and allow wounds to linger and worsen.
During the initial phases of their research, the team noted that nitric oxide (NO) had been previously identified as a key biomarker for wound health due to its central role in the healing process. Nitric oxide improves blood flow, reduces inflammation, promotes tissue growth and fights infection. By tracking nitric oxide levels in wounds, clinicians could determine whether a wound is improving or detect early signs of trouble.
"Nitric oxide plays a fascinating, almost paradoxical, role in wound healing,” said GTRI Senior Research Engineer Victoria Razin, who is co-leading the project. “It’s essential for processes like blood flow and tissue repair, but can also signal when something is going wrong.”
At the core of the smart bandage is a flexible sensor powered by a three-electrode system capable of detecting changes in nitric oxide. The team used advanced Aerosol Jet® printing techniques to fabricate the sensor, significantly reducing production costs from thousands of dollars to just a few dollars per unit and making the design more affordable and scalable.
“Typically, prototyping these sensors can cost thousands of dollars, but our approach brought costs down dramatically,” said Chuck Zhang, the Eugene C. Gwaltney, Jr. Chair and Professor in ISyE and a program director at the National Science Foundation (NSF), who oversaw sensor fabrication for this project. “Lower costs let us iterate quickly and deliver something that could have real healthcare impact.”
To test the sensor’s accuracy, the team conducted extensive laboratory studies in both biological and simulated wound conditions.
In one set of experiments, endothelial cell cultures were used to create “wounds” by scraping the cell layers. As the cells migrated to repair the gap, nitric oxide production increased, and the sensor successfully tracked these changes in real time. Additional fluid tests using blood plasma and red blood cells demonstrated that the sensor could reliably detect nitric oxide in a variety of conditions that closely mimic real-world wound environments.
These experiments confirmed that the sensor can identify the fluctuations in nitric oxide associated with different phases of wound healing.
Lab testing was led by Dr. Wilbur Lam, a professor in the Coulter Department of Biomedical Engineering and the Emory University School of Medicine, with support from Kirby Fibben, a biomedical engineering Ph.D. student at Georgia Tech.
"There’s a significant clinical need for real time, minimally invasive sensor technologies that detect nitric oxide,” said Dr. Lam. “While we’re starting with wound healing, there’s multiple other applications for vascular, hematologic, and pulmonary diseases as well.”
The next step in the project is integrating the sensor into a functional wearable device. The team is combining the sensor with a miniaturized potentiostat (MicroPS) – a small electronic device that measures chemical signals – along with flexible electronic components and a system to transmit data to a mobile app.
The MicroPS, designed by the GTRI research team led by Research Engineer Curtis Mulady, enables compact electrochemical measurements, and the wireless platform transmits nitric oxide readings from the bandage to a mobile app via Bluetooth. The app uploads the data to a cloud platform, giving clinicians the ability to remotely monitor wound progress in real time. This system could reduce the need for frequent in-person checkups, enabling earlier interventions and improving outcomes for patients.
Future iterations of the bandage aim to include “closed-loop” systems capable of both monitoring and treating wounds, said GTRI’s Song. For example, sensors could trigger a response, like releasing therapeutic agents or antimicrobials directly to the wound, when abnormalities are detected.
The researchers are also exploring commercialization pathways, including partnerships with medical device companies or the formation of a startup.
“This sensor meets a real need for early detection of infection and to evaluate wound healing, and I believe it could have significant commercial success,” said Peter Hesketh, a professor in the School of Mechanical Engineering who led sensor design and performance testing.
Other contributors to this project from GTRI include Mulady, Cora Weidner, Maxwell Blanchard, Rachel Erbrick, and Christopher Heist. Zhaonan “Zeke” Liu, a postdoctoral fellow in ISyE, assisted with sensor fabrication, while Rizky Ilhamsyah, a graduate research assistant in the School of Mechanical Engineering, contributed to sensor design and performance testing.
Writer: Anna Akins
Photos: Sean McNeil
GTRI Communications
Georgia Tech Research Institute
Atlanta, Georgia USA
News Contact
For more information, please contact gtri.media@gtri.gatech.edu
Writer: Anna Akins (anna.akins@gtri.gatech.edu).
Feb. 24, 2026
By Chris Gaffney, Managing Director of the Georgia Tech Supply Chain and Logistics Institute, Supply Chain Advisor, and former executive at Frito‑Lay, AJC International, and Coca‑Cola, and Michael Barnett, Founder and Principal of Synaptic SC, former global leader of Supply Chain AI at BCG, and former executive at Aera Technology and Koch Industries.
Entering 2026, one thing is clear: staying on the sidelines is no longer a viable option. We both agree that 2025 was the last year when being “behind” on AI adoption could be rationalized. In 2026, leaders cannot stay in the foxhole. They need to move forward, doing so in a way that reduces the risk of failure.
The past two years have been full of promise for AI in supply chain: we have seen impressive pilots, compelling research findings, and no shortage of claims about what agents and large language models can do. At the same time, many supply chain leaders are frustrated; there has been significant activity and investment in centralized capabilities without meaningful results in the supply chain. Too many efforts stall. Too many pilots never scale. Many organizations feel they have kissed a lot of frogs and are still waiting for something that works reliably.
The question for 2026 is no longer whether to engage with AI, but how to do so in a way that consistently delivers results. This is the year to put points on the board through disciplined, repeatable progress rather than moonshots.
Two Principles Separate Progress from Experimentation
Across our work and conversations with supply chain leaders, organizations that are driving tangible results tend to follow two principles, sometimes explicitly, sometimes intuitively:
1. Leverage GenAI Where It Adds Differential Value
Large language models are exceptionally strong at working with language. They summarize, explain, code, and translate intent into logic. This makes them powerful tools for accelerating development, analysis, and communication.
Much of supply chain execution, however, depends on precision. Planning rates, forecasts, production schedules, routing logic, and inventory policies rely on structured data, mathematical relationships, and deterministic logic. In these environments, hallucinations or probabilistic answers are not just inconvenient. They can be operationally disruptive.
Many early failures stem from applying LLMs where deterministic logic is required, rather than using them to support the creation, maintenance, and monitoring of that logic. In practice, GenAI is most effective upstream, helping teams build analytics faster, surface issues earlier, and lower the friction of development and maintenance.
2. Design with People in the Loop
This is not only a philosophical stance. It reflects technical reality. While recent research shows that collections of agents can outperform humans in controlled settings, production supply chains are not laboratories. They are complex, interconnected processes and organizations operating in a dynamic, ever-changing environment. In contrast to AI that augments workers, fully autonomous systems introduce technical, organizational, and reputational risks that can outweigh their incremental value once the cost to develop and maintain them is counted.
Human-in-the-loop is not a concession. It is a design principle.
From Ideation to Error-Proofed Execution
Most supply chain organizations are not short on AI use cases. What they lack are clear, high‑probability paths to value creation.
A familiar pattern plays out: organizations rush into pilots without a clear view of where AI adds value. Results are mixed and hard to interpret. When early efforts disappoint, leaders become more cautious, not because they doubt AI’s potential, but because they are wary of repeating visible failures.
One executive described this dynamic as being "tired of kissing frogs." After aggressively leaning into new technologies early, the organization became skeptical, insisting on external proof and peer validation before investing further.
The more productive question is no longer "What is the most advanced thing we can try?" but instead: "What can we do today that has a high probability of working, scaling, and building our capabilities?"
How to Put Points on the Board in 2026
Across our experimentation and advisory work, two areas consistently emerge where GenAI is already delivering value.
Enterprise Productivity: The Safest On-Ramp
The most reliable progress comes from improving everyday productivity.
Most organizations take a restrictive approach, limiting AI access to a small group or tightly controlled pilots led by centralized technical teams, only to find that this slows learning and adoption across the enterprise. In one large retailer, leadership initially centralized AI use due to security and governance concerns. Over time, they shifted to enterprise licensing that centralized risk management while allowing broader employee access within guardrails.
The result was not chaos or "shadow IT." It was productivity: meeting summaries, analysis support, presentation development, and faster access to internal knowledge.
These gains may sound modest, but they matter. Giving employees five to ten hours back each week changes how they experience AI. It becomes a tool that helps them do their jobs better, not a signal that their jobs are being automated away.
For leaders, this means actively enabling access to approved tools, supporting skill development, and encouraging experimentation within clear boundaries. This is one of the most straightforward ways to quickly and visibly put points on the board.
Decision Intelligence: Rewiring the Operating Model
Advanced analytics, optimization, and planning systems predate GenAI. What is new is not the math, but rather the speed, accessibility, and maintainability of building and sustaining advanced analytics solutions.
GenAI acts as an accelerator. It reduces the friction of writing code, standing up monitoring logic, and explaining results. It brings advanced capabilities closer to the business, rather than confining them to a small central team.
A concrete example comes from production planning. Planned production rates are often set during commissioning or early ramp up and then reused for long periods. Over time, changes in labor mix, maintenance practices, or product complexity cause actual throughput to drift. Plans continue to run, but they quietly degrade.
In effective implementations, GenAI does not update the planning system autonomously. Instead, it operates adjacent to it. It helps teams build monitoring logic that compares planned versus actual performance, surfaces statistically meaningful drift, and generates candidate adjustments with supporting context. Planners review and approve changes before they are re-ingested into the advanced planning system (APS).
The system of record remains intact. Human accountability is preserved. What improves is the speed, frequency, and quality of assumption hygiene, enabling earlier detection of problems before they cascade into service, cost, or inventory issues.
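The planned-versus-actual monitoring pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes a hypothetical single planned rate and a recent sample of observed throughputs, flags drift with a simple one-sample t-statistic, and emits a candidate adjustment for a planner to approve rather than writing anything back to the APS.

```python
import statistics

def detect_rate_drift(planned_rate, actual_rates, t_threshold=2.0):
    """Flag statistically meaningful drift between a planned production
    rate and observed throughput, and propose a candidate adjustment.

    Returns a dict a planner can review; nothing is written back to the
    planning system automatically.
    """
    n = len(actual_rates)
    if n < 2:
        raise ValueError("need at least two observations")
    mean = statistics.mean(actual_rates)
    stdev = statistics.stdev(actual_rates)
    # One-sample t-statistic: how far the observed mean sits from plan,
    # measured in standard-error units.
    se = stdev / n ** 0.5
    t_stat = (mean - planned_rate) / se if se > 0 else float("inf")
    drifted = abs(t_stat) > t_threshold
    return {
        "planned_rate": planned_rate,
        "observed_mean": round(mean, 2),
        "t_statistic": round(t_stat, 2),
        "drift_detected": drifted,
        # Candidate only; a human approves before re-ingestion.
        "candidate_rate": round(mean, 2) if drifted else planned_rate,
    }

# Planned 120 units/hr, but recent shifts consistently run slower.
review = detect_rate_drift(120.0, [112, 108, 111, 109, 113, 110, 107, 112])
```

In production this logic would read from historian or MES data and route flagged items into a planner's review queue; the key design choice is that the function only proposes, never commits.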
Avoid Kissing Frogs: Technology and Organizational Choices
Many organizations “kiss frogs” not because the new technology is flawed, but because they are not ready to adopt it.
To avoid this fate, successful efforts often include the following elements:
- Leverage existing, approved AI platforms rather than onboarding new technologies
  - Accelerates time to value
  - Helps define the true limitations of your current technology stack to guide future platform selection
- Maximize the value of current systems (e.g., APS, production scheduling software) instead of chasing new applications
  - Existing, complex supply chain software often under-delivers on its promised value
  - AI agents and workflows are highly effective at improving master data quality and ensuring planning parameters are accurate
- Foster ideation and solution development with internal teams, while using third parties to accelerate capability building, not to replace it
- Make progress visible by sharing early wins, curating employee-driven experiments, and scaling what works
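The master-data point above lends itself to a concrete sketch. The snippet below is a hypothetical audit, not a real product's workflow: it assumes illustrative record fields (`sku`, `lead_time_days`, `lot_size`, `safety_stock`) and flags suspect planning parameters for a data steward to review instead of correcting them silently.

```python
def audit_planning_parameters(records, max_lead_time_days=90):
    """Flag planning-parameter records that look stale or invalid.

    Returns (sku, issue) pairs for human review; the audit flags
    problems but never rewrites master data itself.
    """
    issues = []
    for rec in records:
        sku = rec.get("sku", "<unknown>")
        lead_time = rec.get("lead_time_days")
        if lead_time is None or lead_time <= 0:
            issues.append((sku, "missing or non-positive lead time"))
        elif lead_time > max_lead_time_days:
            issues.append((sku, f"lead time {lead_time}d exceeds "
                                f"{max_lead_time_days}d review threshold"))
        if rec.get("lot_size", 0) <= 0:
            issues.append((sku, "lot size not set"))
        if rec.get("safety_stock", -1) < 0:
            issues.append((sku, "negative or missing safety stock"))
    return issues

flags = audit_planning_parameters([
    {"sku": "A100", "lead_time_days": 14, "lot_size": 500, "safety_stock": 50},
    {"sku": "B200", "lead_time_days": 120, "lot_size": 0, "safety_stock": 10},
])
```

Where GenAI helps is in generating and maintaining rule sets like this from plain-language policy descriptions, so checks keep pace as planning assumptions change.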
Change management is not an option; it must be designed into every aspect of an AI program from the start. When organizations invest heavily in advanced capabilities at the top while doing little to equip everyday employees, the message received is often, "This is happening to you, not for you." That perception creates resistance, fear, and organizational drag.
Effective leaders communicate a clear vision for how new capabilities will augment, not replace, their teams, so that scarce human intellect is applied where it adds the most value.
Key Actions to Win in 2026
The principles are clear. The opportunity is real. The question now is execution.
If 2026 is the year to put points on the board, supply chain leaders must move from experimentation to engineered progress. That begins with clarity.
1. Define a Multi-Year AI Value Vision
Develop a concrete view of how AI will create value in your organization over the next several years. Not a collection of pilots. Not a list of tools. A clear articulation of where and how AI will improve productivity, strengthen decision quality, and increase operational reliability.
That vision should:
- Clarify where AI will augment human decision-making versus automate tasks
- Identify the business outcomes you expect to improve (service, cost, inventory, resilience, productivity)
- Guide decisions on organizational design, platform selection, governance, and partnerships
- Establish sequencing: what you will enable now versus what must wait
Without a defined direction, AI efforts default to software deployment. With it, technology becomes a lever for measurable operational improvement.
2. Enable Broad, Responsible Access
Capability development accelerates when access is not unnecessarily constrained. Ensure that team members at every level, from executives to frontline planners, have access to approved enterprise AI tools and agent-building capabilities, along with practical training tied to real workflows.
Effective enablement includes:
- Enterprise licensing and governance that remove friction while protecting data
- Hands-on guidance tied directly to day-to-day supply chain work: reporting, master data cleanup, production monitoring, inventory analysis, schedule validation
- Clear operating guardrails that define appropriate data use and boundaries
- Leadership support for responsible experimentation
Restricting access may feel prudent. In practice, it slows learning and reinforces dependency on centralized teams. Broad enablement builds capability across the organization.
3. Create Local Ideation and Scaling Mechanisms
Durable progress does not originate only from centralized programs. It often begins at the front line.
Leaders should create simple, visible mechanisms for individuals and teams to experiment within defined guardrails and to share what they are building.
This includes:
- Recurring forums or showcases where teams present working solutions
- Curated libraries of effective prompts, workflows, and agents
- Clear channels for submitting ideas and documenting results
Most importantly, organizations must be able to move from local experimentation to scaled adoption. That requires:
- Identifying the strongest minimum viable solutions emerging from the field
- Refining and hardening them into repeatable workflows
- Productizing and scaling what demonstrably improves performance
The objective is not activity. It is building capability that compounds over time.
These steps are straightforward. They require intention and follow-through. That is what separates durable capability from scattered experimentation.
It is not too late to lead. The last several years have provided lessons: technical, organizational, and cultural. Leaders who absorb those lessons and design deliberately for scale will build AI capabilities that strengthen over time.
That kind of progress is not flashy. It does not depend on moonshots or fully autonomous systems operating in isolation. It depends on clarity, access, discipline, and accountability.
In 2026, novelty will attract attention. Durability will create an advantage.
The organizations that win will not be the ones with the most pilots. They will be the ones who consistently translate AI into measurable operational improvement.
This is the year to move from experimentation to engineered results.
Put points on the board.
Feb. 05, 2026
Students from three Southwest Georgia high schools put their engineering skills to the test at the Advanced Manufacturing Program’s first tri‑district race, showcasing custom cars they designed and built. With strong support from educators, industry partners, and local leaders, the program is fostering homegrown technical talent. As AMP expands to six schools, communities are beginning to imagine new possibilities for their future workforce.
Jan. 23, 2026
By Chris Gaffney, Managing Director, Georgia Tech Supply Chain and Logistics Institute | Supply Chain Advisor | Former Executive at Frito-Lay, AJC International, and Coca-Cola
People often ask me a simple question: “You always recommend a good book to read; what have you read lately?”
I usually give them my version of a money-back guarantee. I haven’t had to pay up yet!
The Thinking Machine, Stephen Witt’s book on Jensen Huang and NVIDIA, is one of those recommendations.
It’s a fast, engaging read that packs a lot of insight into a book you can finish in just a couple of days. It’s also one of the most interesting books I’ve read this past year out of a stack of twenty or thirty. Most importantly for my world, it’s a book from which supply chain students, young professionals, and senior leaders can all take something different.
What many supply chain readers may not realize is that NVIDIA’s story is, at its core, a case study in supply chain design, constraint management, and long-horizon system building played out on a global stage.
This book matters to me because it pulls back the curtain on the largest technology shift impacting supply chains this century. It shows it not just as a technology story, but as a supply chain, leadership, and ethics story hiding in plain sight.
More Than a Tech Book
On the surface, this is a story about GPUs, artificial intelligence, and one of the most important technology companies in the world. But underneath, it’s really a story about context: how ideas evolve, how industries form, and how long-term decisions compound over decades.
You don’t need to be an engineer to enjoy it. By the time you’re done, you’ll have a much better grasp of:
- why chips matter,
- why AI depends on physical infrastructure,
- and why supply chains quietly shape what’s possible.
That combination makes the book especially relevant for anyone building a career in supply chain, operations, or industrial leadership.
The Immigrant Story — Still Worth Protecting
One of the most powerful threads running through the book is Jensen Huang’s immigrant story.
His family worked hard to come to the United States. He grew up in modest circumstances, and through persistence, opportunity, and relentless effort, he helped build a company with global impact.
For many of our ancestors, this story feels familiar. For many who come to the U.S. today, it still represents hope. The book serves as a quiet reminder that this pathway from modest beginnings to meaningful contribution is not accidental; it is something that needs to be protected.
The United States is far from perfect, but it remains a remarkable place to innovate and to start businesses. Supply chains are both a driver of that innovation and a beneficiary of the new ideas that emerge.
A Startup Story With Real Twists and Turns
The founding of NVIDIA is not a clean, linear success story.
The original big idea wasn’t necessarily the one that ultimately “won,” and the initial target market wasn’t always the right one. The company faced near-death moments, pivots, resets, and more than a few reasons to walk away.
For students and young professionals considering startups, whether founding one or joining one, this book offers a realistic picture of what that path looks like. It reinforces a few hard truths:
- the probability of failure is high,
- the work ethic required is enormous,
- and the rewards, if they come, often come much later.
I often describe this as a “one scoop now, two scoops later” dynamic. Early effort is rarely rewarded proportionally; patience matters more than hype.
Innovation Is a Team Sport
While Jensen Huang is clearly the centerpiece of the book, one of its strengths is that it avoids treating innovation as a solo act.
Many other players, sometimes knowingly and sometimes unwittingly, contributed research, ideas, and decisions that ultimately shaped where we sit today. The book does a good job showing how progress builds through layers of contribution, often across institutions and generations.
This matters, especially for students and early-career professionals. Breakthroughs rarely come from a single moment or a single person; they come from systems that allow ideas to accumulate and translate into real-world application.
From Basic Engineering to Neural Networks
Several chapters walk through the literal evolution of the technology, and this is where the book is both accessible and impressive.
Even if you can only “just barely hang on” technically, the narrative is clear: today’s AI capabilities are the result of layered progress. Hardware advances built on earlier hardware, software abstractions built on earlier software, and research findings translated into application over time.
Many of the contributors moved fluidly between academia and industry, reinforcing a core lesson: foundational science and engineering still matter. For those of us who remember an analog world, it’s fascinating to see how decades of incremental progress led to the current state and potential of AI.
A Supply Chain Story Hiding in Plain Sight
From a supply chain perspective, The Thinking Machine reads like a case study hiding in plain sight.
NVIDIA is an American innovation success story that is, at the same time, deeply dependent on global supply chains. Its relationship with TSMC in Taiwan, the scarcity of advanced manufacturing capacity, the national security implications of certain chips, and the need to serve global markets all create a complex and fragile operating reality.
One of the quieter but most powerful lessons in the book is how much supply chain design matters. Product success here isn’t just about better ideas; it’s about how effectively those ideas are translated into scalable, resilient, global systems.
AI may feel digital, but its limits are profoundly physical.
Leadership Results — and a Real Paradox
The book also forces an uncomfortable but important leadership conversation.
Jensen Huang is demanding, intense, and uncompromising. While the results are undeniable, I don’t advocate for many aspects of his leadership style. I believe similar outcomes could be achieved without subjecting employees to public humiliation.
Results matter, but how we get them matters too.
Reading this book reminded me that some of the most valuable leadership lessons I’ve learned came from watching both how to lead and how not to lead. I’ve had bosses who modeled the kind of leader I wanted to become, and a few who taught me just as much by showing me what I wanted to avoid. Both experiences have been valuable.
That tension is worth sitting with, especially for those mentoring the next generation of leaders.
Computer Vision, GPUs, and Adaptability
Computer vision plays a supporting role in the story: not the headline act, but an important early driver. Graphics and vision workloads helped shape GPU architectures long before today’s generative AI boom.
Over time, those architectures generalized to support a wide range of parallel computation, including neural networks. It’s a reminder that technologies often succeed not because of a single application, but because they are flexible enough to evolve.
Ethics, Uncertainty, and Responsibility
Finally, the book leaves us with unresolved questions, and that may be its most honest contribution.
AI is resource-intensive, it will reshape work and livelihoods, and it raises real ethical concerns. Opinions vary widely on whether this moment resembles past industrial revolutions or represents something fundamentally different.
I teach and advocate for the application of AI, but I personally struggle with these ethical dilemmas. Rather than avoid them, I try to address them head-on by highlighting the risks and encouraging students to stay informed so they can be voices for responsible, positive use.
In today’s global and regulatory environment, it’s unrealistic to expect a pause in research or application. Education, not avoidance, may be the most practical form of governance we have.
We can’t guarantee how this plays out over the next decade, but we can prepare.
Why I Keep Recommending This Book
If you’re a supply chain student looking for context, a young professional navigating career choices, or a senior leader trying to understand how AI, supply chains, leadership, and ethics intersect, this is a book worth your time.
It’s engaging, timely, and surprisingly human.
And when someone asks me, “What are you reading?”
This is the book I’ll keep recommending.
The Thinking Machine succeeds because it reminds us that behind AI are people, supply chains, and long-term decisions, all operating under real constraints. That’s a lesson worth revisiting as we set the pace for the months ahead.
A Closing Question
This book highlights traditional supply chain constraints that NVIDIA faced in its growth journey, such as single-source supply, perceived lead times, capacity at key suppliers, demand volatility, and talent gaps. Where have you seen or faced these, and how have you and your company navigated them?