May. 11, 2026
The energy shock is already widely understood. What is not yet widely understood is what comes after it — and why a diplomatic deal, when it comes, will not be the end of the story.
By Chris Gaffney, Managing Director of the Georgia Tech Supply Chain and Logistics Institute and a former Vice President of Global Strategic Supply Chain at The Coca-Cola Company.
Three weeks ago, I started hearing from contacts in my network. Senior supply chain executives, people who have managed through COVID and the Suez Canal blockage, were expressing concern. The kind of concern that doesn’t make it into earnings calls or press releases. The kind that shows up in private conversations between people who actually move goods around the world for a living.
Their worry wasn't about crude oil prices; those are now widely discussed. Their worry was about what happens after crude oil prices: the plastic in your water bottle, the fertilizer going into this year's corn crop, the engine oil in your car, the polyester in your running shoes.
Those conversations sent me back to the data. The geopolitical crisis and the energy shock are now well-documented in mainstream reporting. What is less discussed, and what my conversations with experienced practitioners suggested was being systematically underestimated, is the operational cascade downstream of that energy shock. I wanted to answer a specific question: given that the Strait has been effectively closed since February 28, what aspects of the downstream impact are already locked in regardless of a diplomatic solution, and what is still unfolding? And could I use publicly available data, straightforward analytical tools, and accessible modeling to produce a defensible, quantified view of that question?
The answer, after several weeks of work, is yes. And what the analysis shows is more operationally significant than most of the public commentary has yet captured.
Start with what is already true.
The International Energy Agency (IEA) has characterized this as one of the largest supply disruptions in the history of the global oil market. Flows through the Strait fell from roughly 20 million barrels per day before the conflict to low single-digit levels in March and early April. Asian crude stocks dropped 31 million barrels in March alone, with further declines expected through April. Refinery runs in Asia were cut by around 6 million barrels per day. Middle distillate prices in Singapore hit all-time highs.
But energy prices, as alarming as they are, are the visible part of this problem. The less visible part is what those commodities become.
Naphtha, a petroleum derivative most people have never heard of, is the feedstock for the polyester in your clothing, the polyethylene terephthalate (PET) in your water bottle, the polypropylene in your food packaging, the polyvinyl chloride (PVC) in your plumbing. Roughly 80 percent of the naphtha imported into Asia comes from the Middle East. South Korean petrochemical plants were running at 60 to 70 percent of capacity by late April. Japanese crackers at 65 to 75 percent. The IEA confirmed it in plain language: Asian petrochemical plants curtailed operating rates as feedstock supply dried up.
Liquefied petroleum gas (LPG), the cooking gas that 60 percent of Indian households depend on for daily meals, was the first fuel to be rationed. Queues formed as deliveries were delayed, reflecting physical supply constraints alongside severe price pressure.
Fertilizer prices hit 49 percent above last year's levels by April, according to DTN data. Corn planting intentions dropped 3.5 percent. The timing is what matters: the food prices that result from this spring's planting decisions will show up at the grocery store in 2027. The disruption has a long tail, and most of that tail is still ahead of us.
The question isn’t whether this will affect what you pay for everyday goods. It already is. The question is how far the cascade goes and how long it lasts.
Here is what the modeling shows.
Working from publicly available IEA, U.S. Energy Information Administration (EIA), and commodity price data, I built a scenario model that tracks 12 commodity-region pairs through a 300-day simulation horizon. I then ran that model over 1,500 times with slightly varying assumptions to produce a range of outcomes rather than a single point estimate. That range is more honest than a single number, because the genuine uncertainty in this situation deserves to be represented.
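A minimal sketch of that approach is below. The commodity pairs, inventory-cover figures, and perturbation ranges are placeholder assumptions for illustration, not the calibrated IEA/EIA-based inputs of the actual analysis; what it shows is the structure: vary assumptions on each run, then report percentiles rather than a point estimate.

```python
import random

# Placeholder commodity-region pairs; the real model tracks 12, calibrated
# to IEA/EIA data. Names and parameters here are illustrative assumptions.
PAIRS = {
    "naphtha-asia":    {"days_cover": 25, "import_dependency": 0.80},
    "jet_fuel-europe": {"days_cover": 42, "import_dependency": 0.35},
    "lpg-india":       {"days_cover": 14, "import_dependency": 0.60},
}

HORIZON_DAYS = 300
RUNS = 1500

def one_run(rng: random.Random) -> dict:
    """One trial: perturb the assumptions slightly, then compute how long
    inventory cover lasts once the imported share stops arriving."""
    out = {}
    for name, p in PAIRS.items():
        cover = p["days_cover"] * rng.uniform(0.8, 1.2)
        lost_share = p["import_dependency"] * rng.uniform(0.85, 1.0)
        out[name] = min(HORIZON_DAYS, cover / lost_share)
    return out

rng = random.Random(42)
trials = [one_run(rng) for _ in range(RUNS)]

# Report a range of outcomes, not a single number.
for name in PAIRS:
    vals = sorted(t[name] for t in trials)
    p10, p50, p90 = (vals[int(q * (RUNS - 1))] for q in (0.10, 0.50, 0.90))
    print(f"{name}: shortage onset day  p10={p10:.0f}  p50={p50:.0f}  p90={p90:.0f}")
```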
Three findings stand out.
First: a diplomatic deal today would be unlikely to quickly reverse what has already happened. This is the finding that surprised me most, and it held across almost every simulation. The high-import-dependency commodities have already depleted enough inventory that functional shortage is embedded in the near-term outlook regardless of when the Strait reopens. The diplomatic question determines how long the pain lasts and how hard the recovery will be. For consumers, this means the effects may show up long after the headlines fade: higher prices, product shortages, and delays in everything from clothing and packaging to fertilizer-dependent food production.
Second: Europe's most visible supply chain story, airlines canceling flights, is a price story, not a physical shortage story. The IEA documents approximately six weeks of European jet fuel supply. Airlines are grounding aircraft because fuel has doubled in price, not because airports are running dry. Meanwhile, Asian petrochemical plants are curtailing because feedstock physically stopped arriving. These two situations look similar in the headlines. They require completely different responses. For consumers, the difference matters because one problem mainly makes travel and goods more expensive, while the other can interrupt the actual production of the products modern life depends on.
Third: the recovery will be harder and longer than most public commentary assumes. S&P Global estimates five weeks to seven months for full supply normalization after a reopening, depending on infrastructure damage. Mine clearance alone requires 60 to 90 days of sustained operations before commercial vessels can transit safely. Insurance premiums will not normalize until underwriters see months of safe transit. And when supply does restart, suppressed demand returns simultaneously with a supply base that is still rebuilding. The EIA's 2027 demand forecast of 1.6 million barrels per day growth (nearly three times the depressed 2026 rate) makes this concrete. We have seen this pattern before. COVID demonstrated it at scale. The bullwhip effect, applied to a supply-side energy shock, produces a second dislocation on the back side of the crisis.
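The bullwhip dynamic is easy to reproduce in a toy model. The sketch below uses made-up numbers and a single retailer-to-factory tier, far simpler than the real networks involved and not drawn from the paper's model, but it shows how a demand rebound gets amplified upstream when buyers rebuild inventory cover at the same time they resume ordering.

```python
# Toy bullwhip model: one retailer ordering from one factory. The retailer
# replaces what it sells and also closes the gap to a demand-scaled
# inventory target, which amplifies swings upstream.
demand = [100] * 10 + [60] * 10 + [120] * 10   # units/week: baseline, shock, rebound

inventory = 200          # starting stock
target_cover = 2         # weeks of forward cover the retailer wants
orders = []

for d in demand:
    target = target_cover * d                   # target stock tracks demand
    order = max(0, d + (target - inventory))    # replace sales + close the gap
    orders.append(order)
    inventory += order - d                      # assume the factory ships in full

print("peak consumer demand:", max(demand))     # 120
print("peak factory order:  ", max(orders))     # 240: twice the demand peak
```

Even with the factory shipping in full, the order peak is double the demand peak; add a supply base that is still rebuilding, and the second dislocation follows.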
What this means for your grocery bill, your gas tank, and your business.
The analysis maps 36 supply chain pathways from raw commodity to consumer shelf across 15 product categories. Here are three examples that are or will be visible to you.
Take construction materials. PVC pipe, insulation, and window profiles all begin with petrochemical feedstocks moving through the Gulf region. PVC resin prices in India rose nearly 80 percent in March. Since PVC pipe is largely PVC resin, the pass-through to construction costs is immediate and difficult to absorb. The result is likely to show up in higher prices for building materials, repairs, and infrastructure projects long before most consumers connect the cause.
The same pattern is unfolding in synthetic motor oil. Shell's Pearl gas-to-liquids (GTL) facility in Qatar — one of the world's most important sources of premium Group III base oil — was taken offline by missile strikes. Producers in Bahrain and the UAE have declared force majeure. Roughly 40 percent of global Group III supply is now offline or unable to ship. For consumers, that eventually means higher oil-change costs, more expensive industrial lubricants, and added operating costs moving quietly through trucking, aviation, manufacturing, and delivery networks.
Food arrives later, but it arrives. Fertilizer prices are already sharply elevated, and planting decisions are being made right now under those conditions. The agricultural calendar creates a lag most consumers do not see. Disruptions this spring can become higher grocery prices many months from now. That is not speculation. It is simply how agricultural supply chains work.
We tend to underestimate the breadth and duration of these events while they are happening, and overestimate how quickly things return to normal after they appear to resolve.
What we did, and why it matters how we did it.
Every number in this analysis traces to a cited source. Where data was insufficient and judgment was required, those judgment calls are labeled as such. The model is not a black box. It is a documented, reproducible simulation that any researcher can run independently.
I also used AI — specifically Claude by Anthropic — as a partner to help analyze and build this work. While I provided the analytical framework, the practitioner judgments, and the validation of assumptions, the AI assisted with drafting, building models, computation, and data synthesis. This collaboration is fully detailed in the paper.
This represents a new way of performing analytical work. The results are significant: a quantified, sourced, and reproducible analysis of a complex real-world disruption. What usually takes a traditional research team months was completed in weeks. That speed is vital when a situation is still unfolding.
The larger point.
Seventy-two days in, the global supply chain community is navigating a disruption that has no precise historical parallel. The 1973 OAPEC embargo lasted months and produced lasting structural change in how the world consumes energy. The 1990 Gulf War shock was brief enough that it produced relatively mild downstream consequences. The 2022 European energy crisis showed us what happens when industrial feedstock costs become uneconomic for months at a time: capacity comes offline, and some of it does not come back for a long time.
The Hormuz closure has already outlasted the 1990 Gulf War shock and is approaching the territory where the worse historical outcomes become the more relevant comparators. Every additional week of closure moves the probability distribution toward the scenarios that produced lasting structural damage.
Both public and private entities may be underestimating the magnitude of what recovery will require. Restoring normal supply chain function after an event of this scale and duration is not a matter of reopening a waterway. It is a matter of rebuilding inventory buffers, restarting industrial capacity, normalizing insurance markets, reestablishing commercial relationships, and managing the demand surge that hits simultaneously with the supply restart. The organizations that are planning for that recovery now will be materially better positioned than those that wait.
The people I talked to three weeks ago were right to be concerned. Their concern was based on experience and instinct and what they were seeing in their own business. Our work over the past weeks validates their perspective.
An enduring diplomatic solution is the essential precondition for any of this to improve. Without it, the cascade continues. With it, the hard work of recovery begins. Either way, the time to understand the full scope of what is in motion is now, not after the headlines move on.
Editor’s note:
View the related report: technical analysis, scenario modeling, Monte Carlo simulation methodology, consumer impact assessment.
May. 06, 2026
Investment is the best word that summarizes Agam Shah’s journey as a graduate student at Georgia Tech.
That is clearest on the surface, where Shah studied how public statements by businesses and financial institutions shape market behavior. At a deeper level, though, his success was buoyed by support from professors and his mentorship of younger students.
Shah’s ability to connect and invest in others led him to partner with Georgia Tech colleagues and start a financial technology business. He returns to campus this week to officially graduate from Tech, giving us a chance to catch up about his grad school experience and life as an entrepreneur.
Graduate: Agam Shah
Research Interests: Quantitative and computational finance, artificial intelligence, natural language processing, large language models (LLMs)
Education: Ph.D. in Machine Learning, home unit in the School of Computational Science and Engineering (CSE)
Faculty Advisors: Scheller College of Business Professor Sudheer Chava and School of CSE Associate Professor Chao Zhang
What persuaded you to attend graduate school at Georgia Tech?
Georgia Tech’s dedicated College of Computing strongly appealed to me. I was particularly drawn to the interdisciplinary nature of its machine learning Ph.D. program and the School of Computational Science and Engineering, both of which align well with my research interests.
What research project(s) from Georgia Tech are you most proud of and why?
I am proud of all 20-plus research papers I have had the opportunity to contribute to at Georgia Tech. However, if I had to choose one, it would be my work on Federal Open Market Committee (FOMC) text analysis, which was also highlighted in the news.
This work is not only well-cited in academic literature, but the language model developed in the paper is also actively used by economists at many of the world’s top central banks, including researchers at the FOMC and the Bank of England. It is also used by leading financial institutions such as BlackRock and Daiwa Securities. Since its release, the model has achieved over 100,000 downloads on Hugging Face.
What can you tell us more about your startup, ZettaQuant?
ZettaQuant aims to solve one of the biggest challenges in using LLMs and agents: working effectively with massive underlying datasets. We serve as a layer between raw data and LLMs, helping distill billions of tokens into the relevant context that models can use.
As a deep-tech startup, we are actively engaging with industry practitioners to better understand how to design and engineer our system to integrate seamlessly with their evolving AI workflows. Given the complexity of the problem we are tackling, particularly in advancing document intelligence systems, we are currently very focused on research and foundational development.
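Shah describes ZettaQuant only at a high level, so as a generic illustration of that kind of layer, here is a minimal retrieval-style sketch: chunk a corpus, score chunks against a query, and pass only the best few to the model. The function name and design are assumptions for illustration, not ZettaQuant's actual system.

```python
from typing import Callable

def distill_context(documents: list[str], query: str,
                    embed: Callable[[str], list[float]], top_k: int = 5) -> str:
    """A generic 'layer between raw data and an LLM': split the corpus into
    chunks, score each against the query, keep only the most relevant few."""
    chunks = [c.strip() for doc in documents for c in doc.split("\n\n") if c.strip()]
    qv = embed(query)

    def cosine(v: list[float], w: list[float]) -> float:
        dot = sum(a * b for a, b in zip(v, w))
        nv = sum(a * a for a in v) ** 0.5
        nw = sum(b * b for b in w) ** 0.5
        return dot / (nv * nw) if nv and nw else 0.0

    ranked = sorted(chunks, key=lambda c: cosine(qv, embed(c)), reverse=True)
    return "\n---\n".join(ranked[:top_k])   # the compact context the LLM sees
```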
How did your Georgia Tech education prepare you for starting ZettaQuant?
Not just my education, but my entire experience at Georgia Tech, extending beyond the classroom, prepared me for this journey. I met my co-founders at Georgia Tech, and many of the initial use cases we are exploring at ZettaQuant are built on open-source research I conducted there.
In addition to research, I mentored more than 300 students through the Vertically Integrated Project “NLP for Financial Markets.” This experience taught me how to manage teams and think about building systems with a long-term vision.
What advice would you give someone interested in graduate school?
Most people pursue graduate school after already completing more than 15 years of education. Also, people who are admitted to a top school like Georgia Tech are often already well-positioned to secure strong job opportunities. So, graduate school should provide value beyond what you could learn outside the classroom.
Before deciding, think carefully about what you hope to gain from graduate school that you cannot otherwise. Once you enroll, take full advantage of the faculty, research labs, networks, and seminars. Many students underutilize these opportunities during their undergraduate and graduate years.
I would also like to quote the epilogue of my Ph.D. thesis: ‘Advice is abundant; conviction must be your own.’ Build a strong conviction about what you want to achieve from graduate school before committing to it.
What did you do for fun and relaxation while attending Georgia Tech? Do you still keep up with these now?
This may sound unconventional, but I spent a significant amount of time mentoring and teaching throughout my Ph.D. Many of my mentees went on to gain admission to top graduate programs. This included two students I mentored for all four years of their undergraduate studies who later joined the ML Ph.D. program at Georgia Tech. They are now teaching and mentoring students, completing a full-circle journey.
Working with mentees and supporting their growth gives me a strong sense of fulfillment and serves as a form of relaxation. In addition, I enjoy listening to music, especially while coding, and I continue to do that today.
What is your favorite Georgia Tech memory?
If I had to choose one favorite memory, beyond the many exciting late nights in the lab, it would be proposing to my wife on Tech Green at Georgia Tech. She is also a Yellow Jacket, having completed her undergraduate degree here and currently pursuing her Ph.D. Our home truly is a hive of Yellow Jackets.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
May. 06, 2026
When Chengrui Li walks across the stage this Thursday at Commencement, it will be his final, and perhaps easiest, performance at Georgia Tech.
Between orchestra concerts, magic shows, and yo-yo exhibitions, Li thrives in the limelight. In fact, not much rattles his nerves considering the five years of pressure he endured studying computational neuroscience at Tech.
Before he returns to New York City to continue building brain-interface technologies at Meta, we caught up with Li to learn how he keeps such a cool head at Georgia Tech and beyond.
Graduate: Chengrui Li
Research Interests: Computational neuroscience, eye-tracking experiments and data analysis, statistical machine learning
Education: Ph.D. in Computational Science and Engineering (CSE)
Faculty Advisor: School of CSE Assistant Professor Anqi Wu
What persuaded you to attend graduate school at Georgia Tech?
I did my undergraduate degree at Sichuan University in China. We knew that the most cutting-edge technology and research were in the United States, so I participated in an undergraduate exchange program at the University of Tennessee, Knoxville, during my third year.
I wanted to pursue a Ph.D. in neuroscience while also becoming very proficient in math and computer science (CS). This led me to apply to the CSE Ph.D. program over others. Georgia Tech’s CS ranking is very high, and the CSE program is very interdisciplinary, which matched my expectations super well. I did attain a solid education in math and CS at Georgia Tech. I also advanced my interest in neuroscience and its application by studying mathematical models and algorithms.
What research project from Georgia Tech are you most proud of?
My variational importance sampling paper is a favorite. That one was based heavily on statistical inference. I spent many hours working through complicated derivation calculations, often half-awake and half-asleep after several late nights.
This paper confirmed to me, though, that innovative research requires both hard work and inspiration, and that this endeavor can be rewarding. The paper was selected as a top 5% spotlight paper at ICLR 2024, a world-leading conference on artificial intelligence research.
Could you share more about your role as a research scientist at Meta?
I have been working on Meta’s electromyography (EMG) neural band. This next-generation human-computer interaction device connects with and navigates Meta’s AI glasses.
With the neural band, you can use finger gestures to control the display content you see through the glasses, like swiping your thumb to scroll the screen, or writing on your lap as if you had a pen in your hand to send WhatsApp messages.
How did your Georgia Tech education prepare you for this role?
By pursuing my Ph.D., I am more proficient in critical thinking, math, coding, and presentation. During my interview, I demonstrated these skills and provided my publication records. This helped me land an internship, enabled my success in that role, and led to a full-time position. Additionally, my background in computational neuroscience best matched the work on the EMG neural band team at a big tech company.
What advice would you give someone interested in graduate school?
First, be clear whether a bachelor’s or master’s degree meets your work needs, or if you are truly interested in a scientific research topic. This interest should be based on your own passion, not the current trends. Interest is an important factor in deciding to pursue a Ph.D. because you have to like the topic and like it for a long time. A Ph.D. will require you to dive deep into a subject you must be genuinely curious about.
Second, we are in a new era with rapid advances in information technology. Time is an invaluable resource and is shaped by technology. You have to think more about your time, consider where and how you spend it, and embrace ways to use it more efficiently.
Can you tell us more about your hobbies and how you keep up with them?
I started learning violin when I was five years old, and magic tricks when I was 11. The brain is a supercomputer suitable for functional computation. Our brain is an interface between the objective and subjective, where computation plays a core role in integrating these exact mechanics into interpretations of the world. This realization was one of the important factors that inspired me to pursue my Ph.D. research in computational neuroscience.
Another comparison I’ve learned after playing violin for 23 years is that the cochlea in our inner ear is a fast Fourier Transformer that simultaneously computes the aesthetic of music for us. Performing magic tricks for 17 years taught me that all the occurrences of seemingly low-probability magic phenomena are achieved by either letting it be a certain event or exhausting all possibilities.
I also have other hobbies, like yo-yo balls. I enjoy performing all these skills in front of audiences. Performing brings me satisfaction when I see excitement and happiness from the people I entertain. I am very grateful to my parents for their cultivation and encouragement in doing things that bring me fulfillment. They taught me to be curious and explore my interests, to enjoy pastimes, and instilled the habit to not give up my passions. These were not secondary things that distracted me from coursework or Ph.D. research, but rather complementary parts of my life that bring out the best in me.
What is your favorite Georgia Tech memory?
I have a lot. For my research, I debated frequently with my advisor, Anqi Wu, often late into the night, defending my stances. Those debates challenged my beliefs and made me a stronger scholar, and I am grateful to Anqi for her time and patience.
I also enjoyed performing in the Georgia Tech symphony orchestra with our great conductor, Chaowen Ting. I was involved with the Georgia Tech Chinese Students and Scholars Association, where I showcased magic and yo-yo performances at organization events.
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Apr. 30, 2026
The Bill Kent Family Foundation AI in Higher Education Fellowship invites Georgia Tech educators to explore how emerging tools can elevate teaching, learning, and student development through hands-on experimentation grounded in real classroom practice.
Offered through the College of Lifetime Learning, the fellowship provides dedicated time and space to test ideas, evaluate outcomes, and help shape how the Institute advances thoughtful, responsible innovation in teaching and learning.
“The Bill Kent Family Foundation’s philanthropic support enables us to turn new ideas about teaching and learning into action. Their investment empowers our fellows to explore—both in theory and in practice—how AI is changing long-standing academic practices and equipping educators with new tools,” Dean Bill Gaudelli said.
During the 2025–26 academic year, fellows Professor Ying Zhang, Patrick Danahy, Joy Arulraj, and Professor Flavio Fenton moved from concept to classroom — piloting AI‑supported tutoring in large engineering courses, integrating AI and robotics into design studios, developing privacy-conscious AI tools for computing courses, and reimagining scientific writing and physics instruction. Their work is sparking interdisciplinary collaboration and informing broader conversations about the future of teaching at Tech.
We encourage Georgia Tech academic professionals, lecturers, professors of practice, and tenure-track faculty who are excited to advance their understanding and application of AI in higher education to apply for the BKFF AI Fellowship.
Please visit the call-for-proposals webpage for more information. The submission period runs from May 1 to June 1, 2026.
Pictured from left to right: Patrick Danahy, Lesley Baradel (BKFF), Eric Sembrat (College of Lifetime Learning), Ying Zhang, Flavio Fenton, and Joy Arulraj.
News Contact
Communications Program Manager
C21U, College of Lifetime Learning
Apr. 30, 2026
A Georgia Tech School of Interactive Computing professor and his Ph.D. student have been named to the 2026 list of Microsoft Research Fellows and Fellowship Advisors.
Associate Professor Alan Ritter and Ph.D. student Ethan Mendes were awarded fellowships for their work on creating artificial intelligence (AI) agents that function as teammates.
Mendes was named a fellow, while Ritter will serve as his fellowship advisor.
The Microsoft Research Fellowship is open to faculty, students, and postdocs. Ritter said that if Microsoft sees alignment in a project, it gives recipients the opportunity to work even more closely with their collaborators by inviting them to join as additional fellows.
That turned out to be the case with Mendes after Ritter listed him as a collaborator in his fellowship proposal.
“I’m delighted to serve as Ethan Mendes’ fellowship advisor,” Ritter said. “He is an exceptionally strong researcher, and I’m excited to see his work recognized through the Microsoft Research Fellowship.”
Through the fellowship, Ritter and Mendes will design AI systems that better support collaboration and decision-making within organizations.
“The goal is to move beyond AI as a tool for a single user and instead study how AI can help groups make more informed, transparent, and coordinated decisions,” Ritter said. “We will focus on methods that bring together information from many different sources, help people reason under uncertainty, and generate analyses that support collective problem-solving in complex work settings.”
Professor Named to Sustainability Cohort
The Purple Mai’a Foundation has selected Associate Professor Josiah Hester to join its Eahou Global Immersion Cohort.
The Purple Mai’a Foundation is a technology education nonprofit headquartered in Aiea, Hawaii, that teaches coding and computer science to Native Hawaiian students.
The cohort's 29 members, drawn from 15 countries, are leaders from Indigenous communities recognized for their contributions to sustainability.
Hester is a Native Hawaiian whose research centers on sustainable and battery-free technology.
The cohort will gather on O’ahu May 1-3 for Eahou Fest, where they will share stories and solutions from research around the world.
“I’m honored to be selected for the Eahou Global Immersion Cohort and to learn alongside such an inspiring group of resilience leaders who come from around the globe,” Hester said.
“Participants are selected for their significant leadership over the past decade and their ability to bring what they learn back to their communities and integrate it into ongoing work and partnerships. I’m excited to connect these experiences with my work and bring these lessons back into research and teaching at Georgia Tech.”
Jill Watson Creator Receives AAAI Lecture Award
Professor Ashok Goel received one of the most distinguished awards from the Association for the Advancement of Artificial Intelligence (AAAI).
Goel was selected as the 20th recipient of the AAAI Robert S. Engelmore Memorial Lecture Award. Established in 2003, the award is given to those who have demonstrated excellence in AI scholarship, outstanding applications of AI, and extraordinary service to AAAI and the AI community.
Goel received the award in January during the AAAI Conference on Artificial Intelligence in Singapore. According to the awards program, Goel was recognized for contributions to biologically inspired design, case-based reasoning, and application of AI in virtual teaching.
Goel is the inventor of Jill Watson, one of the first AI virtual teaching assistants used in higher education classrooms.
AAAI is also the publisher of AI Magazine, which Goel served as editor-in-chief from 2016 to 2021.
“I am both honored and humbled to receive AAAI's Robert Engelmore Award,” Goel said. “Bob was a long-time editor of AAAI's AI Magazine, and many years after he retired, I became the editor of the magazine. This makes the Engelmore Award special to me.”
Apr. 30, 2026
By Chris Gaffney, Managing Director of the Georgia Tech Supply Chain and Logistics Institute, Supply Chain Advisor, and former executive at Frito‑Lay, AJC International, and Coca‑Cola.
In this issue:
- The real blind spot in analytics teams
- Three failures where the model was “right” and the decision was wrong
- A five-question checklist to run before anything goes to leadership
A Subtle but Growing Concern
Over the past several months, I have had conversations with senior leaders at several large, well-established supply chain organizations with strong teams responsible for Integrated Business Planning (IBP) and supply chain network design and optimization.
These teams are technically strong. They know how to build models. They are comfortable with large data sets. Many are now incorporating AI tools into their workflows.
But the same concern keeps surfacing across those conversations:
The analytical capability is improving—but the decision-making discipline around it is not keeping pace.
Analysts move quickly to building models without fully defining the business problem. Assumptions are not always surfaced or challenged. Outputs are evaluated mathematically, not operationally. And recommendations are not always translated into real-world implications.
Leaders are concerned about this and are looking for ways to address it. I share their concern because I have been in their shoes.
What the Experience Taught Us
Earlier in my career, across different roles at Coca-Cola, we did not formally teach critical thinking. We learned it through experience and often through mistakes. Three situations shaped how I think about this today.
Powerade: When the Model Works but the Thinking Doesn’t
While working with optimization groups at Coca-Cola North America, we overbuilt capacity for Powerade. The model did exactly what it was supposed to do. The problem was upstream of the model.
We took the demand forecast at face value. At the time, we deferred to the brand teams without interrogating their assumptions. We never asked what was driving the projected volume—whether the competitive dynamics supported it, whether the channel assumptions were realistic, whether pricing and distribution plans were grounded, whether overall market growth would materialize as projected.
The consequence was idle capacity, production lines that were purchased and never installed, write-offs, and a fundamental change to our process. Going forward, brand and supply chain teams were both required to sign off on future business cases. The model was technically correct. The thinking around the model had not been.
Little Rock: When Feasibility Isn’t Reality
Later, within Coca-Cola Supply, we made a network decision to close a plant in Little Rock. On paper, the remaining system had the capacity to absorb the volume. The model said so.
What the model assessed was production capacity based on rated line speeds. What it did not account for was dock and storage capacity at peak, or the practical limitations of standing up a new shift at the receiving plants. Those constraints were real. They were also invisible in the model.
In the short term, we had to source suboptimally from other plants, which directly undermined the business case we had built to justify the closure. The math was right. The operational validation was incomplete.
Mini Cans: When the Thinking Matches the Model
By the time I led the National Product Support Group, we had evolved. Decisions like the launch of mini cans required cross-functional alignment, scenario-based thinking, and a clear understanding of how demand would actually be generated across channels and routes to market.
We got that one right, not because the model was more sophisticated, but because the discipline around the model was stronger. We had learned, the hard way, to ask the questions the model could not ask for itself.
Most of the Work Is Outside the Model
There is a line I first heard from Chris Janke: "Most of the work is outside the model." He may have learned it from someone else; I don’t know the original source, but it is the framing that has stayed with me. With the advances in data and machine learning we have seen over the past decade, that proportion may be closer to 75 percent today.
We are better than ever at collecting and cleansing large data sets, processing high volumes of information, and identifying mathematical errors. But the most important work still happens outside the model: defining the right business question, building meaningful scenarios, interpreting outputs in real-world terms, and stress-testing the assumptions that drive the recommendation.
Janke captured this precisely in documenting his own experience with a modeling error that illustrated the point. An analyst had validated the math on a labor cost model—everything checked out numerically. But when the output was translated into real-world terms, it implied production workers earning roughly $300,000 per year while working approximately 60 hours total annually. The math was internally consistent. The result was operationally impossible. The question that should have been asked early: does this make sense in the context of how the business actually operates? It was not asked until after the analysis was complete.
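The check that catches this is one line of arithmetic. A sketch of the kind of translate-to-operational-units test the example implies (the benchmark thresholds below are illustrative assumptions):

```python
# Translate the model's output into operational units before accepting it.
annual_labor_cost = 300_000    # dollars/year, as produced by the model
annual_hours = 60              # hours/year, as produced by the model

implied_rate = annual_labor_cost / annual_hours
print(f"implied wage: ${implied_rate:,.0f}/hour")   # $5,000/hour

# Benchmarks (illustrative): production wages run tens of dollars per hour,
# and a full-time year is roughly 2,000 hours. Either comparison flags this.
if implied_rate > 200 or annual_hours < 500:
    print("FLAG: operationally impossible; revisit inputs and assumptions")
```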
The discipline to ask that question is not modeling skill. It is a critical thinking skill.
Where the Breakdown Happens
Before the Model: Skipping the Hard Questions
A common pattern today is that analysts move quickly to building the model. The harder and more important step, defining the business decision before the model is built, gets compressed or skipped entirely. The questions that make up that step are not complicated, but they take time and engagement to answer well:
- What business decision are we actually trying to make?
- What scenarios matter, and why?
- What does success look like—not mathematically, but operationally?
- What constraints are real versus assumed?
These questions are not as clean as coding a model. They require conversations with people who understand the constraints, not just the data. That is part of why they get skipped.
After the Model: Mistaking Mathematical Accuracy for Business Validity
This is where more serious errors occur. Model issues can usually be fixed with more time. Misinterpretation of output leads to bad decisions that are much harder to unwind.
The Powerade and Little Rock situations both illustrate this. In each case, the model was not wrong in any technical sense. What was missing was the translation layer, where someone asks, "What changes on a Tuesday night shift, at Plant B, when demand spikes 12 percent?"
That translation layer does not happen automatically. It has to be built into how teams work. And it is exactly the discipline that gets squeezed when organizations reward speed and analytical sophistication above everything else.
What Critical Thinking Actually Means in Supply Chain
Critical thinking in supply chain is not skepticism for its own sake, and it is not a soft skill that sits alongside the analytical work. It is a discipline applied to decisions, not just to models. The word itself points to what we mean: kritikos, the Greek root, means "skilled in judging, able to discern."* That is the right definition for our purposes.
It means asking whether the right question is being answered before investing in answering it well. It means making the assumptions that drive a recommendation visible and testable. It means translating analytical output into operational consequence: what actually changes, for whom, at what cost, and under what conditions the answer flips.
That discipline shows up or breaks down at four specific moments:
- Before the model is built: Is the business question defined precisely enough to model?
- While the model is running: Are the assumptions embedded in the data realistic and challenged?
- When the output is ready: Does this result make sense in how the business actually operates?
- Before the recommendation goes forward: Have we planned for how this will be received, and by whom?
When these moments are skipped, because of time pressure, overconfidence in tools, or a culture that rewards analytical speed over decision rigor, the gap between analysis and action grows. The Powerade and Little Rock situations were both failures at these moments, not failures of the models themselves.
*DeCesare, M. (2009). Casting a critical glance at teaching “critical thinking.” Pedagogy and the Human Sciences, 1(1), 73–77.
A Five-Question Diagnostic
Before an analysis or recommendation moves forward, teams should be able to answer five questions clearly. If any of them cannot be answered, the analysis is not ready—regardless of how strong the model is.

Figure 1: A Five-Question Diagnostic
These are questions that should have specific, grounded answers before a recommendation reaches leadership. If the team cannot answer question two (what assumption would flip the result), then the recommendation rests on unexamined ground. If question four cannot be answered, the change management work has not started yet.
In the Powerade situation, questions one and two were the misses. In Little Rock, it was question three. The models were not the problem. The diagnostic would have surfaced both gaps before the decisions were made.
This Gap Is Well Documented
What I am describing from my own experience is consistent with what the research shows.
A long-running finding in operations research is that many models are built and comparatively few actually drive decisions, and the breakdown is organizational, not technical. A widely cited review in the European Journal of Operational Research frames this as an implementation problem rooted in how models are connected (or not connected) to the people and processes that own the decision.
Professional credentialing bodies have recognized the same gap. The INFORMS Certified Analytics Professional blueprint explicitly lists business problem framing, stakeholder analysis, and business case development as core analytics competencies—not optional additions. The signal is clear: being analytically strong is necessary but not sufficient.
On the training side, a field study published in the European Journal of Operational Research tested the effects of structured decision training across roughly 1,000 decision makers and analysts. The results showed measurable improvement in proactive decision-making skills and decision satisfaction. The gap is real, and it is addressable. It is a training and design issue, not a talent issue.
The 4 C’s: A Decision-Focused Framework
At Georgia Tech SCL, we organize this thinking around what we call the 4 C's: Collaboration, Critical Thinking, Communication, and Change Management. Each one asks a specific question about whether the decision, not just the analysis, was made well.

Figure 2: The 4 C's: A Decision-Focused Framework
Notice what this framework does not include: model accuracy, data quality, or visualization quality. Those matter, and they are inputs to the decision. But a team can have a perfect model, a clean dataset, and a compelling dashboard and still fail all four of these tests.
The Powerade situation failed the Collaboration test: the supply chain team did not sufficiently interrogate the brand team's assumptions. Little Rock failed the Critical Thinking test: the right question was not asked about what the model was not capturing. In both cases, the Communication and Change Management failures followed directly from those upstream gaps.
When all four are present, analysis becomes a decision. When one or more is missing, the analysis and translation to a solid recommendation are at risk.
Where to Start
This topic keeps coming up in conversations with companies, in work with practitioners, and in what we hear from students as they move into industry roles.
The tools are not the problem. AI-assisted analytics, optimization models, and advanced forecasting are real assets. But tools amplify the thinking behind them. Weak decision discipline paired with better tools is just a faster path to the wrong answer.
If this shows up in your org, try the five-question diagnostic on your next recommendation before it hits leadership. If it surfaces gaps you cannot close quickly, SCL can help. We are building workshops and courseware on decision-focused critical thinking, and we will cover this in our June Lunch and Learn.
Questions or comments? Reach out to SCL.
Apr. 15, 2026
Generative artificial intelligence (AI) is best known for creating images and text. Now, it is helping industries make better planning decisions.
Georgia Tech researchers have created a new AI model for decision-focused learning (DFL), called Diffusion-DFL. Recent tests showed it makes more accurate decisions than current approaches.
Along with optimizing industrial output, Diffusion-DFL lowers costs and reduces risk. Experiments also showed it performs well across different fields.
Diffusion-DFL doesn't just surpass current methods; its accuracy advantage also holds as problem sizes grow. Despite these high-performance marks, the model requires less computing power, making it more accessible to smaller enterprises.
Diffusion-DFL runs on diffusion models, the same technology that powers DALL-E and other AI image generators. It is the first DFL framework based on diffusion models.
“Anyone who makes high-stakes decisions under uncertainty, including supply chain managers, energy operators, and financial planners, benefits from Diffusion-DFL,” said Zihao Zhao, a Georgia Tech Ph.D. student who led the project.
“Instead of optimizing around a single forecast, the model evaluates many possible scenarios, so decisions account for real-world risk and become more robust.”
To test Diffusion-DFL, the team ran experiments based on real-world settings, including:
- Factory manufacturing to meet product demand
- Power grid scheduling to meet energy demand
- Stock market portfolio optimization
In each case, Diffusion-DFL made more accurate decisions than current methods. It also performed better as problems became larger and more complex. These results confirm the model’s ability to make important decisions in real-world scenarios with noisy data and uncertainty.
The experiments also show that Diffusion-DFL is practical, not just accurate. Training diffusion models is expensive, so the team developed a way to reduce memory use, cutting the memory required for training by more than 99.7%. As a result, Diffusion-DFL can reach more researchers and practitioners.
“Our score-function estimator cuts GPU memory from over 60 gigabytes to 0.13 gigabytes with almost no loss in decision quality, reducing the requirement for massive computing resources,” Zhao said. “I hope this expands Diffusion-DFL into other domains, like healthcare, where decisions must be made quickly under complex uncertainty."
Beyond decision-making applications, Diffusion-DFL marks a shift in DFL techniques and in the broader use of generative AI models.
In supply chain management, for example, planners estimate future demand before deciding how much product to stock. In this kind of DFL problem, engineers align machine learning models with predetermined decision objectives, like minimizing risk or reducing costs.
One flaw of DFL methods is that they optimize around a single, deterministic prediction in an uncertain future.
Diffusion-DFL takes a different approach. Instead of making a single guess, it determines a range of possible outcomes. This leads to decisions based on many likely scenarios, rather than on a single assumed future.
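A toy stocking problem makes the difference concrete. In the sketch below, a plain random sampler stands in for the trained diffusion model (an illustrative assumption; the real framework learns the scenario distribution from data), and the asymmetric costs are made up:

```python
import random

rng = random.Random(0)
UNDERSTOCK_COST, OVERSTOCK_COST = 5.0, 1.0   # illustrative, asymmetric costs

def cost(stock: float, demand: float) -> float:
    short, excess = max(0.0, demand - stock), max(0.0, stock - demand)
    return UNDERSTOCK_COST * short + OVERSTOCK_COST * excess

# Stand-in for the diffusion model: sample many possible demand futures.
scenarios = [rng.gauss(100, 30) for _ in range(2000)]

# Prediction-focused: optimize against the single point forecast (the mean).
point_stock = sum(scenarios) / len(scenarios)

# Decision-focused: choose the stock level with the lowest cost averaged
# over all sampled scenarios.
avg_cost = lambda s: sum(cost(s, d) for d in scenarios) / len(scenarios)
dfl_stock = min(range(50, 201), key=avg_cost)

print(f"point-forecast stock {point_stock:.0f}: avg cost {avg_cost(point_stock):.1f}")
print(f"scenario-based stock {dfl_stock}: avg cost {avg_cost(dfl_stock):.1f}")
```

Because running short costs five times as much as overstocking here, the scenario-based plan stocks well above the mean forecast and ends up cheaper on average, which is exactly the risk-aware behavior the article describes.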
To do this, the framework uses diffusion models, the class of generative AI models behind high-quality image, text, and audio synthesis.
The forward diffusion process adds noise to data until it becomes pure noise. A model trained on this process learns to reverse it: starting from pure noise, it iteratively denoises until it produces realistic samples that resemble its training data.
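The forward step has a simple closed form. This standard DDPM-style sketch (a generic illustration, not specific to Diffusion-DFL) shows a single scalar data point being noised at different levels:

```python
import math, random

rng = random.Random(0)

def forward_noise(x0: float, alpha_bar: float) -> float:
    """Closed-form forward diffusion: x_t = sqrt(a)*x0 + sqrt(1-a)*eps.
    As alpha_bar approaches 0, the data point becomes pure Gaussian noise."""
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

x0 = 2.5   # one scalar data point, e.g., a single demand value
for alpha_bar in (0.99, 0.50, 0.01):
    print(f"alpha_bar={alpha_bar}: x_t = {forward_noise(x0, alpha_bar):+.2f}")

# Training teaches a network to predict eps from the noised sample; sampling
# then runs the learned reversal from pure noise back toward realistic data.
```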
Real-world data is often noisy and uncertain. Traditional DFL methods struggle in these conditions, but diffusion models are designed to handle them.
Because of this, Diffusion-DFL can explore many possible outcomes and choose better actions. Like image-generation AI, the model works well with complex data from different sources. This enables its use across different industries.
“Diffusion models have achieved significant success in generative AI and image synthesis, but our work shows their potential extends far beyond that,” said Kai Wang, an assistant professor in the School of Computational Science and Engineering (CSE).
“What makes Diffusion-DFL unique is that the specific downstream application guides how the model learns to handle uncertainty.
“Whether we are scheduling energy for power grids, balancing risk in financial portfolios, or developing early warning systems in healthcare, we can explicitly train these highly expressive models to navigate the unique complexities of each domain.”
Zhao and Wang collaborated with Caltech Ph.D. candidate Christopher Yeh and Harvard University postdoctoral fellow Lingkai Kong on Diffusion-DFL. Kong earned his Ph.D. in CSE from Georgia Tech in 2024.
Wang will present Diffusion-DFL on behalf of the group at the upcoming International Conference on Learning Representations (ICLR 2026). Held April 23-27 in Rio de Janeiro, ICLR is one of the world's most prestigious conferences dedicated to artificial intelligence research.
“ICLR is the perfect stage for Diffusion-DFL because it brings together the exact community that needs to see the bridge between generative modeling and high-stakes decision-making for real-world applications,” Wang said.
“Presenting Diffusion-DFL allows us to challenge the traditional training framework of diffusion models. It’s about sparking a broader conversation on how we can align the training objectives of generative AI directly with actual, downstream decision-making needs.”
News Contact
Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu
Apr. 21, 2026
Walton County, Georgia, didn’t ask to become a test case for the artificial intelligence (AI) infrastructure boom. Meta, the company behind Facebook, Instagram, and WhatsApp, made the decision for them.
In 2018, the company broke ground in Social Circle, a small town an hour east of Atlanta with about 5,000 residents, to build one of its largest U.S. data centers. It opened in 2020.
Local officials called it a win. Shane Short, president and CEO of the Development Authority of Walton County, said the facility generates about $10 million annually in property tax revenue and has led to road improvements and expanded broadband.
Electric vehicle maker Rivian followed Meta’s lead and began construction on a plant near Social Circle in September 2025, adding to the area’s rapid industrial growth.
But for residents, the shift from a largely rural, agricultural economy to an energy-intensive industrial one has put new pressure on power and water systems.
“They’re seeing higher water and power bills, worse air quality, and very few jobs in return for this, while large corporations get tax benefits,” said Ahmed Saeed, an assistant professor in Georgia Tech’s School of Computer Science, describing why residents in some communities push back on new data center development.
Saeed and Josiah Hester, associate professor of interactive computing and computer science and director of the Center for Advancing Responsible AI, have spent the past year studying the energy, water, and financial demands associated with these facilities, and how those costs are distributed.
Betting on Demand
AI data centers run on specialized chips that use large amounts of electricity. That power generates heat, which requires energy- and water-intensive cooling.
The state is adding capacity based on expected demand, not current use.
Last year, the Georgia Public Service Commission approved an estimated $16 billion expansion for Georgia Power to support that growth. The expansion is expected to add about 10 gigawatts of generating capacity, enough to power roughly 7.5 million homes.
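A back-of-envelope check on that homes figure, assuming an average U.S. household uses roughly 10,800 kWh per year (an assumed benchmark, not a figure from the article):

```python
capacity_gw = 10
hours_per_year = 8760
household_kwh_per_year = 10_800     # assumed U.S. average annual usage

annual_kwh = capacity_gw * 1e6 * hours_per_year   # GW -> kW, times hours
homes = annual_kwh / household_kwh_per_year
print(f"{homes / 1e6:.1f} million homes")          # ~8.1 million, consistent
                                                   # with the ~7.5M figure
```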
If that demand materializes, the electricity is used. If it doesn’t, the cost still has to be paid.
Grid Stability
“Those workloads can put a very large demand on the grid all at once, and then remove it just as quickly,” Saeed said. “That sudden change is difficult for the system to handle.”
That volatility is a separate issue.
Even if data center operators pay for the infrastructure they use, large swings in demand can still strain grid operations, especially during peak periods or extreme weather.
What Comes Next
Back in Walton County, the Meta facility is already attracting additional data centers.
Each new site adds power and water infrastructure designed to operate for decades.
The servers inside need to be upgraded every few years.
Saeed and Hester said if Georgia wants to remain an AI and cloud hub, the state needs to set the terms and companies need to meet them.
That starts with disclosure — how much power data centers draw from the grid, how that demand spikes, and how much water they use. It includes clear expectations for how those facilities respond when the grid is under stress, and protections for the communities where they’re built.
The researchers maintain that “build it and hope” is not a strategy.
News Contact
Michelle Azriel
Sr. Writer-Editor
Research Communications
mazriel3@gatech.edu
Apr. 20, 2026
How did the earliest life on Earth build complex biological machinery with so few tools? A new study explores how the simplest building blocks of proteins — once limited to just half of today’s amino acids — could still form the sophisticated structures life depends on.
The paper, "The Borderlands of Foldability: Lessons from Simplified Proteins," is a meta-analysis of six decades of protein research and reveals that ancient proteins may have been far more complicated and dynamic than previously thought.
Recently published in the journal Trends in Chemistry, the study includes Georgia Tech researchers Lynn Kamerlin, professor in the School of Chemistry and Biochemistry and Georgia Research Alliance Vasser-Woolley Chair in Molecular Design, and Quantitative Biosciences Ph.D. candidate Alfie-Louise Brownless.
Co-authors also include Institute of Science Tokyo graduate student Koh Seya and Liam M. Longo, who serves as a specially appointed associate professor at Science Tokyo and as an affiliate research scientist at the Blue Marble Space Institute of Science.
The research has implications ranging from the origins of life and the search for life in the universe to cutting-edge medical innovation. “One of the biggest unanswered questions in science is how life first began,” says Kamerlin, who is a corresponding author of the study. “Understanding how the first protein-like molecules formed and what the earliest proteins may have been like is a key part of that puzzle.”
“Proteins power our bodies — and all life on Earth,” she adds. “Simply put, the evolution of proteins is the reason that we’re able to have this conversation at all.”
A Protein Folding Paradox
If proteins are the scaffolding of life, amino acids are the components that make up that scaffolding. "Today, an average protein is constructed from a chain of about 300 amino acids, involving 20 different types of amino acids," Kamerlin shares. Proteins fold when these chains twist into a specific three-dimensional shape, creating structures critical for biology.
However, while these folds are essential, exactly how a protein knows which way to fold remains a mystery. “We know that proteins didn’t just fold randomly,” Kamerlin shares, “because randomly trying all possible configurations would take a protein longer than the age of the universe.”
It's a cornerstone problem in biological science called "Levinthal's paradox," and it highlights a fundamental mystery: proteins fold incredibly quickly into very specific shapes, like a sheet of paper spontaneously folding itself into an origami swan, yet researchers don't know how proteins "choose" the folds they make.
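The back-of-envelope arithmetic behind that claim is a textbook exercise. The numbers below (chain length, conformations per residue, sampling rate) are standard illustrative assumptions, not figures from this study:

```python
# Levinthal's estimate: even a modest chain has an astronomically large
# conformation space, so folding cannot be a random search.
residues = 100                 # a small protein (assumed)
states_per_residue = 3         # backbone conformations per residue (assumed)
samples_per_second = 1e13      # conformations tried per second (assumed)
age_of_universe_s = 4.3e17     # about 13.8 billion years, in seconds

configurations = states_per_residue ** residues      # 3**100, about 5e47
search_time_s = configurations / samples_per_second  # about 5e34 seconds

print(f"configurations: {configurations:.1e}")
print(f"exhaustive search: {search_time_s:.1e} s, or "
      f"{search_time_s / age_of_universe_s:.1e} times the age of the universe")
```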
“We can predict what a protein will look like, but can’t tell you how it got there,” Kamerlin adds. “That’s what we’re interested in exploring: how small early proteins developed into the complex proteins that support every living thing on today’s Earth.”
Simple Letters, Sophisticated Structures
Early proteins likely had access to just half of today’s amino acids. “About 10-12 amino acids were likely available on early Earth,” Kamerlin says. Like writing a story with just the letters “A” through “L,” researchers assumed that the ‘vocabulary’ proteins could build from such a limited amino acid alphabet would also be constrained.
“There is a language to protein folding,” Kamerlin explains. “That language is hidden in their structures. Our research is in trying to understand the rules — the grammar and vocabulary that dictate a protein fold.”
The grammar they discovered was surprising: with a combination of creative techniques and environmental support, complex structures can arise from limited amino acid alphabets.
“We found that it is possible to develop complex folds with very simple tools — and certain environments, like salty ones, can help support that,” Kamerlin shares. “Early proteins could also cross-link and associate, interacting like LEGO blocks to create more complex structures.”
Pioneering Proteins
Now, the team is conducting research in environments that could mimic conditions on early Earth — aiming to discover more about how these regions could have given rise to today’s complex proteins. “This aspect of our research also ties into the amazing space research happening at Georgia Tech,” Kamerlin says. “While we’re interested in understanding early life on Earth, our work could help inform where best to look for evidence of life beyond our planet.”
Kamerlin specializes in creating computer models that simulate possible scenarios – creating an opportunity to quickly and efficiently test many theories. The most compelling of these can then be tested by her collaborator and co-author at Science Tokyo, Liam Longo, in lab experiments.
Protein folding is also at the forefront of medical innovation, ranging from diagnostic tools to cancer treatments and neurodegenerative diseases. “In the broader scope, we’re interested in discovering what we can design, what we can stress test, and what we can reconstruct with AI and other computational tools,” Kamerlin says. “Because if you can understand how proteins fold, you gain the ability to design them.”
Funding: NASA, the Human Frontier Science Program, and the Knut and Alice Wallenberg Foundation
Apr. 13, 2026
Artificial intelligence has been touted as the most transformative technology of our time. With only a few years of mainstream use, it’s changed how we work and communicate, generated billions of dollars in investments, and sparked global debate. But according to leading neuroethics expert Karen Rommelfanger, the race isn’t over yet.
“Can you think of a more transformative technology than one that intervenes with the fundamental organ that drives your experience in the world?”
That fundamental organ is the brain.
Technologies interfacing directly with the brain have been reserved for treating severe injury or disease for decades. Now, neurotechnology is expanding into brain-responsive wearables meant to enhance, augment, and monitor everyday life. As these technologies accelerate and AI is incorporated, the question is no longer if neurotechnology will transform society, but how — and who will shape the boundaries.
These are some of the questions on which Karen Rommelfanger has built her career. Trained as a biomedical researcher and neuroscientist, Rommelfanger went on to found the Institute for Neuroethics, the world's first think-and-do tank devoted entirely to neuroethics, public engagement, and policy implementation.
“The brain is special; it’s central to who we are,” says Rommelfanger, who was also an inaugural recipient of the Dana Foundation Neuroscience and Society Award. “And that means when you intervene with the brain, there are unique responsibilities. The field of neuroethics addresses things like: How do you ensure mental privacy? How do you protect free will? How do you ensure that people have the power to be narrators of their own lives and their cognitive experience?”
Now, Rommelfanger is joining Georgia Tech’s Institute for Neuroscience, Neurotechnology, and Society (INNS) as a professor of the practice, where she will work to further embed neuroethics into Georgia Tech’s research and technology development ecosystem.
“Georgia Tech is producing the next generation of neurotechnologists, and Karen’s expertise will help ensure we’re preparing them to think about societal impact as deeply as they think about the technical and scientific aspects of their work,” says Christopher Rozell, executive director of INNS. “Her leadership strengthens the Institute in exactly the way this moment in neurotechnology demands.”
“Georgia Tech has many, many ways that it leads in the technology ecosystem. But one of the powerful, unique ways it can lead is through neurotechnology,” says Rommelfanger. “I hope that the INNS, given its unique mandate for neuroscience, neurotechnology, and society, can be a lighthouse for these types of conversations.”
Neuroethics by Design
From institutional review boards to mandatory responsible research conduct training, ethics are a foundational part of scientific research. But designing neurotechnologies raises ethical challenges beyond the scope of typical training. What happens when discoveries leave the lab and enter people’s lives?
That question sits at the core of Rommelfanger’s work. She argues it’s a neurotechnologist’s responsibility to recognize and proactively address the need for unique safeguards for privacy, autonomy, and long-term responsibility. Her solution is to move neuroethics upstream, embedding it directly into the research, design, and deployment of neurotechnology through an approach she calls “neuroethics by design.”
“Neuroethics by design considers ethics as a core criterion where principles can drive innovation with more of a lens toward societal outcomes,” she says — an approach informed by years of advising national-level brain research initiatives and her experience at the intersection of clinical practice and ethics scholarship.
Rather than treating ethics as a compliance checklist or a post hoc review, neuroethics by design integrates ethical thinking throughout the entire innovation lifecycle, from early ideation and research questions to product requirements, governance strategies, and long-term sustainability. She has used the approach for years as an embedded partner for neurotechnology startups in her neuroethics consultancy, Ningen Co-Lab.
After decades as a traditional academic professor and then years advising companies and policymakers with this philosophy, Rommelfanger says Georgia Tech is the right place to scale this work. With its strength in neurotechnology and INNS’s rare focus on neuroscience and society, “I could not think of a better place to launch and pilot this neuroethics by design scaling effort.”
She will work with INNS to help equip researchers, students, and industry partners with practical tools for ethical decision-making. Her vision is not to create neuroethicists as a standalone profession, but to cultivate ethically engaged neurotechnologists and engineers.
Central to her plans at INNS are hands-on training programs that bring ethics out of the abstract and into practice. “I wanted to be a professor of the practice because, while the field does need more scholars, what it really needs most at this point are practitioners.”
Rommelfanger is exploring modular content that can be embedded into existing courses across disciplines, as well as immersive training — such as neuroethics boot camps and problem-solving hackathons — that bring together students, faculty, and professionals to tackle real-world challenges collaboratively.
“No one discipline can solve all the ethical challenges ahead,” says Rommelfanger. She is particularly interested in creating spaces where experts from across science and engineering, policy and law, design and the arts, and philosophy can work side by side with people with lived experience of neurological conditions. “The onus is not on scientists alone, but is a shared responsibility that benefits immensely from dialogue, accountability, and action across diverse communities.”
By situating neuroethics within Georgia Tech’s broader research ecosystem, Rommelfanger hopes INNS can help shift how the field evolves globally.
“It's really difficult to get your arms around something once it's out of the gate,” she says, citing the rapid adoption of AI without proper ethical or policy guidelines. “With neurotechnology, we still have a little bit of time, but not that much time. We are at that moment where we could change the course of global history.”
News Contact
Audra Davidson
Research Communications Program Manager
Institute for Neuroscience, Neurotechnology, and Society (INNS)