The Next Industrial Revolution
Industrial AI - one of the most exciting opportunities of the next 10 years
One of the most surprising things about recent advances in AI is that, rather than the media covering advances across many different industries at once, we tend to see concentrated bursts of publicity around specific domains.
I concede this could be due to the way the media organises itself, since competitive narratives make for good stories, or it could simply be a reflection of my perspective inside the Venture Capital echo chamber.
One downside of this, however, is that it funnels attention into fields such as foundation models, customer service, and software development - areas where plenty of work is already being done.
This article highlights what I see as one of the highest-potential opportunities that remains largely untouched.
If you’re considering investing in this space, starting a related company, or simply want to discuss the topic further, feel free to reach out.
Industrial AI
Industrial AI is the application of advanced AI techniques to complicated problems in industrial fields. Examples include airflow modelling for aircraft, electrical grid optimisation, and chemical simulations.
This stands in stark contrast to the majority of AI applications we see today - beautifully designed, convenient, and entertaining solutions focused on making life better through quick fixes to relatively straightforward problems and requests.
Sure, maybe I could take the time to read a long technical article and summarise the key insights myself, but Claude is just so much more convenient and pleasant to use.
Ok, maybe I should take the time to debug my C++ myself, but I’d really rather just plug it into ChatGPT and ask it for the corrected version.
But unfortunately, I can’t (at least not yet!) ask a foundation model to integrate 25 data sources with billions of data points and predict in real time - reliably enough for safety-critical decisions - exactly when my electrical grid will start to deteriorate over the next 10 years.
Industrial AI systems share some important characteristics:
Usually integrate with a large number of data sources and processes
Solve complicated technical problems
Handle enormous amounts of data
Must be incredibly robust and reliable
Usually must operate within complex safety regulations
Typically need to be custom built for the use case (more on product packaging later)
Simulation
While there are a number of interesting categories within Industrial AI, the one I have been obsessing over most lately is simulation.
Here are a few key reasons:
The market is massive - projected to reach $36.22 billion by 2030
It powers some of our most important processes - from the way we design and build cars to the way we identify and test new drugs
It’s dominated by legacy players - the main companies in the space are slow-moving giants that captured the vast majority of the market long ago and now have little incentive (or ability) to keep innovating
The largest bottlenecks in the space are now potentially solvable by novel AI approaches (keep reading for the reasoning behind this)
The Current Biggest Challenges In Simulation
Let’s start with the main bottlenecks currently faced in the space.
Most simulations are slow
This is especially true for high-fidelity simulations, such as those used in Computational Fluid Dynamics (CFD) or Finite Element Analysis (FEA). Waiting days for simulation results makes the iterative workflows needed for real-time decision making all but impossible.
Higher fidelity = incredibly large computational demand
Simulations with higher fidelity - those requiring more detailed and accurate models, such as meshes with hundreds of millions of points - demand immense computational resources, making them impractical to run on standard hardware. The cost grows far faster than the resolution: for a typical explicit 3D solver, halving the mesh spacing multiplies the cell count by eight and roughly doubles the number of time steps required, so each refinement level costs around 16x more.
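As a quick illustration (the baseline numbers are made up purely for this sketch):

```python
# Back-of-the-envelope scaling for a 3D explicit solver: halving the mesh
# spacing multiplies the cell count by 2**3, and the CFL condition roughly
# doubles the number of time steps, so each refinement level costs ~16x.
base_cells = 1_000_000                       # made-up baseline mesh size
for level in range(4):
    cells = base_cells * 8 ** level
    relative_cost = 8 ** level * 2 ** level  # cells x time steps vs level 0
    print(f"refinement level {level}: {cells:>13,} cells, ~{relative_cost}x cost")
```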
Difficult to simulate over long time horizons
Some of the most interesting simulations, such as structural fatigue analysis on utility infrastructure, span months or even years, making them exceptionally resource intensive.
Can’t generalise well
Once a simulation has been built for a specific scenario, reconfiguring it for new situations is highly labour intensive and inefficient. This inability to generalise easily severely limits scalability.
These are significant challenges that have historically hindered the simulation landscape - but recent technical advances may offer interesting solutions.
Surrogate Models
Surrogate models may be about to radically reshape the simulation space by addressing all of the challenges above.
Surrogate models are neural networks that learn concise approximations of complex systems.
Rather than solving the rigorous underlying equations that govern the physical world, they model their behaviour on previously recorded input-output relationships.
It is worth mentioning that these approximations need not compromise on accuracy: with high-quality training data, and with computational resources deliberately focused on the most important aspects of the system, we can maintain incredibly high levels of accuracy.
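To make this concrete, here is a minimal sketch of the core idea in PyTorch. Everything below is illustrative: run_expensive_solver is a stand-in for a real high-fidelity simulator, and the tiny MLP is a stand-in for a production architecture.

```python
# Minimal surrogate-model sketch: learn a cheap mapping from simulation
# inputs to outputs, using pairs generated by a slow, expensive solver.
import torch
import torch.nn as nn

def run_expensive_solver(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for a slow, high-fidelity simulation.
    return torch.sin(3 * x) + 0.5 * x ** 2

# 1. Build a training set of input-output pairs from past solver runs.
inputs = torch.linspace(-2, 2, 512).unsqueeze(1)
targets = run_expensive_solver(inputs)

# 2. A small MLP learns to imitate the solver.
surrogate = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

optimiser = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for epoch in range(2000):
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(surrogate(inputs), targets)
    loss.backward()
    optimiser.step()

# 3. Inference is now a single forward pass - fractions of a millisecond
#    instead of the hours or days the original solver might take.
with torch.no_grad():
    prediction = surrogate(torch.tensor([[0.5]]))
```

The last step is the whole point: once trained, the surrogate answers in a single forward pass, while the solver it imitates might run for hours.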
In certain cases, we can even make use of Physics-Informed Neural Networks (PINNs), which build the underlying physics into the training objective to ground results in reality.
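For the curious, here is a toy sketch of how a PINN folds physics into training: the loss penalises the network both for violating the governing equation and for missing the boundary condition. The ODE below is deliberately trivial (du/dx + u = 0 with u(0) = 1, whose exact solution is exp(-x)); real PINNs apply the same trick to PDEs such as Navier-Stokes.

```python
# Toy Physics-Informed Neural Network (PINN) sketch for du/dx + u = 0.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)

# Collocation points where the physics residual is enforced.
x = torch.linspace(0, 2, 128).unsqueeze(1).requires_grad_(True)

for step in range(3000):
    optimiser.zero_grad()
    u = net(x)
    # Differentiate the network output w.r.t. its input via autograd.
    du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()        # residual of du/dx + u = 0
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
    loss = physics_loss + boundary_loss
    loss.backward()
    optimiser.step()
```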
So what are the advantages of these surrogate models?
Near real-time processing
Because their behaviour is learned from previously recorded input-output relationships, these models can approximate simulation results with a fraction of the computational resources. This speeds up the iteration loop to the point where it effectively becomes real time (a massive unlock).
Breaking the fidelity-to-computational-demand scaling relationship
Surrogate models can compress simulation representations while maintaining accuracy, capturing a system’s essential aspects with far fewer parameters. One way to do this is through AI-boosted adaptive meshing, where computational resources are focused on the most critical regions of the simulation (sketched below).
All this to say: we may be able to get incredible resolution without an accompanying, incredibly large computational demand.
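Here is a rough sketch of what refinement driven by a learned error estimator could look like. The scoring function below is a simple gradient-based placeholder; in a real system it would be a trained network predicting where the solution is hardest to resolve.

```python
# Sketch of adaptive mesh refinement driven by a (placeholder) estimator.
import numpy as np

def estimate_refinement_score(field: np.ndarray) -> np.ndarray:
    # Placeholder: local gradient magnitude as a proxy for solution
    # difficulty. A trained error-prediction network would go here.
    gy, gx = np.gradient(field)
    return np.hypot(gx, gy)

def refine_mask(field: np.ndarray, budget_fraction: float = 0.2) -> np.ndarray:
    # Mark only the top `budget_fraction` of cells for refinement, so
    # compute is concentrated on the most critical regions.
    score = estimate_refinement_score(field)
    threshold = np.quantile(score, 1.0 - budget_fraction)
    return score >= threshold

coarse_field = np.random.rand(64, 64)   # stand-in for a coarse solution field
cells_to_refine = refine_mask(coarse_field)
print(f"Refining {cells_to_refine.mean():.0%} of cells")
```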
Ability to simulate over much longer time periods
AI time-stepping techniques will enable us to simulate over much longer time periods by reducing the need to compute every intermediate state. This will be a massive unlock for long-term simulations without exorbitant computational requirements; the rollout sketch below illustrates the idea.
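A sketch of the rollout side of that idea, assuming a Stepper network has already been trained on solver trajectories (all names and dimensions below are hypothetical):

```python
# Autoregressive time-stepping surrogate: each forward pass jumps the
# system state forward by a large stride instead of many tiny solver steps.
import torch
import torch.nn as nn

class Stepper(nn.Module):
    """Maps state(t) directly to state(t + BIG_DT)."""
    def __init__(self, state_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, state_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

stepper = Stepper()             # assume trained on recorded solver trajectories

state = torch.randn(1, 128)     # illustrative initial condition
trajectory = [state]
for _ in range(100):            # 100 big jumps instead of ~1e6 solver steps
    with torch.no_grad():
        state = stepper(state)
    trajectory.append(state)
```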
Generalising better than ever before
While not an inherent advantage of surrogate models alone, the broader development of AI technology means we now have access to high-quality, diverse datasets at a scale we have never had before.
Better-trained models can therefore generalise better than ever, broadening their applicability without extensive retraining. Customisation can still be handled through transfer learning and domain adaptation methods (a minimal fine-tuning sketch follows).
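As a sketch of what that customisation could look like in practice, here is the standard transfer-learning recipe: freeze a backbone pretrained on a broad simulation corpus and fine-tune only a small head on the new domain's much smaller dataset. All shapes and data below are placeholders.

```python
# Transfer-learning sketch: adapt a pretrained surrogate to a new domain.
import torch
import torch.nn as nn

# Assume `backbone` was pretrained on a large, diverse simulation corpus.
backbone = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 128))
head = nn.Linear(128, 8)         # small task-specific output layer

for param in backbone.parameters():
    param.requires_grad = False  # keep the general-purpose features fixed

optimiser = torch.optim.Adam(head.parameters(), lr=1e-3)

# Hypothetical small dataset from the new domain.
new_inputs, new_targets = torch.randn(256, 32), torch.randn(256, 8)
for epoch in range(200):
    optimiser.zero_grad()
    preds = head(backbone(new_inputs))
    loss = nn.functional.mse_loss(preds, new_targets)
    loss.backward()
    optimiser.step()
```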
Pushbacks, Challenges, and Opportunities
I think it is important to also highlight some key topics that often come up in my discussions with friends about this space.
Is this really only possible now?
In a world where AI is presented as the perfect solution to every problem, is this really a case where recent AI advances genuinely enable the simulation industry to transform?
I would say yes, due to a number of factors including: improved model architectures, access to high quality data, advances in modern hardware enabling accelerated model training, and breakthroughs in transfer learning.
What are the biggest challenges?
For businesses entering the space, there are a number of important things to figure out. Many of the largest challenges I have been thinking about revolve around product packaging, GTM approaches, and managing customer use cases:
On-premises or cloud deployment?
High touch point managed service approach or SaaS application style?
Highly specific one-off use case models or more broadly applicable?
Exclusivity to specific customers?
What do future directions and opportunities look like?
If you are excited about the space and want to dig into some of my favourite rabbit holes, here are a few good ones:
Surrogates that can dynamically switch between fidelity levels depending on computational resources or accuracy requirements
Surrogates that can generalise incredibly well across highly disparate domains, reducing the need for retraining for each new application
Surrogates that adaptively allocate computational resources during simulations based on the complexity of different regions or phases
Surrogates that not only provide predictions but also quantify uncertainty and ensure reliability, which would be key for safety-critical applications (see the ensemble sketch after this list)
The ability to run surrogate models on edge devices to enable real-time, low-latency predictions directly at the point of need (e.g., on a wind turbine or autonomous vehicle)
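On the uncertainty point specifically, one simple and widely used approach is a deep ensemble: train several independently initialised surrogates and treat the spread of their predictions as an uncertainty signal. A minimal sketch with toy data and an illustrative architecture:

```python
# Deep-ensemble sketch: prediction spread as an uncertainty estimate.
import torch
import torch.nn as nn

def make_model() -> nn.Module:
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

def train(model, inputs, targets, steps=1000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(model(inputs), targets).backward()
        opt.step()
    return model

inputs = torch.linspace(-1, 1, 256).unsqueeze(1)
targets = torch.sin(3 * inputs)          # stand-in for simulator outputs

ensemble = [train(make_model(), inputs, targets) for _ in range(5)]

x_new = torch.tensor([[0.3]])
with torch.no_grad():
    preds = torch.stack([m(x_new) for m in ensemble])
mean, std = preds.mean(), preds.std()    # large std flags low confidence
```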
I hope you found this interesting! Feel free to reach out to chat a bit more about any of the points mentioned (especially so if you disagree with something I said!).


