Montash Managing Consultant Ash Fellows recently spoke with Brad Heller, Co-Founder and CTO of Tower.dev, about what engineering leaders are seeing across cloud infrastructure and AI systems in 2026.
Brad previously ran the control plane team at Snowflake and now focuses on building developer tooling that helps data engineers work with more composable, integrated infrastructure.
The discussion reflects many of the themes raised across the industry this year, including in conversations at events such as Tech Show London.
In this conversation, he shares his perspective on:
• How AI is changing engineering productivity
• The ongoing shortage of senior infrastructure engineers
• The growing role of smaller language models
• What engineering leaders should prioritise next
How AI Is Changing Development (But Not Infrastructure)
There’s a sharp contrast between how much AI has changed software development and how little it has changed infrastructure.
AI tools are already increasing developer productivity: engineers are writing code faster, and strong engineers are becoming even more effective.
But the infrastructure side of engineering has not changed in the same way.
“The process of building and managing infrastructure has not gotten any better…it’s completely untouched frankly.”
Teams may be shipping code faster, but the work of operating infrastructure has not become simpler.
Brad believes infrastructure will need to evolve to better support AI systems. That means platforms that are:
• More AI-native, built with AI workloads in mind
• More headless and API-driven, making them easier for systems and tools to interact with
• Better integrated with developer workflows, so infrastructure fits more naturally into how engineers already work
Why Hiring Senior Infrastructure Engineers Is Still the Biggest Constraint
When discussing bottlenecks in AI infrastructure, Brad says the biggest challenge isn’t hardware.
From his perspective, the real constraint across the industry is still hiring experienced engineers who understand infrastructure. In particular, he points to roles such as platform engineers and distributed systems engineers: people who work deeper in the stack and understand how to build products on top of infrastructure.
Brad also pushes back on the idea that AI reduces the need for senior engineers.
“The way we differentiate senior engineers from principal engineers is their ability to reason about trade-offs in different approaches to solving a problem.”
AI tools can increase productivity, but the biggest gains come when experienced engineers know how to use them well and communicate what they produce.
Why Smaller Language Models Are Getting More Attention
Rather than relying only on large, general-purpose models, Brad is seeing increasing interest in smaller language models for specialised tasks.
At Tower.dev, these models are being used to support agent-based workflows and specialised workloads.
“Small language models have been super interesting… both from a cost perspective, but also because you need fewer GPUs.”
He has also noticed the same trend when speaking with engineers across London, particularly in sectors such as fintech.
Some teams are experimenting with relatively small models to support highly specific decisions or workflows.
“I have the same conversation every week with guys in fintech… they’re squeezing these small models to make better trading decisions.”
In many cases, these models are not trying to compete with large frontier models. Instead, they are being tuned to perform well on a narrow task.
That approach can make them easier to run, cheaper to operate, and more practical for certain production environments.
The Cost of AI Infrastructure Is Still Playing Out
On the question of balancing cost and performance, Brad is blunt:

“The dirty little secret in the industry right now is that no one actually is [balancing cost and performance]. It’s an arms race.”
Brad compares the current moment to the early days of cloud adoption. At that time, companies were moving infrastructure into the cloud and building new systems quickly, often before they fully understood the long-term cost implications.
A similar pattern is emerging with AI.
Many organisations are currently absorbing high token costs from model providers.
“Everyone is beholden to these massive token bills… it’s just the cost of doing business right now.”
Brad also points to another shift happening across the software stack. AI capabilities are increasingly being embedded into SaaS products, which in many cases is pushing subscription costs higher.
He gives the example of AI features being added across productivity tools, even when users may not actively rely on them.
Over time, Brad expects companies to pay closer attention to how these systems are used and where efficiency gains can be made.
For now, the focus for many organisations is still on experimentation and speed.
Advice for Engineering Leaders
AI tools are changing how engineers work, and they’re not going away. Brad’s advice to leaders is to learn how to work with them and give teams space to experiment.
He also emphasises the value of engineers who are willing to test ideas and challenge their own assumptions as teams explore new tools.
Ultimately, organisations that actively engage with AI will move faster than those that hesitate.
“You want to be remembered as one of the companies that leaned in, not one of the companies that leaned out.”
For engineering leaders, the priority is not reacting to every new development, but helping teams build the skills and confidence to work with the tools that are now part of the ecosystem.
Are you navigating the challenges of scaling AI infrastructure and platform teams?
If you’d like to discuss how other organisations are approaching AI infrastructure hiring, platform engineering team design, or the growing demand for senior distributed systems engineers, contact Ash Fellows (ash.fellows@montash.com) for a second perspective on how to build the technical leadership needed to support AI systems in production.