skip navigation
skip mega-menu

From Code to Intelligent Systems: Reflections from AWS Summit London

Nimble at AWS Summit London

By Adam Slack, Head of Engineering at Nimble Approach

I spent the day at AWS Summit London on 22nd April – moving between keynotes, sessions, demos, and conversations across the ecosystem.

Stepping back afterwards, a few things stuck out. Not just the volume of AI content, but the shape of the conversation around it. A lot of what was being presented as the direction of travel felt very familiar. Not because it was old news, but because many of the hard parts being described are the same problems our teams are overcoming when building these systems for real.

Here are some of the key takeaways.

Familiar Problems, Now Becoming Products

One of the most noticeable things about the day was how much of it reflected issues we have already had to work through in practice.

At Nimble, we’ve been working closely with partners and customers on exactly these kinds of problems. Having recently deployed containerised agents into production using A2A-style patterns on ECS, much of what AWS is now talking about around agent runtimes, orchestration, observability, governance, and the operational realities of these systems does not feel theoretical. These are the same areas where teams are having to make real design decisions today.

What seems to be changing now is that AWS is starting to package more of those answers into products.

Even with this shift, much of today’s engineering effort still lives at the edges – integrating services, establishing patterns, designing workflows, managing observability, and putting guardrails in place so the system behaves predictably. As more of this becomes managed or standardised, the space becomes more accessible without obscuring the underlying complexity.

The Opportunity Is Real, and So Is The Complexity

What also came through strongly for me is that the barriers here are not just technical in the narrow sense. There’s a dense roadmap ahead for many organisations to keep up – not only in the UK.

Regional differences across AWS will present real roadblocks for many systems that need to operate at scale. The managed story may be improving, but it is still shaped by where services are available, where capabilities arrive first, and what that means for organisations dealing with practical deployment constraints.

Cost controls and quotas are another very real barrier to entry. There are already digital skills shortages in the UK and globally, and a lot of what was discussed will only increase demand for people who understand not just AI tooling, but the engineering, architecture, operations, and governance that sit around it.

It also requires a genuine shift in how you think about problem solving. Working with unpredictable data, self-organising systems, and designing for that sort of flexibility is architecturally different from what many teams are used to building. That is part of why this space is so interesting, but it is also why it is hard.

For organisations with the right conditions, particularly those with fewer data sovereignty constraints, the opportunity is significant. But there is still a world of difference between a compelling proof of concept and a production-grade intelligent system that is secure, observable, maintainable, and actually useful.

Where Observability Becomes Action

One of the demos that stood out to me was not interesting because it was flashy. It was interesting because it looked more like a worked example of what an incident response might look like with a DevOps agent alongside you.

A lot of teams already have plenty of observability data. CloudWatch, Datadog, Dynatrace, and dashboards everywhere. The issue is often not a lack of data, but what you actually do with it. Many dashboards can end up being fairly superficial unless there is an obvious spike, alarm, or failure staring back at you. Otherwise, you can spend time looking at charts without really learning much.

What was interesting here was the sense that these tools will help engineering teams interrogate their own data, explore where to look next, and get more value from the operational picture they already have.

This Isn’t “AI Features” – It’s a Platform Shift

Another thing that came through strongly was that this is not just about bolting AI onto existing products.

AWS is clearly trying to build an ecosystem around agentic systems – how they run, how they interact, how they are governed, and how they sit alongside data, models, and infrastructure inside a broader platform story.

That is a different proposition from just exposing a model endpoint or adding a chatbot to an application.

What becomes interesting is not AI in isolation, but how all of these pieces fit together inside an AWS ecosystem. That feels closer to a platform shift than a feature trend.

From Software to Intelligent Systems

We are also seeing a shift in what we are actually building.

The focus is no longer just on writing code, but on designing systems made up of AI agents, data pipelines, automation, and continuous feedback loops. Systems that can reason, act, and evolve over time, rather than simply execute predefined logic.

This shift becomes tangible when looking at how agentic AI is being applied.

Across areas like DevOps, incident management, and data analysis, agents are now capable of analysing logs, making changes across codebases, and supporting real decision-making. This goes far beyond chatbot-style interactions and into systems that actively participate in operations.

That, to me, is one of the more important shifts underway. The job is no longer just building software in the traditional sense. It is increasingly about designing the behaviour, controls, and interactions of broader intelligent systems.

Speed Isn’t the Goal – Cohesion Is

Something else I was glad to see was that the better conversations were not really about speed for its own sake.

There is no shortage of messaging in this space about faster delivery and more output. But shipping more code faster is not automatically helpful. In plenty of cases, it just means creating problems more quickly.

That lines up with DORA’s recent research on AI in software delivery. Their work describes AI as an amplifier: strong teams with solid internal platforms, clear workflows, mature version control, automated testing, and fast feedback loops tend to see better outcomes, while weaker foundations mean higher throughput can come at the expense of stability. In other words, if the basics are not in place, AI does not remove bottlenecks – it often amplifies them.

That is why the more useful conversation is about foundations. Clearer upfront definition. Better standards. Stronger engineering discipline. Tooling that helps teams do the right things consistently, rather than helping them bypass the hard parts.

Treat Everything as Untrusted

One of the sessions I found particularly useful focused on securing agentic AI systems, and one point in particular stuck with me: Large Language Models (LLMs) do not understand the difference between data and instructions.

If you are building agents, that has a pretty immediate implication – you have to treat everything as untrusted. User input, data sources, tools, memory – all of it. OWASP’s Agentic Top 10 is as relevant as ever.

The examples shown made that feel very real. One demo showed a finance agent being hijacked via prompt injection hidden in a data source. Not visible to the user. Just enough to nudge the model into taking an unintended action. Another example showed a supply chain attack where a “trusted” tool had been slightly modified to leak data while keeping the same interface.

What was also interesting was how easily one issue can lead into another once there is a foothold.

There was a good question on whether it is actually sensible to trust an LLM to act as a judge, given it has many of the same underlying weaknesses. I think that concern is valid – but the key point is that you cannot trust the negative side of that judgement. If it flags something and blocks it, it’s useful. But you cannot assume that what it lets through is safe.

That still needs to be scrutinised.

So this becomes part of a broader approach: sanitising inputs, applying least privilege, constraining tools, and validating what actually gets executed. In a lot of ways, it is just zero trust applied to agentic systems.

False negatives are where the remaining risk sits in this context.

Legacy Modernisation: A Huge Opportunity

Legacy modernisation was another area that felt genuinely significant.

There is still a huge amount of spend tied up in maintaining older systems, and AWS is clearly leaning into AI as a means of helping organisations understand, extract, and modernise those estates. Whether that is around code, business rules, documentation, or migration effort, the opportunity is obvious.

For many organisations, the blocker is not a lack of intent. It is cost, time, risk, and the practical difficulty of unpicking what they already have. Anything that can help reduce that burden in a credible way is worth paying attention to.

Data Still Wins

For all the discussion around models and agents, one thing hasn’t changed.

Data remains the foundation. Whether it’s structured datasets, documents, or millions of images, the effectiveness of AI systems is directly tied to the quality and accessibility of the data behind them.

Final Takeaway

The biggest takeaway from the day is that AI isn’t just making developers faster – it’s changing the nature of what we build.

We are moving from delivering discrete pieces of software to designing intelligent, adaptive systems. Systems that can reason, act, and evolve.

The tooling is still catching up in places, and there are real trade-offs today. But the direction of travel is clear – and increasingly, it’s aligning with what many teams are already experiencing firsthand.

Subscribe to our newsletter

Sign up here