Our delivery teams at Made Tech fully embrace the DevOps culture and mindset, which means we take on many more responsibilities across the software development lifecycle than just writing code. Apart from building well-engineered digital services, we also provision and manage the infrastructure that these services run on, automate and run deployments, and handle post-deployment monitoring and management. After all, if you built it, you get to run it too!

But software delivery is not only about producing technical outputs such as code, infrastructure, or deployment packages. As software engineers we are also responsible for the outcomes that our day-to-day development work contributes to. Outcomes are a great way to measure the success of a delivery because they focus on the net effect that the software we release has on citizens and users, rather than on purely technical deliverables such as code or provisioned infrastructure.

The work doesn’t stop once a new release has been successfully pushed to production, as our teams also need to ensure that we have achieved these desired outcomes. We consider all of these post-deployment activities to be as important as the development work itself. In other words, delivery doesn’t finish in production.

Measuring Outcomes

Once a digital service has successfully gone live, we need to understand whether it is actually achieving the outcomes we aimed for. There are many ways to gather this information and we find it to be most effective when the whole team is involved in this process.

There are, of course, several standard technical routes for collecting this information. Examples include anonymised web and data analytics, analysis of call centre statistics, and real-time feedback capabilities built into the services themselves. But it can also come from more direct, personal sources.

We find that user research with the citizens who use the digital services we build gives us a wealth of useful feedback which we can use to measure the effectiveness of these services. Approaches we find effective include (but are not limited to) surveys, feedback forms, testing directly with users, and partnerships on more formalised studies with user research organisations.

We also gather feedback from the public sector service teams who interact directly with citizens in order to understand usage patterns. This information gives us a strong indication of whether our services are providing the value and outcomes that we expected them to.

Apart from checking whether our services are meeting the needs of citizens, we also make sure to check that they are simple to navigate, easy to understand and accessible to as many people as possible. The guidelines in the UK government’s Service Standard are a good starting point for this sort of analysis, and we recommend that anyone with an interest in measuring their outcomes in the public sector reads through this standard.

Evolution and Enhancement

Software projects are akin to living entities in that they require constant nurturing throughout their lifecycle, including in the post-deployment phase.

Once we have collated all of the data from the measurement phase, we have a set of high-quality data points which help us to understand whether the expected outcomes have been achieved. Following lean principles, we use these learnings to feed into the next delivery cycle, ensuring that what we build next is always informed by what we have measured.

Additionally, it is sometimes necessary to consciously take on a bit of technical debt during the development lifecycle. We find that post-deployment is a good time to analyse whether there are major debt items that need to be tackled. These items are especially important to consider before moving on to our next build phase, as letting technical debt pile up can lead to fragile systems that are hard to change and take longer than they should to enhance.

We make this process easier for ourselves by ensuring that we keep a log of our decisions to take on technical debt during development. We then address this debt as soon and as often as possible. Doing so ensures that we consistently maintain the stability of the digital services that we build to serve our citizens.
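
As an illustration, such a log can be as simple as a small, version-controlled file kept alongside the codebase. The entry below is a hypothetical sketch: the field names, identifier and values are our own invention for the example, not a prescribed format.

```yaml
# tech-debt-log.yml - hypothetical example entry; fields and values are illustrative only
- id: TD-007
  raised: 2023-11-02
  decision: Hard-coded the retry limit in the notifications client to hit the release date
  impact: Retries cannot be tuned per environment without a code change
  remediation: Move the retry limit into environment-specific configuration
  owner: Delivery team
  review-by: next planning session
```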

Technical debt, learnings from data, testing with users and strategic organisational aims all help to drive our future roadmap, both in terms of desired outcomes and technical deliverables.

Monitoring – Performance and Proactivity

The primary outcome we strive for is to build services which help public sector organisations to better serve citizens. While the delivery of features is important, a digital service that is slow or unreliable when performing its task is not a useful one.

This means that our teams are responsible for ensuring that the services they build are both performant and reliable. While this is something that we focus on strongly during the development phase, it is also extremely important to make sure that these metrics are logged and monitored post-deployment, as this is the time when the system experiences real-world load levels and usage patterns.

For example, we will always ensure that we have automated logging, monitoring and alerting in place and that we consistently monitor these data streams post-deployment. Our preference is to build our monitoring with preemptive warnings in mind, meaning that we can often detect and resolve potential issues before they affect users.
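
As a concrete illustration of what a preemptive warning can look like, the alerting rule below fires when a disk is predicted to fill up within the next four hours, based on its recent growth rate. Prometheus and the node exporter metric are assumptions made for the example, and the thresholds are illustrative rather than recommended values.

```yaml
# prometheus-alerts.yml - illustrative alerting rule; tool choice and thresholds are assumptions
groups:
  - name: capacity
    rules:
      - alert: DiskWillFillWithin4Hours
        # predict_linear extrapolates the last 6 hours of free-space data 4 hours ahead
        expr: 'predict_linear(node_filesystem_avail_bytes{job="node"}[6h], 4 * 3600) < 0'
        for: 30m
        labels:
          severity: warning
        annotations:
          summary: "Filesystem on {{ $labels.instance }} is predicted to fill within 4 hours"
```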

This post-deployment focus on performance and reliability also extends back to the codebase. It is often easier to analyse your code with a fresh set of eyes once a production delivery is completed. This, paired with statistics gathered from monitoring mechanisms, gives us a strong set of data points to plan performance- and reliability-related codebase enhancements. 

You can find more detailed information about our approach to monitoring and production-readiness in our Productionisation Checklist.

Ongoing Focus On Security And Privacy

Public sector organisations have an extremely important responsibility to ensure that both the security and privacy of citizens are maintained throughout their IT infrastructure and services. Therefore, the developers who build these services must collaborate with security teams on an ongoing basis to make sure that what they build is as resistant to attack as possible.

This responsibility can range from keeping support software and operating systems up to date with security advisories, to providing security breach monitoring, to implementing and deploying any required software-level security enhancements.

Some of the methods we commonly use to maintain stringent security standards include:

  • Building threat models to create a shared understanding of the risks we are trying to mitigate.
  • Implementing automated dependency-vulnerability checking using tools such as Dependabot (a minimal configuration sketch follows this list).
  • Applying the principle of least privilege for cloud services and system access.
  • Maintaining strong isolation wherever possible between networks, environments and containers.
  • Bringing in external security teams to penetration test our services and infrastructure when necessary.
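
As an example of the Dependabot point above, a minimal configuration lives in the repository at .github/dependabot.yml. The sketch below assumes an npm-based service with a weekly update cadence; the ecosystem and schedule are assumptions for illustration, not a recommendation.

```yaml
# .github/dependabot.yml - minimal sketch; package ecosystem and schedule are assumptions
version: 2
updates:
  - package-ecosystem: "npm"   # match this to the service's actual package manager
    directory: "/"             # location of the package manifest
    schedule:
      interval: "weekly"
```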

Empowering In-House Development Teams

From a customer’s point of view, there is always a danger when outsourcing development to external vendors that they will build your software with a “fire and forget” mentality. In these cases, your software becomes legacy software as soon as the vendor signs off delivery and moves on to their next engagement. This leaves you in a predicament where you are unable to maintain or enhance your digital services without either relying on the vendor who initially delivered them, or introducing risk to your organisation as you attempt to maintain them yourself.

We believe that this (sadly common) approach is not only counter-productive but also extremely unscrupulous on the part of software vendors. The services we build affect many people’s lives every day and we have a responsibility to make sure that they are managed by teams who have a clear understanding of how to maintain and upgrade them, even if we are no longer directly involved.

In an ideal scenario we prefer to work in collaboration with in-house developers during the development process, but this is not always possible. In either case, if we are moving on to other projects after a successful “final” production release, it is essential that we provide a comprehensive handover to the teams that will go on to maintain and upgrade the services we build. This is equally important whether we are handing over to in-house teams or to other external suppliers.

Techniques for ensuring that this handover is managed smoothly and effectively can include the provision of sufficient (but not excessive) documentation, training sessions for the teams taking over the codebase, and formalised handover sessions.

We also find that our focus on DevOps and automation simplifies the handover process. The fact that aspects such as automated regression testing and continuous delivery are implemented as part of our delivery typically lowers the volume of information required for handover, which reduces the cognitive load on the teams taking over the services we have built.
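
To make this concrete, the sketch below shows the sort of minimal continuous integration workflow we mean, expressed as a GitHub Actions pipeline. GitHub Actions and the Node.js toolchain are illustrative assumptions here, not a statement of what any particular service uses; the point is that the pipeline itself documents how the service is built and tested.

```yaml
# .github/workflows/ci.yml - illustrative pipeline; platform and toolchain are assumptions
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4     # fetch the repository
      - uses: actions/setup-node@v4   # install the assumed Node.js toolchain
        with:
          node-version: 20
      - run: npm ci                   # install locked dependencies
      - run: npm test                 # run the automated regression test suite
```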

Final Thoughts

Our responsibilities as software delivery professionals extend well beyond the build and deployment phases of the software delivery lifecycle. When following the Build-Measure-Learn feedback loop, you should treat deployment not as a final step, but as a stepping stone to the next phase of a successful delivery.

This approach will allow you to go above and beyond as a delivery organisation and to consistently exceed your customers’ expectations. It enables you to deliver sustainable technology that achieves your desired outcomes, both now and in the future.
