Data Engineering & Tech Lead - Infected Blood Compensation Authority - G6
Government Digital & Data
Location
Glasgow, Newcastle-upon-Tyne
About the job
Job summary
The Infected Blood Compensation Authority (IBCA) is a new arm’s-length body set up, at unprecedented pace, to administer compensation to people whose lives have been impacted by the infected blood scandal.
IBCA will ensure payment is made in recognition of the wrongs experienced by those who have been infected by HIV, Hepatitis B or C, as well as those who love and care for them. They have been frustrated and distressed by the delays in achieving proper recognition, and we must help put this right.
We are committed to putting the infected and affected blood community at the centre of every decision we make and every step we take to build our organisation to deliver compensation payments.
IBCA employees will be public servants. If successful in this role you will be appointed directly into IBCA, on IBCA terms and conditions as a public servant.
Successful applicants will join the Civil Service Pension Scheme.
Please note that the mission of IBCA means that it is likely to be operational for a period of approximately 5 to 7 years. When IBCA’s work begins to wind down, IBCA employees will receive support and practical guidance to find a new role, whether in the Civil Service, another Arm’s-Length Body (ALB), or an external employer.
Job description
The Infected Blood Compensation Authority (IBCA) is responsible for delivering a long-awaited compensation scheme that provides financial compensation to victims of infected blood on a UK-wide basis.
This role sits within the Data Operations arm of the IBCA Data Directorate. Working in close collaboration with the Principal Data & Systems Architect, you will be responsible for turning architectural blueprints into a hardened, high-performance data ecosystem.
The Data Operations team is responsible for developing and running safe and secure data solutions that provide a single source of truth for those going through their compensation journey. They are building a new data platform using Amazon Web Services (AWS) and data management and intelligence products using Databricks and Quantexa.
We are taking a product-centric approach, treating data as a product, and are building squads around our products, with a focus on paying compensation seamlessly to those impacted by the infected blood scandal.
As the Data Engineering & Tech Lead, you will spearhead the engineering execution—designing and implementing robust, automated data pipelines and Master Data Management (MDM) processes. You will ensure code quality, drive a rigorous culture of Test-Driven Development (TDD) and Data Observability, and ensure our engineering practices are scalable and secure by design.
Working at IBCA gives you a huge opportunity to make an impact on those who deserve compensation. This role suits a candidate who can build solutions from the ground up, taking them from ideation to reality so that data is an enabler for everything IBCA does.
As the Data Engineering & Tech Lead, you will work in close partnership with the Principal Data & Systems Architect, translating high-level designs into high-performance, production-ready code. You are responsible for the "how"—driving the technical standards, framework development, and engineering discipline required to deliver a world-class data platform built on AWS, Databricks, and Quantexa.
You will:
• Set the technical direction for Data, Analytics, Data Test and Data DevOps engineering. You will lead the development of robust, automated data pipelines and master data management processes, ensuring that architectural visions are operationalised using DevSecOps and DataOps best practices.
• Ensure our engineering stack is scalable, secure, and efficient. You will develop reusable engineering frameworks and libraries that accelerate delivery across all squads while maintaining strict quality gates.
• Lead the recruitment and development of our data engineering teams. You will shape career paths and L&D routeways, ensuring our engineers have the skills to handle high-complexity tasks like Spark optimisation and entity resolution.
• Provide direct leadership to engineering squads. You will perform deep-dive code reviews and unblock complex technical hurdles that require expert-level intervention.
• Work across digital service teams and with business stakeholders to translate complex requirements into concrete technical roadmaps and Quality Assurance Testing (QAT) approaches that are feasible and performant.
• Collaborate with industry partners (e.g. Databricks, Quantexa) to influence their product roadmaps and leverage new features that give IBCA a technical edge.
• Engineering Implementation Lead: Partner with the Principal Data & Systems Architect to operationalise architectural patterns. You will lead technical execution on the ground, ensuring that data pipelines and data platform features are built to be resilient, reusable, and performant.
• Engineering Standards & Frameworks: Work in partnership with the Chief Technology Officer and the wider IBCA engineering community to define and enforce the "gold standard" for code quality. You will develop shared engineering libraries, common ETL/ELT frameworks, and standardised CI/CD patterns to accelerate delivery across all data squads.
• Deep Technical Mentorship: You will spend significant time performing deep-dive code reviews, pair programming on complex logic (e.g. Spark optimisation or Quantexa entity resolution), and unblocking technical hurdles in delivery teams.
• Quality & Reliability Engineering: Lead the transition from basic testing to data observability. You will implement sophisticated automated testing, data quality monitoring, and "circuit breaker" patterns to ensure platform reliability and trust.
• DevSecOps & Automation: Take ownership of the deployment lifecycle. You will lead the automation of our infrastructure (IaC) and data deployments, ensuring that security "shift-left" principles are embedded directly into the engineering workflow.
• Technical Resource & Capability Building: Shape the data engineering culture. You will identify skill gaps, design internal L&D "routeways" for Data Engineers, Analytical Engineers, Data Testers and Data DevOps, and lead the technical assessment of new talent to ensure a high bar for data engineering excellence.
• Delivery Integration: Work alongside Data Delivery Managers to provide high-confidence technical estimates and risk assessments. You ensure that the data engineering reality aligns with the delivery timeline.
Person specification
• Accomplished expert with extensive hands-on experience at a senior level in building and delivering high-scale data pipelines. You must have a proven track record of implementing automated CI/CD, data quality, and master data management (MDM) processes within a rapid-delivery cloud environment. (Lead Essential Criteria)
• Deep proficiency in the AWS ecosystem and Databricks, with strong skills in writing clean, concise, and parameterised code in Python, SQL, or Scala, and the ability to set the "gold standard" for code maintainability across an organisation.
• Strong expertise in the end-to-end data lifecycle. You have extensive knowledge of data modelling concepts (Data Vault, Star Schema, Medallion) to ensure the physical build of Data Lakes and Lakehouses is performant and scalable.
• Deep familiarity with advanced Databricks features such as Unity Catalog, Delta Live Tables (DLT), and Spark performance tuning to drive platform efficiency.
• Expert in setting the strategic direction for reusable data engineering frameworks. You have a deep understanding of test automation and containerisation (e.g. Docker, Kubernetes) to ensure that engineering patterns are consistent, repeatable, and decoupled from underlying infrastructure.
• Significant experience in leading and managing engineering teams. You have the interpersonal skills to motivate individuals and foster a culture of technical excellence.
• Extensive experience mentoring teams in Agile/DevOps delivery. You are comfortable working across multi-disciplinary teams to bridge the gap between technical possibility and delivery reality.
• Demonstrable experience in building from scratch, or radically maturing, data, analytical and test engineering processes. You know how to implement DevSecOps at scale, moving an organisation from manual deployments to a high-maturity, automated cloud environment.
Desirable
• Hands-on experience working with the Quantexa Decision Intelligence Platform (or similar entity resolution/graph technologies), specifically in optimising network generation and data ingestion patterns.
• Experience managing contracts and relationships with 3rd party software providers and professional services.
Additional information:
A minimum of 60% of your working time should be spent at your principal workplace, although time spent at other locations for official business will also count towards this level of attendance.