£500-£600 per day
Outside IR35
6-month contract
Join one of the UK's leading online retailers as they evolve their next-generation data platform. This is an opportunity to shape the backbone of a modern data ecosystem, empowering analysts, ML engineers, and data scientists to deliver smarter, faster insights at scale.
The Role

You'll play a key role in designing and engineering platform services that treat data as a core product. This means building scalable, secure, and observable systems that help teams confidently leverage data across the business.
You'll work closely with a wide range of technical and non-technical partners to deliver resilient infrastructure, champion data governance, and mentor others in engineering excellence.
In this role, you will:
Shape the data platform roadmap: Introduce modern observability, quality, and governance frameworks that elevate how teams access and trust data.
Build and scale infrastructure: Develop services, APIs, and data pipelines using modern cloud tooling and automation-first principles.
Drive engineering best practices: Implement CI/CD pipelines, testing frameworks, and container-based deployments to ensure reliability and repeatability.
Lead cross-functional initiatives: Collaborate with product engineers, data scientists, and ML practitioners to understand their workflows and deliver high-impact platform solutions.
Champion operational reliability: Proactively monitor system performance, automate incident response, and strengthen platform resilience.
What you'll bring:

Strong proficiency in Python (or a similar high-level language) with a deep understanding of software engineering best practices - testing, automation, clean code, and CI/CD.
Proven track record building and maintaining scalable data platforms in production, enabling advanced users such as ML and analytics engineers.
Hands-on experience with modern data stack tools - Airflow, dbt, Databricks, and data catalogue/observability solutions like Monte Carlo, Atlan, or DataHub.
Solid understanding of cloud environments (AWS or GCP), including IAM, S3, ECS, RDS, or equivalent services.
Experience implementing Infrastructure as Code (Terraform) and CI/CD pipelines (e.g., Jenkins, GitHub Actions).
A mindset focused on continuous improvement, learning, and staying at the forefront of emerging technologies.
Nice to have:

Experience rolling out data governance and observability frameworks, including lineage tracking, SLAs, and data quality monitoring.
Familiarity with modern data lake table formats such as Delta Lake, Iceberg, or Hudi.
Background in stream processing (Kafka, Flink, or similar ecosystems).
Exposure to containerisation and orchestration technologies such as Docker and Kubernetes.