Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

34 jobs found

Current Search
lead pyspark engineer
Data Engineer
Young's Employment Services Ltd
Data Engineer
London + 2 or 3 days work from home
Circa £60,000 - £70,000 + Excellent Benefits Package

A fantastic opportunity is available for a Data Engineer who enjoys working in a fast-paced, collaborative team environment. Our client has been expanding at a remarkable pace and has transformed its technical landscape with leading-edge solutions. Having implemented a new MS Fabric-based data platform, the need now is to scale up and deliver data-driven insights and strategies right across the business globally. The Data Engineer will join a close-knit team that is the hub of our client's global data & analytics operation. Previous experience with MS Fabric would be beneficial but is by no means essential. Interested candidates must have experience in a similar role with MS Azure data platforms, Synapse, Databricks or other cloud platforms such as AWS, GCP or Snowflake.

Key Responsibilities will include:
  • Design, implement, and optimise end-to-end solutions using Fabric components:
    o Data Factory (pipelines, orchestration)
    o Data Engineering (Lakehouse, notebooks, Apache Spark)
    o Data Warehouse (SQL endpoints, schemas, MPP performance tuning)
    o Real-Time Analytics (KQL databases, event ingestion)
  • Manage and enhance OneLake architecture, Delta Lake tables, security policies, and data governance within Fabric.
  • Build scalable, reusable data assets and engineering patterns that support analytics, reporting, and machine learning workloads.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver effective solutions.
  • Troubleshoot and resolve data-related issues in a timely manner.

Key Experience, Skills and Knowledge:
  • Proven 2+ years' experience as a Data Engineer or in a similar role, with a strong focus on PySpark and SQL; Microsoft Azure data platforms and Power BI an advantage.
  • Proficiency in development languages suitable for intermediate-level data engineers, such as:
    o Python / PySpark: widely used for data manipulation, analysis, and scripting.
    o SQL: essential for querying and managing relational databases.
  • Understanding of D365 F&O data structures is highly desirable.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration abilities.

This is a hybrid role based in Central / West London with the flexibility to work from home 2 or 3 days per week. Salary will depend on experience and is expected to be in the region of £60,000 - £70,000, plus an attractive benefits package including a bonus scheme. For further information, please send your CV to Wayne Young at Young's Employment Services Ltd. YES operates as both a recruitment agency and a recruitment business.
19/03/2026
Full time
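The bronze-to-silver cleansing step implied by the Lakehouse and Delta Lake responsibilities above can be sketched without a Spark cluster. The following is a minimal pure-Python stand-in (in practice this would be PySpark over Delta tables; the function and field names are illustrative, not from the advert):

```python
from datetime import date

def bronze_to_silver(raw_rows):
    """Cleanse a raw 'bronze' batch into 'silver' records: drop rows
    missing a key, normalise types, and deduplicate on id, keeping
    the latest version of each record."""
    silver = {}
    for row in raw_rows:
        if row.get("id") is None:  # reject unkeyed rows
            continue
        rec = {
            "id": int(row["id"]),
            "amount": float(row.get("amount", 0.0)),
            "updated": date.fromisoformat(row["updated"]),
        }
        prev = silver.get(rec["id"])
        if prev is None or rec["updated"] > prev["updated"]:
            silver[rec["id"]] = rec  # keep latest version per id
    return sorted(silver.values(), key=lambda r: r["id"])

batch = [
    {"id": "1", "amount": "9.5", "updated": "2026-03-01"},
    {"id": None, "amount": "3.0", "updated": "2026-03-01"},
    {"id": "1", "amount": "12.0", "updated": "2026-03-02"},
]
print(bronze_to_silver(batch))  # one record survives: id 1, latest amount
```

In PySpark the equivalent dedup is typically a window function ordered by the update timestamp, with the result merged into a silver Delta table.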
Lead Data Engineer
Hays Technology Hull, Yorkshire
Lead Data Engineer
Hull, HU10 + 2 days home working
Up to £80,000 + Benefits

Your new role
I am currently recruiting for a Lead Data Engineer to build and strengthen the foundations of the data platform, delivering reliable pipelines and governed, high-quality data products that teams across Sales, Network, Customer Experience, Finance and Operations can trust.

Responsibilities
  • Build, optimise and operate ELT/ETL pipelines into our data platform using SQL and Python (PySpark), with a focus on reliability, performance and maintainability.
  • Develop and maintain core data models (curated layers, dimensional models, shared definitions) that enable consistent KPI reporting and analysis.
  • Implement and embed data quality controls (freshness, completeness, accuracy, reconciliation checks) and monitoring so issues are detected early and fixed at source where possible.
  • Partner with analysts and stakeholders to turn business questions into reusable, well-governed data products rather than one-off reporting.
  • Improve engineering standards: Git workflows, code review, documentation, repeatable deployments, and sensible environment separation.
  • Support governance by helping define data contracts, ownership, lineage and "what does this metric mean?" clarity, so teams can use and challenge the numbers confidently.
  • Contribute to the wider platform roadmap while keeping delivery outcomes front and centre.
  • Lead by example on engineering quality: set the bar for production-grade delivery (testing, monitoring, documentation, code review, release discipline) and help the team consistently meet it.
  • Coach and uplift others: mentor junior engineers/analysts, run pairing sessions, provide practical feedback, and help raise SQL/Python capability across the function.

Experience needed
  • Strong hands-on experience as a data engineer in complex, high-growth or technology-led organisations.
  • A track record of taking data pipelines and models from "fragile and fragmented" to "trusted, governed and embedded" through practical engineering improvements.
  • Solid experience across the data engineering lifecycle: ingestion, transformation/modelling, and enabling consumption through BI/semantic conventions.
  • Hands-on capability with modern cloud data platforms and tooling, and a clear view of what "good" looks like for testing, monitoring, environments and deployment.
  • Proven approach to data quality: not just fixing reports, but improving definitions, controls and root causes in upstream systems and processes.
  • Strong communication skills: able to explain trade-offs, risks, and delivery choices clearly to non-technical stakeholders, and comfortable being challenged.
  • A high-standards, low-ego working style: collaborative, pragmatic, and focused on outcomes that stick (not dashboards that nobody uses).
  • Must have developed a data platform from inception to completion.
  • Must have managed and developed data engineers, forming a high-performing team.

Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers, which can be found at (url removed)
18/03/2026
Full time
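The data quality controls this role describes (freshness, completeness, reconciliation) reduce to simple, automatable checks. A hedged pure-Python sketch of the idea (names are illustrative; in practice these would run as pipeline tests against the warehouse, e.g. via a framework such as Great Expectations or dbt tests):

```python
from datetime import date, timedelta

def quality_report(rows, source_row_count, as_of):
    """Run three batch-level checks: freshness (newest record is
    recent enough), completeness (no null keys), and reconciliation
    (loaded row count matches the source system's count)."""
    newest = max(r["updated"] for r in rows)
    return {
        "fresh": as_of - newest <= timedelta(days=1),
        "complete": all(r.get("id") is not None for r in rows),
        "reconciled": len(rows) == source_row_count,
    }

rows = [
    {"id": 1, "updated": date(2026, 3, 17)},
    {"id": 2, "updated": date(2026, 3, 18)},
]
report = quality_report(rows, source_row_count=2, as_of=date(2026, 3, 18))
print(report)  # all three checks pass for this batch
```

Failing checks would typically page the team or block the downstream load, which is what "detected early and fixed at source" amounts to operationally.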
Lead Data Platform Engineer
Harnham - Data & Analytics Recruitment Leeds, Yorkshire
Lead Data Platform Engineer
£75,000 - £85,000
Remote

You are a well-experienced Lead Data Platform Engineer, looking to take ownership of a modern lakehouse platform and shape best-in-class data engineering practices.

THE COMPANY
This industry-leading organisation serves specialist, people-first knowledge to its clients and is looking to provide further expertise by improving its data capabilities.

THE ROLE
As a Lead Data Engineer you will build a Databricks platform with ownership, shaping how data is engineered, deployed, automated, and governed in Azure. Specifically, you can expect to be involved in the following:
  • Designing and developing end-to-end data platform solutions in Azure using Databricks.
  • Building and maintaining automation for cluster management, monitoring, tagging, and platform optimisation.
  • Implementing CI/CD and DevOps best practices across data pipelines and platform components.
  • Taking ownership of design decisions and long-term architectural direction.

SKILLS AND EXPERIENCE
The successful Lead Data Engineer will have the following skills and experience:
  • Deep expertise with Azure Databricks, including development, deployment, and platform-level engineering.
  • Strong software engineering capability across PySpark, Spark SQL and Azure Data Factory.
  • Solid understanding of DevOps practices including CI/CD and automated testing.
  • Experience with monitoring, logging, and alerting solutions.
  • Ability to work effectively with stakeholders across engineering functions.

BENEFITS
The successful Lead Data Platform Engineer will receive the following benefits:
  • Salary between £75,000 - £85,000, depending on experience.
  • Competitive bonus and benefits.
  • Remote work with a flexible 35-hour week that can be taken as a 4-day week.

HOW TO APPLY
Please register your interest by sending your resume to Majid Latif via the Apply link on this page.
18/03/2026
Full time
Lead Data Platform Engineer
Harnham - Data & Analytics Recruitment Sheffield, Yorkshire
Lead Data Platform Engineer
£75,000 - £85,000
Remote

You are a well-experienced Lead Data Platform Engineer, looking to take ownership of a modern lakehouse platform and shape best-in-class data engineering practices.

THE COMPANY
This industry-leading organisation serves specialist, people-first knowledge to its clients and is looking to provide further expertise by improving its data capabilities.

THE ROLE
As a Lead Data Engineer you will build a Databricks platform with ownership, shaping how data is engineered, deployed, automated, and governed in Azure. Specifically, you can expect to be involved in the following:
  • Designing and developing end-to-end data platform solutions in Azure using Databricks.
  • Building and maintaining automation for cluster management, monitoring, tagging, and platform optimisation.
  • Implementing CI/CD and DevOps best practices across data pipelines and platform components.
  • Taking ownership of design decisions and long-term architectural direction.

SKILLS AND EXPERIENCE
The successful Lead Data Engineer will have the following skills and experience:
  • Deep expertise with Azure Databricks, including development, deployment, and platform-level engineering.
  • Strong software engineering capability across PySpark, Spark SQL and Azure Data Factory.
  • Solid understanding of DevOps practices including CI/CD and automated testing.
  • Experience with monitoring, logging, and alerting solutions.
  • Ability to work effectively with stakeholders across engineering functions.

BENEFITS
The successful Lead Data Platform Engineer will receive the following benefits:
  • Salary between £75,000 - £85,000, depending on experience.
  • Competitive bonus and benefits.
  • Remote work with a flexible 35-hour week that can be taken as a 4-day week.

HOW TO APPLY
Please register your interest by sending your resume to Majid Latif via the Apply link on this page.
18/03/2026
Full time
Senior Data Engineer
83Zero Ltd
Company Overview
We are working with an innovative organisation that recognises the increasing complexity of project delivery. Since 2013, our client has been helping companies of all sizes improve the way projects are delivered. Their mission is to become the number one provider of innovative project solutions, driven by a community of experienced, caring, and passionate professionals who are committed to improving project delivery.

Why Join Our Client?
Our client is currently in an exciting phase of growth, making this an excellent time to join their journey. They are building something special: scaling the business while maintaining a strong people-first approach. Investment in their teams is a key priority, creating an environment where development is encouraged and individuals are supported to grow with the organisation. Their culture sets them apart from other consulting practices, and they are looking to build a team that is equally ambitious.

Position Overview
Our client is seeking a Senior Data Engineer who thrives on building scalable, cloud-first data systems. In this role, you will design and manage data pipelines that support analytics, AI, and automation across complex infrastructure programmes. Your work will play a key part in enabling data-driven transformation across critical UK industries.

Core Responsibilities
  • Design, build, and optimise data pipelines using Azure Data Factory, Synapse, and Databricks.
  • Develop and maintain ETL/ELT workflows to ensure high data quality and reliability.
  • Collaborate with analysts and AI engineers to deliver robust and reusable data products.
  • Manage data lakes and warehouses using formats such as Delta Lake and Parquet.
  • Implement best practices for data governance, performance, and security.
  • Continuously evaluate and adopt new technologies to evolve the organisation's data platform.
  • Provide technical guidance to junior engineers and contribute to team capability building.

Technical Stack
Core: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage Gen2, SQL Server, Databricks.
Enhancements: Python (PySpark, Pandas), CI/CD (Azure DevOps), Infrastructure as Code (Terraform, Bicep), REST APIs, GitHub Actions.
Desirable: Microsoft Fabric, Delta Live Tables, Power BI dataset automation, DataOps practices.

What You'll Bring
  • Professional experience in data engineering or cloud data development.
  • Strong understanding of data architecture, APIs, and modern data pipeline design.
  • Hands-on experience within Microsoft's Azure ecosystem, with an interest in emerging technologies such as Fabric, AI-enhanced ETL, and real-time data streaming.
  • Proven ability to lead technical workstreams and mentor junior team members.
  • A strong alignment with the organisation's IDEAL values: Integrity, Drive, Empathy, Adaptability, and Loyalty.

Ready to Apply?
This is a fantastic opportunity to join a forward-thinking organisation at a key stage of growth, working on impactful projects across critical industries. If you're looking to take the next step in your career within a collaborative and innovative environment, we'd love to hear from you.
18/03/2026
Full time
Lead Azure Databricks Engineer - Insurance (Lloyd's Market)
Tenth Revolution Group
Lead Azure Databricks Engineer - Consulting (Lloyd's of London)
Base Salary: £78,000 - £95,000
Location: Hybrid - London
Sector: Data Engineering / Cloud / Insurance
Type: Permanent

We are searching for an accomplished Lead Azure Databricks Engineer to join our consulting practice and take technical ownership of complex cloud data engineering projects across the Lloyd's of London insurance market. This position is ideal for someone who is deeply hands-on, highly delivery-driven, and experienced in navigating the regulatory, operational and data challenges unique to the London Market. Working as part of a specialist consultancy, you'll guide clients through the design, build and optimisation of enterprise-scale Azure data platforms: shaping strategies, solving difficult engineering problems, and elevating engineering capability across engagements.

What you'll do
  • Lead the engineering, optimisation and governance of Azure-based data platforms, spanning ADF, Data Lake, Azure Functions, Key Vault, Databricks, Delta Lake, PySpark and Unity Catalog.
  • Collaborate closely with insurance domain teams (Underwriting, Actuarial, Delegated Authority, Bordereaux, Exposure Management, Reinsurance, Finance, Risk and Solvency II), delivering solutions aligned to Lloyd's Market standards.
  • Translate business, regulatory and operational requirements into scalable and robust cloud data solutions.
  • Quickly diagnose and resolve high-complexity technical challenges, providing clear direction and decisive technical leadership.
  • Establish and champion best-practice patterns across data lifecycle management, CI/CD, architecture design principles and cloud-native engineering.
  • Provide mentoring, guidance and leadership to engineers across consulting engagements, setting the bar for delivery quality and engineering maturity.
  • Identify opportunities for performance improvements, cost reduction, platform resilience and automation.
  • Oversee the end-to-end development of data pipelines, ensuring reliability, observability and efficient operation.

What you'll bring
  • Expert-level hands-on experience with Azure data services and Databricks in enterprise-scale environments.
  • Strong knowledge of Delta Lake, Medallion architecture, distributed compute and Lakehouse engineering patterns.
  • Advanced Python, PySpark and Spark SQL skills, including deploying data workloads through Azure DevOps CI/CD, branching/merging and automated testing.
  • Solid understanding of data governance, lineage, access controls, FinOps practices and secure cloud engineering.
  • Excellent communication skills with the ability to articulate technical topics clearly to senior stakeholders and guide cross-functional discussions.
  • A strong delivery focus with a consistent track record of taking ownership and completing major engineering initiatives.
  • Experience operating in fast-paced consulting environments with evolving priorities and demanding timelines.
  • Essential: hands-on experience working within the Lloyd's of London / London Market insurance sector.

What's on offer
  • £78,000 - £95,000 base salary
  • Hybrid working model based in London
  • High-impact consulting role with exposure to top-tier insurance clients
  • Opportunity to influence cloud engineering standards and contribute to major data transformation initiatives
18/03/2026
Full time
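The "automated testing" half of the CI/CD requirement above usually means writing transforms as small, deterministic functions that a pipeline can assert on before a Databricks job is promoted. A hedged sketch of the pattern (the function, field names and figures are illustrative only):

```python
def premium_by_class(policies):
    """Aggregate gross written premium per class of business: the
    kind of pure, deterministic transform a CI stage can unit-test
    on every commit, independently of any cluster."""
    totals = {}
    for p in policies:
        totals[p["class"]] = totals.get(p["class"], 0.0) + p["gwp"]
    return totals

# the assertion a CI stage (e.g. an Azure DevOps test step) could run
sample = [
    {"class": "marine", "gwp": 100.0},
    {"class": "marine", "gwp": 50.0},
    {"class": "property", "gwp": 75.0},
]
assert premium_by_class(sample) == {"marine": 150.0, "property": 75.0}
print("transform test passed")
```

Keeping the business logic out of notebook glue code like this is what makes branching/merging and automated promotion workable in practice.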
Tenth Revolution Group
Databricks Architect
Tenth Revolution Group Edinburgh, Midlothian
Data Architect - Databricks (Hybrid, UK)
Locations: London, Manchester, or Edinburgh
Hybrid: 2-3 days per week on-site
Salary: Competitive (Manager & Senior Manager grades available)

About the Role
We are seeking an experienced Data Architect with deep expertise in Databricks to help our clients design, build, and scale modern data platforms. You will play a pivotal role in shaping Lakehouse architectures that enable advanced analytics, AI/ML, and enterprise-wide data-driven decision-making. Working closely with clients early in their data journey, you will assess business needs, define architectural direction, and guide the implementation of robust, secure, and scalable solutions. This is a hands-on architecture role suited to someone who has spent the last 2-3 years working directly with Databricks at an architectural level and is ready to progress towards programmes such as the Databricks DPP.

Key Responsibilities
* Architect and implement Databricks Lakehouse solutions across ingestion, processing, storage, and analytics layers.
* Recommend best practices and innovative approaches for modern data platforms.
* Build strong client relationships and confidently present architectural decisions to senior stakeholders.
* Shape client data strategies and promote governance, quality, and security standards.
* Lead architectural engagements and ensure delivery within scope, budget, and timelines.
* Optimise Databricks workloads for performance, scalability, and cost efficiency.
* Implement governance and compliance frameworks using Unity Catalog, Purview, and cloud-native controls.
* Develop CI/CD pipelines using Databricks Repos, GitHub Actions, or Azure DevOps.
* Contribute to RFI/RFP responses and deliver innovative Proofs of Concept.
* Support the internal Architecture Practice by developing reusable patterns and accelerators.

Skills & Experience
* Proven experience delivering enterprise-scale Databricks solutions end-to-end.
* Strong background in Lakehouse architecture, including structured and unstructured data.
* Expertise in Spark, PySpark, Delta Lake, and Databricks workflows.
* Experience building scalable ETL/ELT pipelines, including Delta Live Tables.
* Strong programming skills in Python, Scala, or SQL.
* Solid understanding of data modelling (3NF, Kimball, Data Vault).
* Experience integrating Lakehouse architectures with BI tools such as Power BI and Tableau.
* Hands-on experience with at least one major cloud platform (Azure, AWS, or GCP) and understanding of Databricks implications across each.
* Knowledge of Databricks security best practices (RBAC, IAM, encryption).
* Excellent communication, stakeholder engagement, and problem-solving skills.

Highly Valued Certifications
* Databricks Certified Data Engineer (Associate/Professional)
* Databricks Certified Machine Learning (Associate/Professional)
* Databricks Generative AI Fundamentals
* Databricks Lakehouse Fundamentals

Why Join Us?
* Generous annual leave and private medical insurance.
* Strong focus on wellbeing and personal development.
* A culture that rewards high performance and nurtures talent.
* Opportunities to work on impactful client projects and drive meaningful change.
* Supportive environment with investment in certifications and career progression.

Additional Information
* This role is fully signed off and part of a growing Databricks capability.
* Candidates must be willing to travel between UK offices when required.
* Suitable for individuals with strong architectural experience rather than purely engineering backgrounds.

Please send me a copy of your CV if you're interested.
18/03/2026
Full time
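The Kimball modelling mentioned in the skills list boils down to facts joined to conformed dimensions. As a rough, hypothetical illustration (invented product/sales tables; in a Lakehouse build this would be Spark SQL over Delta tables):

```python
# Star-schema sketch: a sales fact table keyed to a product dimension,
# aggregated by a dimension attribute. All names and values are invented.

dim_product = {
    1: {"product_key": 1, "name": "Widget", "category": "Hardware"},
    2: {"product_key": 2, "name": "Gadget", "category": "Hardware"},
}

fact_sales = [
    {"product_key": 1, "qty": 3, "amount": 30.0},
    {"product_key": 2, "qty": 1, "amount": 25.0},
    {"product_key": 1, "qty": 2, "amount": 20.0},
]

def sales_by_category(facts, products):
    """Join facts to the dimension on the surrogate key, then aggregate."""
    totals = {}
    for f in facts:
        category = products[f["product_key"]]["category"]
        totals[category] = totals.get(category, 0.0) + f["amount"]
    return totals

print(sales_by_category(fact_sales, dim_product))  # {'Hardware': 75.0}
```

The surrogate-key join and group-by is exactly what the equivalent `SELECT ... JOIN ... GROUP BY` would express in Databricks SQL.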
Data Engineering Manager
Primus Connect Ltd Guildford, Surrey
Are you a hands-on Data Engineering leader who loves building high-impact data platforms while mentoring and growing teams? We're hiring a Data Engineering Manager to join a fast-moving, product-focused environment where collaboration is high, decisions are quick, and your work will directly shape real products used by the business.

The Role
This is a true player-coach position: roughly 50% hands-on engineering and 50% team leadership. You'll lead a small but growing team of Data Engineers (currently 5, mostly junior), helping them mature technically while remaining deeply involved in building and optimising modern data pipelines on Databricks. The team works in a highly collaborative office environment, enabling rapid delivery and close cross-functional teamwork.

What You'll Be Doing
Leading and mentoring a team of Data Engineers
Designing and building scalable data pipelines in Databricks
Remaining hands-on with Python/PySpark development
Working closely with product, front-end, and back-end teams
Integrating multiple data sources, including APIs
Driving best practice across the data platform
Helping shape the future data architecture

What We're Looking For
Essential:
Strong commercial experience with Databricks
Advanced Python and PySpark
Solid SQL skills
Experience building production data pipelines
Experience working with API-based data ingestion
Familiarity with Azure storage connectivity
Prior technical leadership or mentoring experience
Nice to have:
Experience with Dynamics integrations
Background in product or startup environments
Broader Azure ecosystem exposure

Working Pattern
Predominantly onsite role: expected 5 days/week initially to embed with the team, with increased flexibility likely after initial onboarding. Fast-paced, highly collaborative office culture.

Why Join?
High-impact role in a growing data function
Strong investment in people and development
Free onsite restaurant (breakfast, lunch & snacks)
Fully equipped onsite gym
Collaborative, delivery-focused culture
Opportunity to shape and scale a modern Databricks platform

If you are looking for your next exciting opportunity, please apply today and I will be in touch!
17/03/2026
Full time
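The API-based ingestion this role asks for usually means following a pagination cursor until the source is exhausted. A minimal sketch, with a hypothetical page shape and an injected `fetch_page` standing in for the real HTTP call:

```python
# Paginated-ingestion pattern: keep requesting pages and accumulating
# records until the API reports no next cursor. The endpoint, page shape
# and cursor names are assumptions for illustration.

def ingest_all(fetch_page):
    """Pull every record by following the next-page cursor until it is None."""
    records, cursor = [], None
    while True:
        page = fetch_page(cursor)
        records.extend(page["items"])
        cursor = page.get("next")
        if cursor is None:
            return records

# Fake three-page API so the loop can run without a network.
_pages = {
    None: {"items": [1, 2], "next": "p2"},
    "p2": {"items": [3, 4], "next": "p3"},
    "p3": {"items": [5], "next": None},
}
result = ingest_all(lambda cursor: _pages[cursor])
print(result)  # [1, 2, 3, 4, 5]
```

In production the lambda would be an authenticated HTTP request, and each page would land in Azure storage before further Databricks processing.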
Harnham - Data & Analytics Recruitment
Data Engineer
Harnham - Data & Analytics Recruitment
Data Engineer
London, £550 per day, Outside IR35, Hybrid

This is an exciting opportunity to play a key role in a major data modernisation programme, focused on migrating a large SQL Server estate into a cloud-native Azure Databricks environment. You will be central to transforming legacy reporting and data logic into scalable, modernised pipelines and models, helping the business unlock faster, more reliable insights.

The Company
They are a well-established organisation undergoing a significant transformation of their data landscape. With a strong commitment to modern BI practices and cloud engineering, they are investing in next-generation technology to improve analytics capabilities across the business. You will join a collaborative environment where engineering excellence, trusted data, and high-quality reporting are core priorities.

The Role and Deliverables
Lead the migration of SQL Server stored procedures, functions, views, and legacy reporting logic into Azure Databricks.
Reengineer and optimise SQL workloads for Databricks using Databricks SQL, dbt, and PySpark.
Support the uplift of SSRS and Tableau reporting so that all outputs are powered by Databricks-based datasets.
Validate migrated datasets and reporting outputs, ensuring high levels of accuracy and performance.
Document pipelines, models, and migration processes for long-term maintainability.
Collaborate with BI, data warehouse, and project teams to ensure smooth delivery across the programme.

Your Skills and Experience
Strong experience working with Azure Databricks, including SQL development, data modelling, and PySpark.
Proven capability in SQL Server, including complex T-SQL logic, stored procedures, and performance optimisation.
Hands-on experience with dbt for modular, testable data model development.
Solid understanding of legacy BI environments, particularly SSRS.
Knowledge of Tableau and how to optimise dashboards against cloud-based data sources.
Ability to work collaboratively within a BI, data warehouse, or reporting team during large-scale migrations.

How to Apply
If this project aligns with your experience, please apply with your most recent CV.
16/03/2026
Contractor
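Validating migrated datasets, as this role requires, typically starts with cheap reconciliation checks: row counts and column totals compared between source and target. A minimal sketch with invented column names and an assumed float tolerance:

```python
# Reconciliation check of the kind used to validate a migrated table
# against its SQL Server source. Column names and tolerance are
# illustrative assumptions, not a specific tool's API.

def reconcile(source_rows, target_rows, numeric_cols, tol=1e-6):
    """Compare row counts and per-column sums; return a list of discrepancies."""
    issues = []
    if len(source_rows) != len(target_rows):
        issues.append(f"row count {len(source_rows)} != {len(target_rows)}")
    for col in numeric_cols:
        s = sum(r[col] for r in source_rows)
        t = sum(r[col] for r in target_rows)
        if abs(s - t) > tol:
            issues.append(f"column {col}: {s} != {t}")
    return issues

src = [{"amount": 10.0}, {"amount": 5.5}]
tgt = [{"amount": 10.0}, {"amount": 5.5}]
print(reconcile(src, tgt, ["amount"]))  # [] -> datasets reconcile
```

In practice the two row sets would come from the legacy SQL Server query and the new Databricks dataset, and checks would extend to per-key hashes rather than whole-column sums.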
Harvey Nash IT Recruitment UK
Technical Architect
Harvey Nash IT Recruitment UK Chester, Cheshire
Technical Architect - Microsoft Fabric
Chester - Hybrid working 2 x per week
Salary: Up to £90,000 per annum

A leading client in Chester seeks a Technical Architect to design and deliver data and AI solutions on the Microsoft Fabric platform. As Technical Lead for a small team, you'll oversee end-to-end architecture, develop scalable analytics solutions, and stay hands-on. Responsibilities include delivering Fabric solutions (OneLake, Lakehouse, Warehouses, Power BI), leading architecture and performance optimisation, and enabling advanced analytics and machine learning with Fabric and Azure ML.

Key skills and responsibilities:
Design and deliver scalable, secure end-to-end Microsoft Fabric solutions aligned with the business's objectives (OneLake, Lakehouse, Warehouses and Power BI).
Design and manage enterprise-grade workloads and semantic models to enable robust analytics and reporting, utilising Fabric and Azure ML.
Plan and size Fabric initiatives, including capacity needs, effort estimates, infrastructure requirements, team resourcing, and delivery timelines.
Extensive knowledge of data modelling, DAX, Python/PySpark, and SQL/KQL.
Proven record of technical leadership and effective stakeholder engagement.
Lead architectural design, capacity planning, and performance optimisation efforts.
Implement governance, security, and DevOps best practices.
Lead a small team of data engineers and contribute to the development of the platform strategy.

Interested? Please submit your updated CV to Emma Siwicki at Harvey Nash for immediate consideration.
16/03/2026
Full time
Synapri
Lead Platform Engineer
Synapri
Lead Data Platform Engineer - Databricks - IaC - Terraform - Azure Data Factory - Data Lakehouse

The Data Platform Engineer designs, develops, automates, and maintains secure, scalable, and compliant data platforms that enable the firm to efficiently manage, analyse, and utilise data. The role ensures that data solutions are robust and reliable while meeting regulatory obligations and safeguarding client confidentiality.

Key Responsibilities
Design and architect scalable, secure, and compliant data platforms and solutions, producing technical documentation and securing approvals through governance bodies such as Architecture Review Boards.
Build and deliver robust data solutions using Databricks, PySpark, Spark SQL, Azure Data Factory, and Azure services.
Develop APIs and write efficient Python, PySpark, and SQL code to support data integration, processing, and automation.
Implement and manage CI/CD pipelines and automated deployments using Azure DevOps to enable reliable releases across environments.
Develop and maintain infrastructure-as-code (e.g. Terraform, ARM) to provision and manage cloud resources, including ADF pipelines, Databricks assets, and Unity Catalog components.
Monitor, troubleshoot, and optimise data platform performance, reliability, and costs, identifying bottlenecks and recommending improvements.
Create dashboards and observability tools to report on platform performance, usage, incidents, and operational KPIs.

Knowledge, Skills & Experience
Degree in Computer Science, Data Engineering, or a related field.
Proven experience designing and building cloud-based data platforms, ideally within Azure.
Strong hands-on expertise with Databricks, PySpark, Spark SQL, and Azure Data Factory.
Solid understanding of Data Lakehouse architecture and modern data platform design.
Proficiency in Python for data engineering, automation, and data processing.
Experience developing and integrating REST APIs for data services.
Strong DevOps experience, including CI/CD, automated testing, and release management for data platforms.
Experience with infrastructure-as-code tools such as Terraform or ARM templates.
Knowledge of data modelling, ETL/ELT pipelines, and data warehousing concepts.
Familiarity with monitoring, logging, and alerting tools (e.g. Azure Monitor).

Desirable
Experience with additional Azure services (e.g. Fabric, Azure Functions, Logic Apps).
Knowledge of cloud cost optimisation for data platforms.
Understanding of data governance and regulatory compliance (e.g. GDPR).
Experience working in regulated or professional services environments.
11/03/2026
Full time
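The operational-KPI dashboards this role describes usually summarise pipeline run metadata into a handful of headline numbers. A minimal sketch, where the run-record shape is a hypothetical stand-in for ADF / Azure Monitor run metadata:

```python
# Summarise pipeline runs into dashboard-style KPIs: success rate and
# mean duration. The record fields below are illustrative assumptions.

def run_kpis(runs):
    """Compute success rate and average duration from a list of run records."""
    total = len(runs)
    if total == 0:
        return {"success_rate": 0.0, "avg_duration_secs": 0.0}
    succeeded = sum(1 for r in runs if r["status"] == "Succeeded")
    avg_secs = sum(r["duration_secs"] for r in runs) / total
    return {"success_rate": succeeded / total, "avg_duration_secs": avg_secs}

runs = [
    {"status": "Succeeded", "duration_secs": 120},
    {"status": "Succeeded", "duration_secs": 180},
    {"status": "Failed", "duration_secs": 60},
    {"status": "Succeeded", "duration_secs": 240},
]
print(run_kpis(runs))  # {'success_rate': 0.75, 'avg_duration_secs': 150.0}
```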
Randstad Technologies Recruitment
Lead PySpark Engineer
Randstad Technologies Recruitment City, London
PySpark Engineer Lead

As the Technical Lead, you will drive the high-stakes migration of legacy SAS analytics to a modern, cloud-native PySpark ecosystem on AWS. This isn't just a lift and shift: you will refactor complex procedural logic into scalable, production-ready distributed pipelines for a Tier-1 financial services environment.

Core Responsibilities
- Engineering Leadership: Design and develop complex ETL/ELT pipelines and Data Marts using PySpark, EMR, and Glue.
- Legacy Modernisation: Architect the conversion of SAS Base/Macros into modular, testable Python code using SAS2PY and manual refactoring.
- Performance Tuning: Optimise Spark execution (partitioning, shuffling, caching) to ensure cost-efficient processing of massive financial datasets.
- Quality & Governance: Implement rigorous CI/CD, unit testing, and data reconciliation frameworks to ensure "penny-perfect" accuracy.

Technical Stack
- Engine: PySpark (Expert), Python (Clean Code/SOLID principles).
- AWS: EMR, Glue, S3, Athena, IAM, Lambda.
- Data Modelling: SCD Type 2, Fact/Dimension tables, Data Vault/Star Schema.
- Legacy: Proficiency in reading/debugging SAS (Base, Macros, DI Studio).
- DevOps: Git-based workflows, Jenkins/GitLab CI, Terraform.

Randstad Technologies is acting as an Employment Business in relation to this vacancy.
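The SCD Type 2 modelling named in the stack can be illustrated with a minimal pure-Python sketch. In the role itself this would typically be a PySpark/Delta Lake MERGE over dimension tables; the record shape and field names here are hypothetical.

```python
from datetime import date

# Hypothetical sketch of SCD Type 2: expire the current dimension row and
# open a new versioned row when a tracked attribute changes. In production
# this would be a PySpark/Delta Lake MERGE, not an in-memory list.
def apply_scd2(dim_rows, incoming, today):
    """dim_rows: dicts with keys id, attr, valid_from, valid_to, current."""
    for row in dim_rows:
        if row["id"] == incoming["id"] and row["current"]:
            if row["attr"] == incoming["attr"]:
                return dim_rows                      # no change: nothing to do
            row["valid_to"] = today                  # expire the old version
            row["current"] = False
    dim_rows.append({
        "id": incoming["id"],
        "attr": incoming["attr"],
        "valid_from": today,
        "valid_to": None,
        "current": True,
    })
    return dim_rows

dim = [{"id": 1, "attr": "Bronze", "valid_from": date(2024, 1, 1),
        "valid_to": None, "current": True}]
dim = apply_scd2(dim, {"id": 1, "attr": "Gold"}, date(2025, 6, 1))
current = [r for r in dim if r["current"]]
print(current[0]["attr"])  # Gold
```

The full history stays queryable: the expired "Bronze" row keeps its validity window, which is what reconciliation frameworks rely on when proving migrated output matches the legacy SAS results.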
10/03/2026
Contractor
Peregrine
Data Engineer
Peregrine
We are Data Services. Our mission is to unlock the value of data by delivering high-quality, reliable, and secure data services that are accessible, understandable, and actionable. We continuously evolve our offerings, leveraging modern cloud-based technologies and fostering strong partnerships, to help our colleagues in the Bank navigate the complexities of a data-driven world and achieve their strategic objectives.

Active SC Clearance required.

Job Description: The world of data in central banking is evolving rapidly. With the rise of detailed data collection in financial regulation and the swift advancements in cloud-native data technologies, the demand for visionary data engineers is growing. We're seeking a senior Data Engineer to join our Data Engineering team and play a pivotal role in shaping the Bank's strategic cloud-first data platform. As a senior member of the team, you will play a key role in designing and delivering robust, scalable data solutions that support the Bank's core responsibilities around monetary policy, financial stability, and regulatory supervision. You'll contribute to technical design decisions, mentor engineers, and collaborate across teams to ensure our data infrastructure continues to evolve and meet future demands.

Role Responsibilities
- Lead the design, development, and deployment of scalable, secure, and cost-effective distributed data solutions using Azure services (e.g., Azure Databricks, Azure Data Lake Storage, Azure Data Factory).
- Architect and implement advanced data pipelines using Databricks, Delta Lake, Python, and Spark, ensuring performance, reliability, and maintainability across cloud and on-prem environments.
- Champion data quality, governance, and observability, ensuring data is accurate, timely, and fit for purpose for analytics, BI, and operational use cases.
- Drive the modernisation of legacy systems, leading the migration of data infrastructure to Azure with minimal disruption and long-term scalability.
- Act as a technical authority on Azure-native data engineering, guiding best practices and setting standards across the team.
- Mentor and coach junior and mid-level engineers, fostering a culture of continuous learning, innovation, and technical excellence.
- Collaborate with architects, analysts, and stakeholders to align data engineering efforts with strategic business goals and enterprise data strategy.
- Evaluate and introduce emerging technologies, tools, and methodologies to enhance the Bank's data capabilities.
- Own the end-to-end delivery of complex data solutions, from requirements gathering to production deployment and support.
- Contribute to the development of reusable frameworks, templates, and patterns to accelerate delivery and ensure consistency across projects.

Minimum Criteria
- Extensive experience with Azure services including Azure Databricks, Azure Data Lake Storage, and Azure Data Factory.
- Advanced proficiency in SQL, Python, and Spark (PySpark), with a strong focus on performance optimisation and distributed processing.
- Proven experience in CI/CD practices using industry-standard tools (e.g., GitHub Actions, Azure DevOps).
- Strong understanding of data architecture principles and cloud-native design patterns.

Essential Criteria
- Demonstrated ability to lead technical delivery, mentor engineering teams, and collaborate with stakeholders to ensure alignment between data solutions and business strategy.
- Proficiency in Linux/Unix environments and shell scripting.
- Deep understanding of source control, testing strategies, and agile development practices.
- Self-motivated with a strategic mindset and a passion for driving innovation in data engineering.

Desirable Criteria
- Experience delivering data pipelines on Hortonworks/Cloudera on-prem and leading cloud migration initiatives.
- Familiarity with Apache Airflow, data modelling, and metadata management.
- Experience influencing enterprise data strategy and contributing to architectural governance.
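The CI/CD practices the criteria name (e.g. GitHub Actions) might take a shape like the workflow fragment below for a data pipeline repository. This is a sketch under assumptions: the job name, repository layout, linter, and Python version are all illustrative, not the Bank's actual setup.

```yaml
# Hypothetical sketch of a GitHub Actions workflow that lints and unit-tests
# pipeline code on every pull request. Paths and versions are assumptions.
name: data-pipeline-ci
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt   # assumed requirements file
      - run: ruff check src/                   # assumed linter and layout
      - run: pytest tests/ --maxfail=1         # unit + data-quality tests
```

Gating merges on a workflow like this is one common way to enforce the testing strategies and release discipline the criteria describe.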
10/03/2026
Full time
Datatech
Senior Data Engineer - (Python & SQL)
Datatech
Senior Data Engineer (Python & SQL)
Location: London, with hybrid working (Monday to Wednesday in the office)
Salary: £70,000 to £85,000 depending on experience
Reference: J13026

An AI-first SaaS business that transforms high-quality first-party data into trusted, decision-ready insight at scale is looking for a Senior Data Engineer to join its growing data and engineering team. This role sits at the core of data engineering. You will work with data that is often imperfect and transform it into well-structured, reliable datasets that other teams can depend on. The focus is on engineering high-quality data foundations rather than analytics or cloud infrastructure alone. You will design and build clear, maintainable data pipelines using Python and SQL within a modern data and AI platform, with a strong focus on data quality, robustness, and long-term reliability. You will also play an important mentoring role within the team, supporting and guiding other data engineers and helping to raise engineering standards through thoughtful, hands-on leadership.

Why join
- A supportive and inclusive environment where different perspectives are welcomed and people are encouraged to contribute and be heard
- Clear progression with space to deepen your technical expertise and grow your confidence at a sustainable pace
- A team that values collaboration, good communication, and shared ownership over hero culture
- The opportunity to work on meaningful data engineering problems where quality genuinely matters

What you will be doing
- Designing and building cloud-based data and machine learning pipelines that prepare data for analytics, AI, and product use
- Writing clear, well-structured Python, PySpark, and SQL to transform and validate data from multiple upstream sources
- Taking ownership of data quality, consistency, and reliability across the pipeline lifecycle
- Shaping scalable data models that support a wide range of downstream use cases
- Working closely with Product, Engineering, and Data Science teams to understand data needs and constraints
- Mentoring and supporting other data engineers, sharing knowledge and encouraging good engineering practices
- Contributing to the long-term health of the data platform through thoughtful design and continuous improvement

What we are looking for
- Strong experience using Python and SQL to transform large, real-world datasets in production environments
- A deep understanding of data structures, data quality challenges, and how to design reliable transformation logic
- Experience working with modern data platforms such as Azure, GCP, AWS, Databricks, Snowflake, or similar
- Confidence working with imperfect data and making it fit for consumption downstream
- Experience supporting or mentoring other engineers through code reviews, pairing, or informal guidance
- Clear, thoughtful communication and a collaborative mindset

You do not need to meet every requirement listed. What matters most is strong, hands-on experience using Python and SQL to work confidently with complex, real-world data, apply sound engineering judgement, and help others grow through your experience. Right to work in the UK is required. Sponsorship is not available now or in the future. Apply to find out more about the role. If you have a friend or colleague who may be interested, referrals are welcome. For each successful placement, you will be eligible for our general gift or voucher scheme. Datatech is one of the UK's leading recruitment agencies specialising in analytics and is the host of the critically acclaimed Women in Data event. For more information, visit (url removed)
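The kind of "imperfect data made fit for consumption" work this role centres on can be sketched in plain Python. The field names and validation rules below are hypothetical; in the platform itself this logic would likely live in PySpark or SQL transformations.

```python
# Hypothetical sketch: validate and normalise imperfect upstream records,
# splitting them into clean rows and rejects tagged with a reason.
# Field names and rules are illustrative assumptions.
def validate(records):
    clean, rejects = [], []
    for rec in records:
        email = (rec.get("email") or "").strip().lower()
        amount = rec.get("amount")
        if "@" not in email:
            rejects.append({**rec, "reason": "invalid email"})
            continue
        try:
            amount = float(amount)
        except (TypeError, ValueError):
            rejects.append({**rec, "reason": "non-numeric amount"})
            continue
        clean.append({"email": email, "amount": round(amount, 2)})
    return clean, rejects

raw = [
    {"email": "  Ada@Example.COM ", "amount": "19.991"},
    {"email": "not-an-email", "amount": "5"},
    {"email": "bob@example.com", "amount": None},
]
clean, rejects = validate(raw)
print(len(clean), len(rejects))  # 1 2
```

Keeping rejects with an explicit reason, rather than silently dropping them, is what makes data quality observable downstream: reject counts per reason become a pipeline health metric.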
09/03/2026
Full time
Data Idols
Staff Data Engineer
Data Idols
Staff Data Engineer
Salary: £85,000 - £95,000
Location: London, hybrid

Data Idols are working with one of the best-known retail brands in the UK, which is investing heavily in its data platform. They are looking for a Staff Data Engineer to play a key role in scaling production data systems and raising engineering standards across the wider data function. This role sits at the centre of a major data transformation and offers the chance to work on high-impact data platforms used across the business.

The Opportunity
As a Staff Data Engineer, you'll take ownership of complex, production-grade data pipelines and act as a technical leader within the data engineering team. You'll work on cloud-native solutions built on Azure and Databricks, making key decisions around data processing, modelling, and performance. Alongside hands-on delivery, you'll help set best practices, support other engineers, and influence how data engineering is done across the organisation.

Skills & Experience
- Strong hands-on experience with Azure data platforms
- Advanced SQL skills
- Commercial experience using Databricks and PySpark
- Proven background building and maintaining scalable data pipelines

If you're looking for a role where you can combine technical depth, ownership, and influence, please submit your CV for initial screening and further details.
28/02/2026
Full time
Data Engineer
Youngs Employment Services
Data Engineer
London + 2 or 3 days work from home
Circa £60,000 - £70,000 + Excellent Benefits Package

A fantastic opportunity is available for a Data Engineer who enjoys working in a fast-paced, collaborative, team-playing work environment. Our client has been expanding at a remarkable pace and has transformed its technical landscape with leading-edge solutions. Having implemented a new MS Fabric-based data platform, the need is now to scale up and deliver data-driven insights and strategies right across the business globally. The Data Engineer will be joining a close-knit team that is the hub of our client's global data & analytics operation. Previous experience with MS Fabric would be beneficial but is by no means essential. Interested candidates must have experience in a similar role with MS Azure data platforms, Synapse, Databricks, or other cloud platforms such as AWS, GCP, Snowflake, etc.

Key Responsibilities will include:
- Design, implement, and optimise end-to-end solutions using Fabric components:
  o Data Factory (pipelines, orchestration)
  o Data Engineering (Lakehouse, notebooks, Apache Spark)
  o Data Warehouse (SQL endpoints, schemas, MPP performance tuning)
  o Real-Time Analytics (KQL databases, event ingestion)
- Manage and enhance OneLake architecture, Delta Lake tables, security policies, and data governance within Fabric.
- Build scalable, reusable data assets and engineering patterns that support analytics, reporting, and machine learning workloads.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver effective solutions.
- Troubleshoot and resolve data-related issues in a timely manner.

Key Experience, Skills and Knowledge:
- Proven 2 yrs+ experience as a Data Engineer or similar role, with a strong focus on PySpark, SQL, and Microsoft Azure data platforms; Power BI an advantage.
- Proficiency in development languages suitable for intermediate-level data engineers, such as:
  o Python / PySpark: widely used for data manipulation, analysis, and scripting.
  o SQL: essential for querying and managing relational databases.
- Understanding of D365 F&O data structures is highly desirable.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.

This is a hybrid role based in Central / West London with the flexibility to work from home 2 or 3 days per week. Salary will be dependent on experience and is expected to be in the region of £60,000 - £70,000 + an attractive benefits package including a bonus scheme. For further information, please send your CV to Wayne Young at Young's Employment Services Ltd. YES is operating as both a Recruitment Agency and a Recruitment Business.
27/02/2026
Full time
Square One Resources
Snowflake Data Engineer
Square One Resources City, London
Job Title: Snowflake Data Engineer
Location: London (2 days on-site per week)
Salary/Rate: £550 - £600 per day inside IR35
Start Date: March
Job Type: Initial 3-6 month contract

Company Introduction
We have an exciting opportunity now available with one of our sector-leading consultancy clients! They are currently looking for a skilled Snowflake Data Engineer to help on their cloud migration project.

Job Responsibilities/Objectives
You will be responsible for designing and building scalable data pipelines, Data Vault/dimensional models, and Snowflake/dbt workloads for cloud migration projects.
- Implement Data Vault 2.0 (Hubs, Links, Satellites) or a dimensional model on Snowflake.
- Build ELT pipelines using Snowflake, dbt, and Python/PySpark.
- Develop ingestion from APIs, databases, and streams.
- Optimise Snowflake warehouses, cost, and performance.
- Collaborate with architects, analysts, and DevOps.
- Maintain documentation, lineage, and governance standards.

Required Skills/Experience
The ideal candidate will have the following:
- Strong SQL; Snowflake ELT; dbt experience.
- Python/PySpark, ETL/ELT design.
- Data Vault 2.0 or dimensional modelling.
- AWS services (S3, Glue, Lambda, Redshift) or GCP equivalents.
- Experience with CI/CD for data pipelines.

Good-to-have skills
Although not essential, the following skills are desired by the client:
- Kafka/Kinesis, Airflow, CodePipeline.
- BI tools (Power BI/Tableau).
- Docker/OpenShift; metadata-driven pipelines.
- 3-8+ years Data Engineering experience.
- Cloud data engineering and Snowflake/dbt hands-on exposure.

If you are interested in this opportunity, please apply now with your updated CV in Microsoft Word/PDF format.

Disclaimer
Notwithstanding any guidelines given to the level of experience sought, we will consider candidates from outside this range if they can demonstrate the necessary competencies. Square One is acting as both an employment agency and an employment business, and is an equal opportunities recruitment business. Square One embraces diversity and will treat everyone equally. Please see our website for our full diversity statement.
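The Data Vault 2.0 modelling mentioned above commonly derives hub and link keys by hashing business keys. A minimal Python sketch follows; MD5 over a delimited, upper-cased key string is one common convention, but the exact hashing rules and key names here are illustrative assumptions, not the client's standard.

```python
import hashlib

# Hypothetical sketch: derive Data Vault 2.0 hash keys from business keys.
# Normalising (trim + upper-case) before hashing keeps keys stable across
# inconsistently formatted source systems. Conventions are assumptions.
def hash_key(*business_keys, delimiter="||"):
    normalised = delimiter.join(str(k).strip().upper() for k in business_keys)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

# Hub key for a customer, and a link key joining customer and account:
customer_hk = hash_key("CUST-001")
link_hk = hash_key("CUST-001", "ACC-42")

print(customer_hk == hash_key("  cust-001 "))  # True: normalisation makes keys stable
```

In a Snowflake/dbt build, the same derivation would typically be a dbt macro applied consistently across Hubs, Links, and Satellites so that keys join deterministically.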
23/02/2026
Contractor
Pontoon
Lead Data Engineer
Pontoon Warwick, Warwickshire
Job Title: Lead Data Engineer
Location: Warwick (Hybrid - once per week onsite)
Remuneration: From £700 per day (via umbrella company)
Contract Details: Fixed Term Contract, 6 months, Full Time

Are you ready to take the lead in transforming data engineering? Our client, a forward-thinking organisation, is seeking a highly skilled and hands-on Lead Data Engineer to drive their Data & Insights platform built on the Azure stack. This is a Databricks-heavy role, ideal for someone who thrives in technical environments and enjoys working directly with cutting-edge tools.

Why This Role Stands Out
This is not just a leadership role - it's a hands-on engineering opportunity where your Databricks expertise will be front and centre. You'll be the go-to person for all things Databricks, from architecture and configuration to pipeline promotion and troubleshooting.

Key Responsibilities
- Lead and deliver hands-on data engineering across all layers of the Azure-based data platform.
- Act as the Databricks SME (Subject Matter Expert), overseeing architecture, configuration, and documentation.
- Guide and support the engineering team while contributing directly to development efforts.
- Manage DevOps practices including branch supervision and merging.
- Promote data pipelines and notebooks through development, testing, and production environments.
- Enhance monitoring and control frameworks to ensure platform reliability.
- Provide technical leadership and mentorship to a small team of Data Engineers.

Essential Skills
- Extensive hands-on experience with Databricks - this is the core of the role.
- Strong background in Synapse and Azure DevOps.
- Proficiency in SQL and PySpark within a Databricks environment.
- Proven experience leading small engineering teams.
- Skilled in configuration management and technical documentation.

If you're a Databricks expert looking for a role that blends leadership with deep technical involvement, this is your chance to make a real impact. Join a dynamic team and help shape the future of data engineering. Ready to elevate your career? Apply now and be a vital part of this exciting transformation!

Pontoon is an employment consultancy. We put expertise, energy, and enthusiasm into improving everyone's chance of being part of the workplace. We respect and appreciate people of all ethnicities, generations, religious beliefs, sexual orientations, gender identities, and more. We do this by showcasing their talents, skills, and unique experience in an inclusive environment that helps them thrive. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you.
07/10/2025
Contractor
Oliver James
Data Engineer - Manchester Hybrid
Oliver James Manchester, Lancashire
Data Engineer
Salary: £60,000
Location: Manchester (Hybrid)

We're looking for an experienced Senior Data Engineer to join our team and lead the delivery of advanced Business Intelligence solutions using the latest Microsoft Azure technologies. Our client is a forward-thinking organisation that thrives on innovation, data-driven insights, and excellence. Their Data Analytics Team is at the forefront of driving commercial success and operational efficiency, and we are looking for a skilled Engineer to join a dynamic team.

Reporting to the Head of Data and Business Intelligence, you will lead the development and implementation of performant, scalable, and cost-effective data and reporting solutions. You'll work across data sources including Microsoft Dynamics, in-house systems, and third-party platforms, leveraging the latest Microsoft technologies.

Key Responsibilities:
  • Successful delivery of complex Business Intelligence solutions using modern data platform and reporting technologies and services in Microsoft Azure, especially Synapse, ADF, and Power BI (Datasets and Reports); ideally SSIS, SSRS, and SSAS, with some understanding of Power App design and delivery
  • Proficiency in SQL and Python (PySpark)
  • Understanding of data modelling concepts
  • Experience of working with code management and deployment tools
  • Proficiency in debugging, monitoring, tuning, and troubleshooting BI solutions
  • Knowledge and a proven track record in data governance / data quality management

Desirable Experience:
  • Familiarity with Microsoft Dynamics CRM, F&O, or Field Services as data sources

If you're a technically strong, delivery-focused data professional with a passion for modern Azure-based BI solutions, we'd love to hear from you.
06/10/2025
Full time
