Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

25 jobs found

Current search: azure databricks engineer contract
Databricks Data Engineer
Tenth Revolution Group
Data Engineer (Databricks & Azure) - 3-Month Rolling Contract
Rate: £400-£450 per day
Location: Remote
IR35 Status: Outside IR35
Duration: Initial 3 months (rolling)

About the Company
Join a leading Databricks Partner delivering innovative data solutions for enterprise clients. You'll work on cutting-edge projects leveraging Databricks and Azure to transform data into actionable insights.

About the Role
We are seeking an experienced Data Engineer with strong expertise in Databricks and Azure to join our team on a 3-month rolling contract. This is a fully remote position, offering flexibility and autonomy while working on high-impact data engineering initiatives.

Key Responsibilities
  • Design, develop, and optimize data pipelines using Databricks and Azure Data Services.
  • Implement best practices for data ingestion, transformation, and storage.
  • Collaborate with stakeholders to ensure data solutions meet business requirements.
  • Monitor and troubleshoot data workflows for performance and reliability.

Essential Skills
  • Proven experience with Databricks (including Spark-based data processing).
  • Strong knowledge of the Azure Data Platform (Data Lake, Synapse, etc.).
  • Proficiency in Python and SQL for data engineering tasks.
  • Understanding of data architecture and ETL processes.
  • Ability to work independently in a remote environment.

Nice-to-Have
  • Experience with CI/CD pipelines for data solutions.
  • Familiarity with Delta Lake and ML pipelines.

Start Date: ASAP
Contract Type: Outside IR35
Apply Now: If you're a skilled Data Engineer looking for a flexible, remote opportunity with a Databricks Partner, we'd love to hear from you!
03/12/2025
Contractor
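A minimal, hedged sketch of the kind of Databricks pipeline work this listing describes: ingest raw files from Azure Data Lake Storage, apply a transformation, and write a Delta table. It is not taken from the advertiser; the storage account, container, and column names are hypothetical, and it assumes a Databricks notebook where `spark` is predefined.

    # Ingest raw CSVs from ADLS, clean them, and persist as Delta.
    from pyspark.sql import functions as F

    raw = (
        spark.read.option("header", "true")
        .csv("abfss://landing@examplestorage.dfs.core.windows.net/sales/")
    )

    cleaned = (
        raw.withColumn("amount", F.col("amount").cast("double"))  # enforce types
           .withColumn("ingested_at", F.current_timestamp())      # audit column
           .dropDuplicates(["order_id"])                          # idempotent re-runs
    )

    # Delta provides ACID writes for reliable downstream consumption.
    cleaned.write.format("delta").mode("overwrite").save(
        "abfss://curated@examplestorage.dfs.core.windows.net/sales_clean/"
    )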
Data Engineer
Qualient Technology Solutions UK Limited
Role Title: Data Engineer
Role Type: Contract (Inside IR35)
Role Location: London, UK
Mandatory: Active Security Clearance

Essential Skills & Experience
  • 10+ years of experience in data engineering, with 3+ years of hands-on experience with Azure Databricks.
  • Good proficiency in Python and Spark (PySpark) or Scala.
  • Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns.
  • Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and Azure SQL Database.
  • Experience working with large datasets and complex data pipelines.
  • Experience with data architecture design and data pipeline optimization.
  • Proven expertise with Databricks, including hands-on implementation experience and certifications.
  • Experience with SQL and NoSQL databases.
  • Experience with data quality and data governance processes.
  • Experience with version control systems (e.g., Git).
  • Experience with Agile development methodologies.
29/11/2025
Contractor
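The "data integration patterns" this listing asks about often reduce to incremental upserts. Below is a hedged illustration using Delta Lake's MERGE on Azure Databricks; the table and key names are invented for the example, and `spark` is assumed from a notebook session.

    # Upsert a staged batch into a warehouse dimension via Delta MERGE.
    from delta.tables import DeltaTable

    updates = spark.read.format("delta").load("/mnt/staging/customers")
    target = DeltaTable.forName(spark, "warehouse.dim_customer")

    (
        target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()      # refresh attributes for existing customers
        .whenNotMatchedInsertAll()   # insert customers seen for the first time
        .execute()
    )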
Data Engineer
Experis City, Swindon
Data Engineer
6 months initially
Remote working, with visits to the Swindon office once per week
Inside IR35 - Umbrella only

Role Overview
We are seeking an experienced Data Engineer to join our team. The ideal candidate will bring 5-7 years of experience in data engineering, with a strong focus on Azure Databricks.

Key Skills & Experience
  • Proven expertise in Azure Databricks (must-have)
  • Hands-on experience with dbt (Data Build Tool)
  • Strong knowledge of Snowflake
  • Solid background in designing, building, and optimizing data pipelines
  • Ability to collaborate effectively in hybrid working arrangements

All profiles will be reviewed against the required skills and experience. Due to the high number of applications, we will only be able to respond to successful applicants in the first instance. We thank you for your interest and the time taken to apply!
27/11/2025
Contractor
Python Data Engineer
Hays Technology
Data Engineer - Security Clearance (SC), Python, Azure, BDD
Up to £475 per day (Inside IR35)
Remote / London
6 months

My client is an international consultancy requiring a Security Cleared Data Engineer with active Security Clearance (SC) and strong Python skills to design and deploy scalable data solutions in a containerized Azure environment.

Key requirements:
  • Proven experience as a Data Engineer with active Security Clearance (SC)
  • Strong Python skills with modular, test-driven design
  • Experience with Behave for unit and BDD testing (mocking, patching)
  • Proficiency in PySpark and distributed data processing
  • Solid understanding of Delta Lake (design and maintenance)
  • Hands-on with Docker for development and deployment
  • Familiarity with Azure services: Functions, Key Vault, Blob Storage
  • Ability to build configurable, parameter-driven applications
  • Exposure to CI/CD pipelines (ideally Azure DevOps) and cloud security best practices
  • Strong collaboration and communication skills

Nice to have:
  • Immediate availability
  • Experience with Databricks or Synapse
  • Knowledge of data governance in Azure ecosystems
  • Infrastructure as Code (IaC) tooling

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this job isn't quite right for you but you are looking for a new position, please contact us for a confidential discussion about your career. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found at (url removed)
27/11/2025
Contractor
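The mocking and patching this listing mentions is plain Python, whether driven from Behave step definitions or any other test runner. Here is a small, self-contained sketch using unittest.mock; the function names are hypothetical, and a real Behave setup would place the assertion inside a step function.

    from unittest.mock import patch

    def fetch_config(env: str) -> dict:
        # Stand-in for a call to Azure Key Vault / app configuration.
        raise RuntimeError("not reachable outside the real environment")

    def run_pipeline(env: str) -> str:
        cfg = fetch_config(env)
        return f"loading from {cfg['source_path']}"

    # Patch the dependency so the logic can be verified in isolation.
    with patch(__name__ + ".fetch_config", return_value={"source_path": "/tmp/in"}):
        assert run_pipeline("dev") == "loading from /tmp/in"
    print("ok")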
Azure Data Engineer - £400PD - Remote
Tenth Revolution Group City, London
Azure Data Engineer - £400PD - Remote

Seeking an experienced Data Engineer to design, build, and optimise data solutions within the Microsoft Azure ecosystem. The role focuses on pipeline development, data modelling, governance, and supporting analytics teams with high-quality, reliable data.

Key Responsibilities:
  • Develop and maintain scalable data pipelines using Azure Data Factory, Synapse, Databricks, and Microsoft Fabric.
  • Build efficient ETL/ELT processes and data models to support analytics, reporting, and dashboards.
  • Optimise existing pipelines for performance, reliability, and cost efficiency.
  • Implement best practices for data quality, error handling, automation, and monitoring.
  • Manage data security and governance, including Row-Level Security (RLS) and compliance standards.
  • Maintain documentation and metadata for data assets and pipeline architecture.
  • Collaborate with analysts, data scientists, and stakeholders to deliver trusted data products.
  • Provide technical support and troubleshoot production issues.
  • Contribute to improving data engineering processes and the development lifecycle.

Required Skills & Experience:
  • Proven experience as a Data Engineer working with Azure.
  • Strong skills in Azure Data Factory, Synapse Analytics, Databricks, SQL Database, and Azure Storage.
  • Excellent SQL and data modelling (star/snowflake, dimensional modelling).
  • Knowledge of Power BI dataflows, DAX, and RLS.
  • Experience with Python, PySpark, or T-SQL for transformations.
  • Understanding of CI/CD and DevOps (Git, YAML pipelines).
  • Strong grasp of data governance, security, and performance tuning.

To apply for this role please submit your CV or contact Dillon Blackburn on (phone number removed) or at (url removed). Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
27/11/2025
Contractor
Data Engineer (Databricks)
Hays Technology City, London
About the Role
We are seeking a Databricks Data Engineer with strong expertise in designing and optimising large-scale data engineering solutions within the Databricks Data Intelligence Platform. This role is ideal for someone passionate about building high-performance data pipelines and ensuring robust data governance across modern cloud environments.

Key Responsibilities
  • Design, build, and maintain scalable data pipelines using Databricks Notebooks, Jobs, and Workflows for both batch and streaming data.
  • Optimise Spark and Delta Lake performance through efficient cluster configuration, adaptive query execution, and caching strategies.
  • Conduct performance testing and cluster tuning to ensure cost-efficient, high-performing workloads.
  • Implement data quality, lineage tracking, and access control policies aligned with Databricks Unity Catalog and governance best practices.
  • Develop PySpark applications for ETL, data transformation, and analytics, following modular and reusable design principles.
  • Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel for versioned data management.
  • Integrate Databricks solutions with Azure services such as Azure Data Lake Storage, Key Vault, and Azure Functions.

What We're Looking For
  • Proven experience with Databricks, PySpark, and Delta Lake.
  • Strong understanding of workflow orchestration, performance optimisation, and data governance.
  • Hands-on experience with Azure cloud services.
  • Ability to work in a fast-paced environment and deliver high-quality solutions.
  • SC Cleared candidates.

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this job isn't quite right for you but you are looking for a new position, please contact us for a confidential discussion about your career. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found at (url removed)
27/11/2025
Contractor
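Two Delta Lake features named in this listing, schema evolution and time travel, are worth seeing concretely. A hedged sketch with placeholder paths and columns, assuming a Databricks notebook session where `spark` exists:

    path = "/mnt/lake/events"
    new_batch = spark.createDataFrame(
        [("e1", "click", "GB")], ["event_id", "type", "country"]
    )

    # mergeSchema allows a newly added column without rewriting old data.
    (new_batch.write.format("delta").mode("append")
        .option("mergeSchema", "true").save(path))

    # Time travel: query the table as of an earlier version for audits/rollback.
    v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
    v0.show()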
Data Engineering Team Lead - Remote - Databricks - Azure - £80k
Tenth Revolution Group City, London
Data Engineering Team Lead - Remote - Databricks - Azure - £80k

Join a Leading Microsoft Consultancy Driving Data Innovation
I'm working with a well-established Microsoft Partner with an incredible project pipeline, rapid growth, and a reputation for delivering Tech for Good. They're working on cutting-edge projects using emerging technologies like Microsoft Fabric and Azure Databricks. We're looking for a Data Engineering Team Lead who combines hands-on technical expertise with leadership skills to mentor a talented team and deliver exceptional solutions for our clients.

What You'll Do
  • Lead and mentor a team of Technical Consultants, driving engagement, growth, and alignment with our culture.
  • Oversee resource planning, scheduling, and performance management.
  • Collaborate with Pre-sales, Commercial, and Project Management teams to scope and deliver projects.
  • Ensure consistent delivery of technical solutions aligned with best practices and standards.
  • Support technical delivery when needed, including designing scalable data solutions in Microsoft/Azure environments.
  • Contribute to innovation through cloud migrations, data lakes, and robust ETL/ELT solutions.

What We're Looking For
  • Hands-on data engineering experience (not Data Analyst or Data Scientist roles).
  • Strong background in Azure Synapse, Databricks, or Microsoft Fabric.
  • Expertise in ETL/ELT development using SQL and Python.
  • Experience implementing data lakes and medallion lakehouse architecture.
  • Skilled in managing large datasets and writing advanced SQL/Python queries.
  • Solid understanding of BI and data warehousing concepts.
  • Excellent communication skills and ability to build strong relationships.
  • Ideally, experience in consulting environments and working within high-performing teams.

The Company
  • Rapid Growth & Exciting Projects - continuing to grow, working on cutting-edge Microsoft Cloud solutions.
  • Investment in You - they invest in training and development, with clear certification pathways for you.
  • Home-Based Contract - work remotely with all travel expenses covered.
  • UK-Based Delivery - we never offshore; all consultants are UK-based.
  • Competitive salary and benefits, including: 25 days holiday, private health insurance (after one year), life assurance (4x base salary), enhanced parental pay, Perkbox, Cyclescheme, Electric Car Scheme.

Ready to lead and innovate? Apply now or send your CV directly to me: (url removed)
27/11/2025
Full time
Databricks Data Engineer
Hays Technology
Data Engineer - Active SC, Databricks, PySpark
Up to £475 per day (Inside IR35)
Remote / London
6 months

My client is an international consultancy recruiting for a Data Engineer with active Security Clearance (SC) and strong Databricks and Azure experience to deliver and optimise data engineering solutions.

Key requirements:
  • Proven experience as a Data Engineer with active Security Clearance (SC)
  • Strong experience with Databricks, PySpark and Delta Lake
  • Expertise in Jobs & Workflows, cluster tuning, and performance optimisation
  • Solid understanding of data governance (Unity Catalog, lineage, access policies)
  • Hands-on with Azure services: Data Lake Storage (Gen2), Key Vault, Azure Functions
  • Familiarity with CI/CD for Databricks deployments
  • Strong troubleshooting in distributed data environments
  • Excellent communication and collaboration skills

Nice to have:
  • Immediate availability
  • Experience with enterprise-scale Databricks environments
  • Knowledge of Azure Synapse, Power BI and cost optimisation strategies

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this job isn't quite right for you but you are looking for a new position, please contact us for a confidential discussion about your career. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found at (url removed)
27/11/2025
Contractor
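The Unity Catalog governance this listing names (access policies, lineage) is administered with ordinary SQL, which a notebook can issue via spark.sql. A hedged sketch; the catalog, schema, table, and group names are invented, and `spark` is assumed from a Databricks session:

    # Grant a group read access down the securable hierarchy.
    spark.sql("GRANT USE CATALOG ON CATALOG main TO `data_readers`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA main.finance TO `data_readers`")
    spark.sql("GRANT SELECT ON TABLE main.finance.transactions TO `data_readers`")

    # Review effective grants (also visible in Catalog Explorer).
    spark.sql("SHOW GRANTS ON TABLE main.finance.transactions").show()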
AWS Data Engineer
TXP
Role: Data Engineer (AWS)
Contract: £450-£500 per day (Inside IR35)
Location: Remote working with occasional travel to site
Duration: End of March 2026 (expected to extend in the new financial year)

We are currently recruiting for a Data Engineer to work on a project within the public sector space. The role will be AWS focused and requires a Data Engineer who can come in and make an impact and a difference to the project.

Skills and experience required
  • Experience of back-end / data engineering across a number of languages (including Python), and commonly used IDEs
  • Experience with developing, scheduling, maintaining and resolving issues with batch or micro-batch jobs on AWS ETL or Azure ETL services
  • Experience querying data stored on AWS S3 or Azure ADLSv2, or through a lakehouse capability
  • Experience in managing API-level and database connectivity
  • Experience using source control and DevOps tooling such as GitLab
  • Experience in the use of Terraform (or similar cloud-native products) to build new data & analytics platform capabilities
  • Experience with developing data features and associated transformation procedures on a modern data platform; examples include (but are not limited to) Microsoft Fabric, AWS Lake Formation, Databricks or Snowflake
  • Experience automating operations tasks with one or more scripting languages

Due to the nature of the project and the short turnaround required, the successful candidate must hold valid and live SC Clearance. If you are interested in the role and would like to apply, please click on the link for immediate consideration.
25/11/2025
Contractor
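For the "querying data stored on AWS S3" requirement above, a minimal boto3 sketch; the bucket and prefix are placeholders, and credentials are assumed to come from the environment or an instance role:

    import boto3

    s3 = boto3.client("s3")

    # Enumerate curated objects under a prefix.
    resp = s3.list_objects_v2(Bucket="example-data-lake", Prefix="curated/sales/")
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])

    # Fetch one object's bytes (e.g., a CSV part file).
    body = s3.get_object(
        Bucket="example-data-lake", Key="curated/sales/part-0000.csv"
    )["Body"].read()
    print(body[:200])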
Data Architect (Insurance Domain)
Randstad Technologies Recruitment
Job Title: Data Architect (Insurance Domain)
Location: London, UK (hybrid role)
Experience Level: 15 years, with at least 5 years in the Azure ecosystem

Role:
We are seeking a seasoned Data Architect to lead the design and implementation of scalable data solutions for a strategic insurance project. The ideal candidate will have deep expertise in Azure cloud services, Azure Data Factory and Databricks, with a strong understanding of data modeling, data integration and analytics in the insurance domain.

Key Responsibilities
  • Architect and design end-to-end data solutions on Azure for insurance-related data workflows
  • Lead data ingestion, transformation and orchestration using ADF and Databricks
  • Collaborate with business stakeholders to understand data requirements and translate them into technical solutions
  • Ensure data quality, governance and security compliance across the data lifecycle
  • Optimize performance and cost efficiency of data pipelines and storage
  • Provide technical leadership and mentoring to data engineers and developers

Mandatory Skillset:
  • Azure Cloud Services: strong experience with Azure Data Lake, Azure Synapse, Azure SQL and Azure Storage
  • Azure Data Factory (ADF): expertise in building and managing complex data pipelines
  • Databricks: hands-on experience with Spark-based data processing, notebooks and ML workflows
  • Data Modeling: proficiency in conceptual, logical and physical data modeling
  • SQL / Python: advanced skills for data manipulation and transformation
  • Insurance Domain Knowledge: understanding of insurance data structures (claims, policy, underwriting) and regulatory requirements

Preferred Skillset:
  • Power BI: experience in building dashboards and visualizations
  • Data Governance Tools: familiarity with tools like Purview or Collibra
  • Machine Learning: exposure to ML model deployment and monitoring in Databricks
  • CI/CD: knowledge of DevOps practices for data pipelines
  • Certifications: Azure Data Engineer or Azure Solutions Architect certifications

Mandatory Skills: Python for Data, Java, Python, Scala, Snowflake, Azure Blob, Azure Data Factory, Azure Functions, Azure SQL, Azure Synapse Analytics, Azure Data Lake, ANSI SQL, Databricks, HDInsight

If you're excited about this role then we would like to hear from you! Please apply with a copy of your CV or send it to Prasanna (email removed) and let's start the conversation! Randstad Technologies Ltd is a leading specialist recruitment business for the IT & Engineering industries. Please note that due to a high level of applications, we can only respond to applicants whose skills and qualifications are suitable for this position. No terminology in this advert is intended to discriminate against any of the protected characteristics that fall under the Equality Act 2010. For the purposes of the Conduct Regulations 2003, when advertising permanent vacancies we are acting as an Employment Agency, and when advertising temporary/contract vacancies we are acting as an Employment Business.
24/11/2025
Full time
Databricks Engineer - £400PD - Remote
Tenth Revolution Group
Databricks Engineer - £400PD - Remote

About the Role
We are seeking a highly skilled Azure Data Engineer with deep, hands-on Databricks experience to join our growing data engineering team. In this role, you will design, build, and optimise scalable data pipelines and lakehouse architectures on Azure, enabling advanced analytics and data-driven decision making across the business.

Key Responsibilities
  • Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake.
  • Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS).
  • Develop high-performance data solutions using Azure Synapse, Azure Data Factory, and Databricks workflows.
  • Implement best practices for data governance, security, and quality across all pipelines.
  • Collaborate with data scientists, analysts, and cross-functional teams to deliver reliable, production-grade data models.
  • Monitor and tune pipeline performance to ensure efficiency, reliability, and cost optimisation.
  • Participate in CI/CD processes and infrastructure-as-code solutions using tools like Terraform, GitHub Actions, or Azure DevOps.

Required Skills & Experience
  • 3+ years' experience as a Data Engineer working in Azure environments.
  • Strong hands-on experience with Databricks (PySpark, Delta Lake, cluster optimisation, job scheduling).
  • Solid knowledge of Azure cloud services including Azure Data Lake Storage, Azure Data Factory, Azure Synapse / SQL Pools, and Azure Key Vault.
  • Strong programming skills in Python and SQL.
  • Experience building scalable, production-grade data pipelines.
  • Understanding of data modelling, data warehousing concepts, and distributed computing.
  • Familiarity with CI/CD, version control, and DevOps practices.

Nice-to-Have
  • Experience with streaming technologies (e.g., Spark Structured Streaming, Event Hub, Kafka).
  • Knowledge of MLflow, Unity Catalog, or advanced Databricks features.
  • Exposure to Terraform or other IaC tools.
  • Experience working in Agile/Scrum environments.

To apply for this role please submit your CV or contact Dillon Blackburn on (phone number removed) or at (url removed). Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
19/11/2025
Contractor
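The streaming nice-to-have in this listing can be sketched with Spark Structured Streaming reading a Kafka-compatible source (Azure Event Hubs exposes one) and appending to Delta. Broker, topic, and paths are placeholders; the Kafka connector ships with Databricks runtimes, and `spark` is assumed from a notebook session.

    from pyspark.sql import functions as F

    stream = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker.example.com:9092")
        .option("subscribe", "orders")
        .load()
    )

    # Kafka delivers binary key/value; cast to strings before parsing further.
    parsed = stream.select(
        F.col("key").cast("string"),
        F.col("value").cast("string").alias("payload"),
        "timestamp",
    )

    # Checkpointing makes the stream restartable with an exactly-once Delta sink.
    (parsed.writeStream.format("delta")
        .option("checkpointLocation", "/mnt/chk/orders")
        .outputMode("append")
        .start("/mnt/lake/orders_stream"))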
Databricks Engineer (SC Cleared)
Syntax Consultancy Ltd City, London
Databricks Engineer (SC Cleared)
London (Hybrid)
6 Month Contract
£(Apply online only)/day (Inside IR35)

Databricks Engineer needed with active SC Security Clearance for a 6 month contract based in Central London (hybrid). Developing a cutting-edge Azure Databricks platform for economic data modelling, analysis, and forecasting. Start ASAP in Nov/Dec 2025. Hybrid working: 2 days/week remote (WFH) and 3 days/week on-site at the Central London office. A chance to work with a leading global IT and digital transformation business specialising in Government projects:
  • In-depth data engineering + strong hands-on Azure Databricks expertise.
  • Azure Data Services, Azure Data Factory, Azure Blob Storage + Azure SQL Database.
  • Designing, developing, building + optimising data pipelines, implementing data transformations, ensuring data quality and reliability.
  • Deep data warehousing knowledge, including data modelling techniques + data integration patterns.
  • Experience of working with complex data pipelines, large data sets, data pipeline optimization + data architecture design.
  • Implementing complex data transformations using Spark, PySpark or Scala + working with SQL / MySQL databases.
  • Experience with data quality, data governance processes, Git version control + Agile development environments.
  • Azure Data Engineer certification preferred, e.g. Azure Data Engineer Associate.
  • Advantageous skills: Azure Event Hubs, Kafka, data visualisation tools, Power BI, Tableau, Azure DevOps, Docker, Kubernetes, Jenkins.
17/11/2025
Contractor
Power BI Engineer
Head Resourcing
Power BI Report Engineer (Azure / Databricks)
Glasgow, 3-4 days onsite
Exclusive Opportunity with a Leading UK Consumer Brand

Are you a Power BI specialist who loves clean, governed data and high-performance semantic models? Do you want to work with a business that's rebuilding its entire BI estate the right way: proper Lakehouse architecture, curated Gold tables, PBIP, Git, and end-to-end governance? If so, this is one of the most modern, forward-thinking Power BI engineering roles in Scotland.

Our Glasgow-based client is transforming its reporting platform using Azure + Databricks, with Power BI sitting on top of a fully curated Gold Layer. They develop everything using PBIP + Git + Tabular Editor 3, and semantic modelling is treated as a first-class engineering discipline. This is your chance to own the creation of high-quality datasets and dashboards used across Operations, Finance, Sales, Logistics and Customer Care, turning trusted Lakehouse data into insights the business relies on every day.

Why This Role Exists
To turn clean, curated Gold Lakehouse data into trusted, enterprise-grade Power BI insights. You'll own semantic modelling, dataset optimisation, governance and best-practice delivery across a modern BI ecosystem.

What You'll Do

Semantic Modelling with PBIP + Git
  • Build and maintain enterprise PBIP datasets fully version-controlled in Git.
  • Use Tabular Editor 3 for DAX, metadata modelling, calc groups and object governance.
  • Manage branching, pull requests and releases via Azure DevOps.

Lakehouse-Aligned Reporting (Gold Layer Only)
  • Develop semantic models exclusively on top of curated Gold Databricks tables.
  • Work closely with Data Engineering on schema design and contract-first modelling.
  • Maintain consistent dimensional modelling aligned to the enterprise Bus Matrix.

High-Performance Power BI Engineering
  • Optimise performance: aggregations, composite models, incremental refresh, DirectQuery/Import strategy.
  • Tune Databricks SQL Warehouse queries for speed and cost efficiency.
  • Monitor PPU capacity performance, refresh reliability and dataset health.

Governance, Security & Standards
  • Implement RLS/OLS, naming conventions, KPI definitions and calc groups.
  • Apply dataset certification, endorsements and governance metadata.
  • Align semantic models with lineage and security policies across the Azure/Databricks estate.

Lifecycle, Release & Best Practice Delivery
  • Use Power BI Deployment Pipelines for Dev → UAT → Prod releases.
  • Enforce semantic CI/CD patterns with PBIP + Git + Tabular Editor.
  • Build reusable, certified datasets and dataflows enabling scalable self-service BI.

Adoption, UX & Collaboration
  • Design intuitive dashboards with consistent UX across multiple business functions.
  • Support BI adoption through training, documentation and best-practice guidance.
  • Use telemetry to track usage and performance and improve user experience.

What We're Looking For

Required Certifications
To meet BI engineering standards, candidates must hold:
  • PL-300: Power BI Data Analyst Associate
  • DP-600: Fabric Analytics Engineer Associate

Skills & Experience
  • 3-5+ years building enterprise Power BI datasets and dashboards.
  • Strong DAX and semantic modelling expertise (calc groups, conformed dimensions, role-playing dimensions).
  • Strong SQL skills; comfortable working with Databricks Gold-layer tables.
  • Proven ability to optimise dataset performance (aggregations, incremental refresh, DirectQuery/Import).
  • Experience working with Git-based modelling workflows and PR reviews via Tabular Editor.
  • Excellent design intuition: clean layouts, drill paths, and KPI logic.

Nice to Have
  • Python for automation or ad-hoc prep; PySpark familiarity.
  • Understanding of Lakehouse patterns, Delta Lake, metadata-driven pipelines.
  • Unity Catalog / Purview experience for lineage and governance.
  • RLS/OLS implementation experience.
17/11/2025
Full time
Lead Data Engineer
Pontoon Warwick, Warwickshire
Job Title: Lead Data Engineer
Location: Warwick (Hybrid - once per week onsite)
Remuneration: From £700 per day (via umbrella company)
Contract Details: Fixed Term Contract, 6 months, Full Time

Are you ready to take the lead in transforming data engineering? Our client, a forward-thinking organisation, is seeking a highly skilled and hands-on Lead Data Engineer to drive their Data & Insights platform built on the Azure stack. This is a Databricks-heavy role, ideal for someone who thrives in technical environments and enjoys working directly with cutting-edge tools.

Why This Role Stands Out
This is not just a leadership role: it's a hands-on engineering opportunity where your Databricks expertise will be front and centre. You'll be the go-to person for all things Databricks, from architecture and configuration to pipeline promotion and troubleshooting.

Key Responsibilities
  • Lead and deliver hands-on data engineering across all layers of the Azure-based data platform.
  • Act as the Databricks SME (Subject Matter Expert), overseeing architecture, configuration, and documentation.
  • Guide and support the engineering team while contributing directly to development efforts.
  • Manage DevOps practices including branch supervision and merging.
  • Promote data pipelines and notebooks through development, testing, and production environments.
  • Enhance monitoring and control frameworks to ensure platform reliability.
  • Provide technical leadership and mentorship to a small team of Data Engineers.

Essential Skills
  • Extensive hands-on experience with Databricks - this is the core of the role.
  • Strong background in Synapse and Azure DevOps.
  • Proficiency in SQL and PySpark within a Databricks environment.
  • Proven experience leading small engineering teams.
  • Skilled in configuration management and technical documentation.

If you're a Databricks expert looking for a role that blends leadership with deep technical involvement, this is your chance to make a real impact. Join a dynamic team and help shape the future of data engineering. Ready to elevate your career? Apply now and be a vital part of this exciting transformation!

Pontoon is an employment consultancy. We put expertise, energy, and enthusiasm into improving everyone's chance of being part of the workplace. We respect and appreciate people of all ethnicities, generations, religious beliefs, sexual orientations, gender identities, and more. We do this by showcasing their talents, skills, and unique experience in an inclusive environment that helps them thrive. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you.
07/10/2025
Contractor
La Fosse Associates Limited
ML Ops Engineer (Contract) - Financial Services - Outside IR35
La Fosse Associates Limited
ML Ops Lead (Engineer) required for a 6-month contract with a global financial services firm in London. Join a high-impact team building out scalable AI/ML solutions on an Azure and Databricks platform supporting various areas of the organisation. You will put a solid structure in place for GenAI deployment and embed best practices within the team, leading on ML Ops frameworks & CI/CD pipeline design (a short tracking sketch follows this listing).

  • Build & scale AI/ML tooling across quants, risk, and trading desks
  • Drive adoption of GenAI & LLMs with Azure OpenAI, Unity Catalog & more
  • Partner with quant teams and central AI functions

Required:
  • 2+ years ML Ops experience + 3+ years AI/ML engineering
  • Excellent knowledge of ML frameworks and associated libraries in Python
  • Expertise in Azure Databricks, Python, Azure DevOps, and the ML lifecycle
  • Financial data/trading/derivatives domain experience is a plus

2-stage interview process. Location: Hybrid - Central London office & remote (flexible). 6 months duration (possible extension).
03/10/2025
Contractor
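The ad does not name a tracking tool, but MLflow ships with Databricks and is the usual starting point for the ML Ops frameworks it mentions. A minimal sketch using a scikit-learn toy model; the experiment path and metric name are hypothetical:

    # Log a toy model run to MLflow, the tracking layer bundled with Databricks.
    import mlflow
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_experiment("/Shared/demo-risk-model")  # hypothetical workspace path

    with mlflow.start_run():
        model = LogisticRegression(max_iter=200).fit(X_train, y_train)
        mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
        mlflow.sklearn.log_model(model, "model")  # persist the fitted model artifact

Runs logged this way can then be promoted through the model registry in Unity Catalog as part of the CI/CD pipeline design the role leads on.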
Tenth Revolution Group
Lead Azure Data Engineer
Tenth Revolution Group
Contract Opportunity: Lead Azure Data Engineer (£500/day, Outside IR35)

We're hiring for a Lead Azure Data Engineer to join our team on a hybrid contract based in London, supporting key finance stakeholders and transforming our data platform. This role offers the chance to shape the future of financial reporting through cutting-edge cloud engineering and data architecture.

Location: Central London (UK-based candidates only)
Rate: £500/day
IR35 Status: Outside IR35
Start Date: ASAP - interviews next week
Duration: 6 months with potential for extension

The Role
You'll lead the design and delivery of scalable data products that support financial analysis and decision-making. Working closely with BI and Analytics teams, you'll help evolve our data warehouse and implement best-in-class engineering practices across Azure.

Key Responsibilities
  • Build and enhance ETL/ELT pipelines in Azure Databricks
  • Develop facts and dimensions for financial reporting (see the sketch after this listing)
  • Collaborate with cross-functional teams to deliver robust data solutions
  • Optimise data workflows for performance and cost-efficiency
  • Implement governance and security using Unity Catalog
  • Drive automation and CI/CD practices across the data platform
  • Explore new technologies to improve data ingestion and self-service

Essential Skills
  • Azure Databricks: expert in Spark (SQL, PySpark) and Databricks Workflows
  • Data Pipeline Design: proven experience in scalable ETL/ELT development
  • Azure Services: Data Lake, Blob Storage, Synapse
  • Data Governance: Unity Catalog, access control, metadata management
  • Performance Tuning: partitioning, caching, Spark job optimisation
  • Cloud Architecture: infrastructure as code, monitoring, automation
  • Finance Domain Knowledge: experience with financial systems and reporting
  • Data Modelling: Kimball methodology, star schemas
  • Retail Experience: preferred but not essential

About the Team
We're a collaborative data function made up of BI Developers and Data Engineers. We work end-to-end on solutions, share knowledge, and support each other's growth. Our culture values curiosity, innovation, and continuous learning.

Interested?
We have limited interview slots next week and aim to fill this role by the end of the month. Please send me a copy of your CV if you meet the requirements.
02/10/2025
Contractor
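To make "facts and dimensions" concrete: a hedged PySpark sketch of a Kimball-style star-schema load, with hypothetical table and column names. A hash surrogate key is shown as one simple option; Delta identity columns are another common choice:

    # Build a dimension and a conformed fact table in the gold layer.
    # All table and column names are hypothetical examples.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    txns = spark.table("silver.transactions")
    accounts = spark.table("silver.accounts")

    # Dimension with a hash-based surrogate key over the natural key.
    dim_account = (
        accounts.select("account_id", "account_name", "cost_centre")
                .withColumn("account_key", F.xxhash64("account_id"))
    )

    # Fact table referencing the dimension via its surrogate key.
    fact_txn = (
        txns.join(dim_account.select("account_id", "account_key"), "account_id")
            .select("account_key", "txn_date", "amount", "currency")
    )

    dim_account.write.format("delta").mode("overwrite").saveAsTable("gold.dim_account")
    fact_txn.write.format("delta").mode("overwrite").saveAsTable("gold.fact_transactions")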
Tenth Revolution Group
Databricks Engineer
Tenth Revolution Group City, London
Platform Engineer - Data & AI Infrastructure
Location: Remote/UK-based
Type: Full-time

You must have strong Databricks and Unity Catalog experience.

Skills/Approaches:
  • Proven experience with Terraform for Azure infrastructure as code.
  • Strong knowledge of GitHub Actions and general CI/CD principles.
  • Hands-on experience with Azure Private Link and Private Link Service.
  • Expertise in Databricks and Unity Catalog for data governance (see the sketch after this listing).
  • Solid understanding of Azure Policy and compliance enforcement.
  • Strong background in identity and access management, including OAuth and federated credentials.
  • Deep understanding of Azure security best practices, including Defense in Depth, BCDR, and high availability.

If you meet these requirements, please send a copy of your CV to (see below).
02/10/2025
Contractor
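Unity Catalog governance of the kind listed above is driven by standard SQL grants, which can be scripted from Python. A minimal sketch with hypothetical catalog, schema, and group names (in production these statements would typically be managed declaratively via Terraform, per the ad):

    # Create governed objects and grant least-privilege access in Unity Catalog.
    # Catalog, schema, and group names are hypothetical examples.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    spark.sql("CREATE CATALOG IF NOT EXISTS finance")
    spark.sql("CREATE SCHEMA IF NOT EXISTS finance.reporting")

    # Engineers can traverse the catalog; analysts can browse and read the schema.
    spark.sql("GRANT USE CATALOG ON CATALOG finance TO `data-engineers`")
    spark.sql("GRANT USE SCHEMA, SELECT ON SCHEMA finance.reporting TO `analysts`")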
HUNTER SELECTION
Data Governance Manager
HUNTER SELECTION Bristol, Gloucestershire
Data Governance Manager - Bristol (hybrid) - £65k-£75k

We're looking for a Data Governance Manager to play a pivotal role in shaping how a large, regulated organisation manages and protects its data. You'll be responsible for embedding data governance across the business, ensuring data is accurate, secure, and used ethically. This is an exciting opportunity to influence data culture, drive adoption of tools like Azure Purview and Unity Catalog, and make a measurable impact on compliance, risk management, and operational efficiency.

As the successful Data Governance Manager, you'll be:
  • Building and championing a clear data governance framework that improves the way information is owned, managed and trusted across the organisation.
  • Working closely with stakeholders to agree policies and standards that balance compliance with practical day-to-day needs.
  • Providing advice and guidance to data owners, ensuring strong stewardship and accountability throughout the business.
  • Creating tools and approaches (catalogues, glossaries, lineage maps) that make it easier for teams to understand and work with their data.
  • Monitoring compliance, highlighting risks, and supporting audit processes with robust evidence and reporting.
  • Playing a key role in enabling the company's wider data and digital strategy.

What we're looking for
  • Experience in data governance, information management or a related discipline.
  • Strong understanding of data protection, compliance and regulatory requirements, such as GDPR, FCA/PRA, or other regulatory frameworks.
  • Excellent stakeholder management skills, with the ability to engage both technical and non-technical audiences.
  • Professional certifications (e.g. DAMA CDMP, DCAM) are desirable.
  • Hands-on knowledge of metadata/cataloguing tools (e.g. Azure Purview, Databricks).

Benefits for the Data Governance Manager
  • Hybrid working
  • 25 days holiday + bank holidays, increasing with service
  • Discretionary annual bonus
  • Enhanced pension scheme
  • Healthcare cash plan
  • Private health insurance
  • EV salary sacrifice scheme
  • Cycle to work
  • Discount scheme
  • Enhanced maternity and paternity leave
  • Life assurance - 4x salary
  • Professional development
  • And much more!

If you are interested in this position please click 'apply'.

Hunter Selection Limited is a recruitment consultancy with offices UK wide, specialising in permanent & contract roles within Engineering & Manufacturing, IT & Digital, Science & Technology and Service & Sales sectors. Please note as we receive a high level of applications we can only respond to applicants whose skills & qualifications are suitable for this position. No terminology in this advert is intended to discriminate against any of the protected characteristics that fall under the Equality Act 2010. For the purposes of the Conduct Regulations 2003, when advertising permanent vacancies we are acting as an Employment Agency, and when advertising temporary/contract vacancies we are acting as an Employment Business.
02/10/2025
Full time
Guidant Global
IT Data and Analytics Senior Development Operations Engineer
Guidant Global Reading, Oxfordshire
Base Location: Reading / Havant / Perth
Salary: £600 per day
Working Pattern: 40 hours per week / Full time

Embark on a transformative career journey with SSE energy company, where innovation meets impact in the heart of the IT sector. As a pivotal player in our forward-thinking team, you'll harness cutting-edge technology to drive change and propel the UK towards its ambitious net-zero targets. Your expertise will not only shape the future of energy but also carve a sustainable world for generations to come. Join us and be at the forefront of the green revolution, where every line of code contributes to a cleaner, brighter future.

Key Responsibilities:
  • Provide technical leadership and oversight to the group Data & Analytics platform team.
  • Ensure the reliability, security and scalability of analytics platform services.
  • Deliver full automation of the deployment of Data & Analytics platform services via infrastructure as code.
  • Help to set development standards, configure operational support processes and provide technical assurance.
  • Support Data & Analytics platform users and internal development teams interacting with the Data & Analytics platform services.

What do you need?
  • Extensive experience of deploying Azure (and ideally AWS) cloud resources, and full conversance with agile and DevOps development methodology.
  • Extensive experience in using Terraform to deploy cloud resources as infrastructure as code.
  • Excellent understanding of CI/CD principles and experience with related tools (e.g. Azure DevOps, GitHub Actions).
  • Strong knowledge of scripting languages such as PowerShell, Python and Azure CLI, and proven experience with automation runbooks, VM maintenance scripts and SQL (see the sketch after this listing).
  • Strong understanding of cloud access control and governance, such as RBAC and IAM.
  • Strong knowledge of Azure cloud networking, such as private endpoints, firewalls, NSGs, NAT gateways and route tables.
  • Good knowledge of Microsoft Entra ID, including managing app registrations, enterprise apps, AD groups, managed identities and Privileged Identity Management.
  • Proven experience in IaaS such as virtual machines - both Windows and Linux. Familiarity with server patching and maintenance.
  • Strong understanding of security best practices within Azure and ideally AWS.
  • Experience of configuring cloud data services (preferably Databricks) in Azure and ideally AWS.
  • Excellent communication and collaboration skills, with the ability to work across multiple technical and non-technical teams.

What happens now?
After submitting your application for the Data and Analytics Senior Development Operations Engineer role, we understand you're eager to hear back. We value your time and interest, and if your application is successful, you will be contacted directly by the team within 2 working days. We appreciate your patience and look forward to the possibility of welcoming you aboard.
01/10/2025
Contractor
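As a hedged illustration of the "automation runbooks, VM maintenance scripts" requirement, here is a small Python sketch using the Azure SDK (azure-identity and azure-mgmt-compute) rather than Terraform, since scripted maintenance is imperative by nature. The tag name and subscription placeholder are assumptions, not SSE specifics:

    # Deallocate any VM tagged for overnight shutdown - a typical maintenance runbook.
    # The "auto-shutdown" tag and the subscription placeholder are hypothetical.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    credential = DefaultAzureCredential()
    client = ComputeManagementClient(credential, "<subscription-id>")

    for vm in client.virtual_machines.list_all():
        if (vm.tags or {}).get("auto-shutdown") == "true":
            group = vm.id.split("/")[4]  # resource group segment of the ARM resource ID
            print(f"Deallocating {vm.name} in {group}")
            client.virtual_machines.begin_deallocate(group, vm.name).result()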
Experis IT
Databricks Engineer
Experis IT
Databricks Engineer
London - hybrid - 3 days per week on-site
6 months+ - Umbrella only - Inside IR35

Key Responsibilities
  • Design, develop, and maintain ETL/ELT pipelines using Airflow for orchestration and scheduling (see the sketch after this listing).
  • Build and manage data transformation workflows in dbt running on Databricks.
  • Optimise data models in Delta Lake for performance, scalability, and cost efficiency.
  • Collaborate with analytics, BI, and data science teams to deliver clean, reliable datasets.
  • Implement data quality checks (dbt tests, monitoring) and ensure governance standards.
  • Manage and monitor Databricks clusters & SQL Warehouses to support workloads.
  • Contribute to CI/CD practices for data pipelines (version control, testing, deployments).
  • Troubleshoot pipeline failures, performance bottlenecks, and scaling challenges.
  • Document workflows, transformations, and data models for knowledge sharing.

Required Skills & Qualifications
  • 3-6 years of experience as a Data Engineer (or similar).
  • Hands-on expertise with:
      - dbt (dbt-core, dbt-databricks adapter, testing & documentation).
      - Apache Airflow (DAG design, operators, scheduling, dependencies).
      - Databricks (Spark, SQL, Delta Lake, job clusters, SQL Warehouses).
  • Strong SQL skills and understanding of data modelling (Kimball, Data Vault, or similar).
  • Proficiency in Python for scripting and pipeline development.
  • Experience with CI/CD tools (e.g. GitHub Actions, GitLab CI, Azure DevOps).
  • Familiarity with cloud platforms (AWS, Azure, or GCP).
  • Strong problem-solving skills and ability to work in cross-functional teams.

All profiles will be reviewed against the required skills and experience. Due to the high number of applications we will only be able to respond to successful applicants in the first instance. We thank you for your interest and the time taken to apply!
01/10/2025
Contractor
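The Airflow-orchestrates-dbt-on-Databricks pattern this ad describes is typically a two-task DAG: build, then test. A minimal sketch; the project directory and schedule are hypothetical, and in practice the dbt-databricks adapter's profile supplies the Databricks connection:

    # Daily DAG: run dbt models, then their tests, against Databricks.
    # dag_id, schedule, and the project directory are hypothetical examples.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="dbt_databricks_daily",
        start_date=datetime(2025, 1, 1),
        schedule="0 6 * * *",  # daily at 06:00
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --project-dir /opt/dbt/analytics",
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="dbt test --project-dir /opt/dbt/analytics",
        )
        dbt_run >> dbt_test  # tests only run after a successful build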
