Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

28 jobs found

Current search: azure databricks engineer contract
Tenth Revolution Group
Databricks Data Engineer
Tenth Revolution Group Newcastle Upon Tyne, Tyne And Wear
Contract Data Engineer - Databricks & Azure (SC Cleared)
Location: Newcastle (2-3 days onsite per week)
Rate: £450 per day (Outside IR35)
Duration: 6 months
Clearance: Active SC clearance is mandatory
Expenses: Not reimbursed - candidates must be based in or near Newcastle

About the Role
We are seeking an experienced Data Engineer with strong expertise in Databricks and Azure to join a high-profile Public Sector project. This is a 6-month contract, outside IR35, requiring Security Clearance (SC) and regular onsite presence in Newcastle.

Key Responsibilities
  • Design, develop, and maintain scalable data pipelines using Databricks on Azure.
  • Build and optimize data solutions leveraging Azure Data Services (e.g., Data Lake, Synapse).
  • Work with large datasets to enable advanced analytics and reporting.
  • Collaborate with stakeholders to deliver high-quality data solutions aligned with project objectives.
  • Ensure compliance with public sector security and governance standards.

Essential Skills
  • Proven experience in Data Engineering within complex environments.
  • Strong hands-on expertise with Databricks and Azure (Data Lake, Synapse, etc.).
  • Solid understanding of cloud-based data platforms and ETL processes.
  • Active SC Clearance (must be in place before starting).
  • Ability to work onsite in Newcastle 2-3 days per week.

Nice to Have
  • Experience in Public Sector projects.
  • Knowledge of data governance and security frameworks.

Interested? Apply now with your CV and availability. Please note: candidates without active SC clearance or unable to commute to Newcastle will not be considered. To discuss this role further, please submit your CV or contact Brandon Forbes.

Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
10/12/2025
Contractor
Tenth Revolution Group
Azure Data Engineer - £500 - Hybrid
Tenth Revolution Group Newcastle Upon Tyne, Tyne And Wear
Azure Data Engineer - £500 per day - Hybrid

We are seeking an Azure Data Engineer with strong experience in Databricks to design, build, and optimize scalable data pipelines and analytics solutions on the Azure cloud platform. The ideal candidate will have hands-on expertise across Azure data services, data modeling, ETL/ELT development, and collaborative engineering practices.

Key Responsibilities
  • Design, develop, and maintain scalable data pipelines using Azure Databricks (Python, PySpark, SQL).
  • Build and optimize ETL/ELT workflows that ingest data from various on-prem and cloud-based sources.
  • Work with Azure services including Azure Data Lake Storage, Azure Data Factory, Azure Synapse Analytics, Azure SQL, and Event Hub.
  • Implement data quality validation, monitoring, metadata management, and governance processes.
  • Collaborate closely with data architects, analysts, and business stakeholders to understand data requirements.
  • Optimize Databricks clusters, jobs, and runtimes for performance and cost efficiency.
  • Develop CI/CD workflows for data pipelines using tools such as Azure DevOps or GitHub Actions.
  • Ensure security best practices for data access, data masking, and role-based access control.
  • Produce technical documentation and contribute to data engineering standards and best practices.

Required Skills and Experience
  • Proven experience as a Data Engineer working with Azure cloud services.
  • Strong proficiency in Databricks, including PySpark, Spark SQL, notebooks, Delta Lake, and job orchestration.
  • Strong SQL and data modeling skills (e.g., dimensional modeling, data vault).
  • Experience with Azure Data Factory or other orchestration tools.
  • Understanding of data lakehouse architecture and distributed computing principles.
  • Experience with CI/CD pipelines and version control (Git).
  • Knowledge of REST APIs, JSON, and event-driven data processing.
  • Solid understanding of data governance, data lineage, and security controls.
  • Ability to solve complex technical problems and communicate solutions clearly.

Preferred Qualifications
  • Industry certifications (e.g., Databricks Data Engineer Associate/Professional, Azure Data Engineer Associate).
  • Experience with Azure Synapse SQL or serverless SQL pools.
  • Familiarity with streaming technologies (e.g., Spark Structured Streaming, Kafka, Event Hub).
  • Experience with infrastructure-as-code (Terraform or Bicep).
  • Background in BI or analytics engineering (Power BI, dbt) is a plus.

To apply for this role, please submit your CV or contact Dillon Blackburn on (phone number removed) or at (url removed).

Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
10/12/2025
Contractor
TXP
Head of Business Intelligence (Azure Data Lake, Fabric)
TXP City, Birmingham
Role: BI Manager
Rate: £500.00 per day - Inside IR35
Location: Central Birmingham, West Midlands (hybrid working - 2 days per week onsite)
Duration: Initial 3-6 months with potential to go permanent

We are currently working with a leading services provider who require a technically strong, Midlands-based Senior BI Manager with a good understanding of Azure Data and Data Engineering tools. Working as a key member of a newly formed Data Engineering team, the successful candidate will lead the design, development, and ongoing enhancement of the client's data and reporting infrastructure. You will be the strategic owner of the Azure Data Platform, overseeing services such as Azure Data Lake, Data Warehouse, Data Factory, Databricks, and Power BI. The technical focus is all Microsoft, primarily Azure, so any Fabric experience would be very beneficial.

Our client is looking for someone who is going to lead the function and has previous experience doing so: someone who really understands data and what it can be used for, who can challenge the business on what it needs from the data, and challenge the teams to produce the most effective data outputs for the business need, so that the function can improve and become first-class. You will need to be able to drive the direction of how data works for the organisation and the overall Data/BI strategy, design solutions that fit, and demonstrate what value data can bring to the company if it is used effectively. A technical background is essential to be able to understand and bridge the gap between the Data Team and the business environment so that the two collaborate effectively and are challenged both ways. You should appreciate both the technical side and the business strategy side.

Skills & experience required:
  • Experience leading a BI function
  • Expertise in Azure BI architecture and Cloud services
  • Hands-on experience with Microsoft Fabric, SQL warehousing, Data Lakes, Databricks
  • Track record in MI/BI product development using Agile and Waterfall methods
  • Experience managing cross-functional teams and sprint activities
  • Experience in leading a BI team and a business through the development and transition to a Data Lake / Factory / Warehouse
  • Technical BI development/architect background

If your profile demonstrates strong and recent experience in the above areas, please submit your application ASAP to Jackie Dean at TXP for consideration.

TXP takes great pride in representing socially responsible clients who not only prioritise diversity and inclusion but also actively combat social inequality. Together, we have the power to make a profound impact on fostering a more equitable and inclusive society. By working with us, you become part of a movement dedicated to promoting a diverse and inclusive workforce.
09/12/2025
Contractor
Tenth Revolution Group
Databricks Data Engineer
Tenth Revolution Group
Data Engineer (Databricks & Azure) - 3-Month Rolling Contract
Rate: £400-£450 per day
Location: Remote
IR35 Status: Outside IR35
Duration: Initial 3 months (rolling)

About the Company
Join a leading Databricks Partner delivering innovative data solutions for enterprise clients. You'll work on cutting-edge projects leveraging Databricks and Azure to transform data into actionable insights.

About the Role
We are seeking an experienced Data Engineer with strong expertise in Databricks and Azure to join our team on a 3-month rolling contract. This is a fully remote position, offering flexibility and autonomy while working on high-impact data engineering initiatives.

Key Responsibilities
  • Design, develop, and optimize data pipelines using Databricks and Azure Data Services.
  • Implement best practices for data ingestion, transformation, and storage.
  • Collaborate with stakeholders to ensure data solutions meet business requirements.
  • Monitor and troubleshoot data workflows for performance and reliability.

Essential Skills
  • Proven experience with Databricks (including Spark-based data processing).
  • Strong knowledge of the Azure Data Platform (Data Lake, Synapse, etc.).
  • Proficiency in Python and SQL for data engineering tasks.
  • Understanding of data architecture and ETL processes.
  • Ability to work independently in a remote environment.

Nice-to-Have
  • Experience with CI/CD pipelines for data solutions.
  • Familiarity with Delta Lake and ML pipelines.

Start Date: ASAP
Contract Type: Outside IR35
Apply Now: If you're a skilled Data Engineer looking for a flexible, remote opportunity with a Databricks Partner, we'd love to hear from you!
03/12/2025
Contractor
Qualient Technology Solutions UK Limited
Data Engineer
Qualient Technology Solutions UK Limited
Role Title: Data Engineer
Role Type: Contract (Inside IR35)
Role Location: London, UK
Mandatory: Active Security Clearance

Essential Skills & Experience
  • 10+ years of experience in data engineering, with at least 3 years of hands-on experience with Azure Databricks.
  • Good proficiency in Python and Spark (PySpark) or Scala.
  • Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns.
  • Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and Azure SQL Database.
  • Experience working with large datasets and complex data pipelines.
  • Experience with data architecture design and data pipeline optimization.
  • Proven expertise with Databricks, including hands-on implementation experience and certifications.
  • Experience with SQL and NoSQL databases.
  • Experience with data quality and data governance processes.
  • Experience with version control systems (e.g., Git).
  • Experience with Agile development methodologies.
29/11/2025
Contractor
Experis
Data Engineer
Experis City, Swindon
Data Engineer
Duration: 6 months initially
Location: Remote working, with visits to the Swindon office once per week
IR35 Status: Inside IR35 - Umbrella only

Role Overview
We are seeking an experienced Data Engineer to join our team. The ideal candidate will bring 5-7 years of experience in data engineering, with a strong focus on Azure Databricks.

Key Skills & Experience
  • Proven expertise in Azure Databricks (must-have)
  • Hands-on experience with dbt (Data Build Tool)
  • Strong knowledge of Snowflake
  • Solid background in designing, building, and optimizing data pipelines
  • Ability to collaborate effectively in hybrid working arrangements

All profiles will be reviewed against the required skills and experience. Due to the high number of applications, we will only be able to respond to successful applicants in the first instance. We thank you for your interest and the time taken to apply!
27/11/2025
Contractor
Hays Technology
Python Data Engineer
Hays Technology
Data Engineer - Security Clearance (SC), Python, Azure, BDD
Rate: Up to £475 per day (Inside IR35)
Location: Remote / London
Duration: 6 months

My client is an international consultancy who require a Security Cleared Data Engineer, with active Security Clearance (SC) and strong Python skills, to design and deploy scalable data solutions in a containerized Azure environment.

Key requirements:
  • Proven experience as a Data Engineer with active Security Clearance (SC)
  • Strong Python skills with modular, test-driven design
  • Experience with Behave for unit and BDD testing (mocking, patching)
  • Proficiency in PySpark and distributed data processing
  • Solid understanding of Delta Lake (design and maintenance)
  • Hands-on with Docker for development and deployment
  • Familiarity with Azure services: Functions, Key Vault, Blob Storage
  • Ability to build configurable, parameter-driven applications
  • Exposure to CI/CD pipelines (ideally Azure DevOps) and cloud security best practices
  • Strong collaboration and communication skills

Nice to have:
  • Immediate availability
  • Experience with Databricks or Synapse
  • Knowledge of data governance in Azure ecosystems
  • Infrastructure as Code (IaC) tooling

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found at (url removed)
27/11/2025
Contractor
Tenth Revolution Group
Azure Data Engineer - £400PD - Remote
Tenth Revolution Group City, London
Azure Data Engineer - £400 per day - Remote

Seeking an experienced Data Engineer to design, build, and optimise data solutions within the Microsoft Azure ecosystem. The role focuses on pipeline development, data modelling, governance, and supporting analytics teams with high-quality, reliable data.

Key Responsibilities:
  • Develop and maintain scalable data pipelines using Azure Data Factory, Synapse, Databricks, and Microsoft Fabric.
  • Build efficient ETL/ELT processes and data models to support analytics, reporting, and dashboards.
  • Optimise existing pipelines for performance, reliability, and cost efficiency.
  • Implement best practices for data quality, error handling, automation, and monitoring.
  • Manage data security and governance, including Row-Level Security (RLS) and compliance standards.
  • Maintain documentation and metadata for data assets and pipeline architecture.
  • Collaborate with analysts, data scientists, and stakeholders to deliver trusted data products.
  • Provide technical support and troubleshoot production issues.
  • Contribute to improving data engineering processes and the development lifecycle.

Required Skills & Experience:
  • Proven experience as a Data Engineer working with Azure.
  • Strong skills in Azure Data Factory, Synapse Analytics, Databricks, SQL Database, and Azure Storage.
  • Excellent SQL and data modelling (star/snowflake, dimensional modelling).
  • Knowledge of Power BI dataflows, DAX, and RLS.
  • Experience with Python, PySpark, or T-SQL for transformations.
  • Understanding of CI/CD and DevOps (Git, YAML pipelines).
  • Strong grasp of data governance, security, and performance tuning.

To apply for this role, please submit your CV or contact Dillon Blackburn on (phone number removed) or at (url removed).

Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
27/11/2025
Contractor
Hays Technology
Data Engineer (Databricks)
Hays Technology City, London
About the RoleWe are seeking a Databricks Data Engineer with strong expertise in designing and optimising large-scale data engineering solutions within the Databricks Data Intelligence Platform. This role is ideal for someone passionate about building high-performance data pipelines and ensuring robust data governance across modern cloud environments. Key Responsibilities Design, build, and maintain scalable data pipelines using Databricks Notebooks, Jobs, and Workflows for both batch and streaming data. Optimise Spark and Delta Lake performance through efficient cluster configuration, adaptive query execution, and caching strategies. Conduct performance testing and cluster tuning to ensure cost-efficient, high-performing workloads. Implement data quality, lineage tracking, and access control policies aligned with Databricks Unity Catalogue and governance best practices. Develop PySpark applications for ETL, data transformation, and analytics, following modular and reusable design principles. Create and manage Delta Lake tables with ACID compliance, schema evolution, and time travel for versioned data management. Integrate Databricks solutions with Azure services such as Azure Data Lake Storage, Key Vault, and Azure Functions. What We're Looking For Proven experience with Databricks, PySpark, and Delta Lake. Strong understanding of workflow orchestration, performance optimisation, and data governance. Hands-on experience with Azure cloud services. Ability to work in a fast-paced environment and deliver high-quality solutions. SC Cleared candidates If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. 
By applying for this job you accept the T&C's, Privacy Policy and Disclaimers which can be found at (url removed)
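Several of the listings on this page ask for Delta Lake tables with ACID compliance, schema evolution, and time travel. As a hedged illustration only, the versioning idea behind "time travel" can be sketched in pure Python; the `ToyVersionedTable` class below is an invented name, and real Delta tables store transaction logs rather than full snapshots and are accessed through Spark/Databricks APIs.

```python
# Toy sketch of Delta Lake-style versioning ("time travel").
# NOTE: illustrative only - not the Databricks Delta Lake API.

class ToyVersionedTable:
    def __init__(self):
        self._versions = []  # one full snapshot per commit

    def commit(self, rows):
        # An all-or-nothing write: the new snapshot is appended atomically.
        self._versions.append([dict(r) for r in rows])

    def latest(self):
        return self._versions[-1]

    def as_of(self, version):
        # "Time travel": read the table as it was at an earlier commit.
        return self._versions[version]

t = ToyVersionedTable()
t.commit([{"id": 1, "amount": 10}])
# Second commit adds a row with a new "region" column (schema evolution).
t.commit([{"id": 1, "amount": 10}, {"id": 2, "amount": 25, "region": "UK"}])

assert t.as_of(0) == [{"id": 1, "amount": 10}]
assert len(t.latest()) == 2
```

In real Delta Lake the equivalent read is a `VERSION AS OF` query; the sketch only mirrors the concept of reading an earlier committed state.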
27/11/2025
Contractor
Tenth Revolution Group
Data Engineering Team Lead - Remote - Databricks - Azure - £80k
Tenth Revolution Group City, London
Data Engineering Team Lead - Remote - Databricks - Azure - £80k

Join a leading Microsoft consultancy driving data innovation. I'm working with a well-established Microsoft Partner with an incredible project pipeline, rapid growth, and a reputation for delivering Tech for Good. They're working on cutting-edge projects using emerging technologies like Microsoft Fabric and Azure Databricks. We're looking for a Data Engineering Team Lead who combines hands-on technical expertise with the leadership skills to mentor a talented team and deliver exceptional solutions for clients.

What You'll Do
- Lead and mentor a team of Technical Consultants, driving engagement, growth, and alignment with the company culture.
- Oversee resource planning, scheduling, and performance management.
- Collaborate with Pre-sales, Commercial, and Project Management teams to scope and deliver projects.
- Ensure consistent delivery of technical solutions aligned with best practices and standards.
- Support technical delivery when needed, including designing scalable data solutions in Microsoft/Azure environments.
- Contribute to innovation through cloud migrations, data lakes, and robust ETL/ELT solutions.

What We're Looking For
- Hands-on data engineering experience (not Data Analyst or Data Scientist roles).
- Strong background in Azure Synapse, Databricks, or Microsoft Fabric.
- Expertise in ETL/ELT development using SQL and Python.
- Experience implementing data lakes and medallion lakehouse architecture.
- Skilled in managing large datasets and writing advanced SQL/Python queries.
- Solid understanding of BI and data warehousing concepts.
- Excellent communication skills and the ability to build strong relationships.
- Ideally, experience in consulting environments and working within high-performing teams.

The Company
- Rapid growth and exciting projects: continuing to grow while working on cutting-edge Microsoft Cloud solutions.
- Investment in you: training and development, with clear certification pathways.
- Home-based contract: work remotely with all travel expenses covered.
- UK-based delivery: no offshoring; all consultants are UK-based.
- Competitive salary and benefits, including 25 days' holiday, private health insurance (after one year), life assurance (4x base salary), enhanced parental pay, Perkbox, Cyclescheme, and the Electric Car Scheme.

Ready to lead and innovate? Apply now or send your CV directly to me: (url removed)
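The "medallion lakehouse architecture" the ad above mentions layers raw (bronze) data into cleaned (silver) and aggregated (gold) tables. The following is a minimal pure-Python sketch of that flow with invented sample data; in practice each layer would be a set of Delta tables populated by Spark jobs.

```python
# Pure-Python sketch of the medallion (bronze -> silver -> gold) flow.
# Invented sample data; illustrative only.

bronze = [  # raw ingested rows: string-typed, untrimmed, with a duplicate
    {"id": "1", "amount": " 10 "},
    {"id": "2", "amount": "25"},
    {"id": "2", "amount": "25"},
]

def to_silver(rows):
    """Clean types and drop duplicate ids (the 'silver' layer)."""
    out, seen = [], set()
    for r in rows:
        if r["id"] in seen:
            continue
        seen.add(r["id"])
        out.append({"id": int(r["id"]), "amount": int(r["amount"].strip())})
    return out

def to_gold(rows):
    """Business-level aggregate (the 'gold' layer)."""
    return {"total_amount": sum(r["amount"] for r in rows)}

silver = to_silver(bronze)
gold = to_gold(silver)
assert gold == {"total_amount": 35}
```

The design point the architecture encodes is that each layer only reads from the one before it, so cleaning and business logic stay separated and testable.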
27/11/2025
Full time
Hays Technology
Databricks Data Engineer
Hays Technology
Data Engineer - Active SC, Databricks, PySpark
Up to £475 per day (Inside IR35) | Remote / London | 6 months

My client is an international consultancy recruiting for a Data Engineer with active Security Clearance (SC) and strong Databricks and Azure experience to deliver and optimise data engineering solutions.

Key requirements:
- Proven experience as a Data Engineer with active Security Clearance (SC)
- Strong experience with Databricks, PySpark, and Delta Lake
- Expertise in Jobs & Workflows, cluster tuning, and performance optimisation
- Solid understanding of data governance (Unity Catalog, lineage, access policies)
- Hands-on with Azure services: Data Lake Storage (Gen2), Key Vault, Azure Functions
- Familiarity with CI/CD for Databricks deployments
- Strong troubleshooting in distributed data environments
- Excellent communication and collaboration skills

Nice to have:
- Immediate availability
- Experience with enterprise-scale Databricks environments
- Knowledge of Azure Synapse, Power BI, and cost optimisation strategies

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this job isn't quite right for you but you are looking for a new position, please contact us for a confidential discussion about your career. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found at (url removed)
27/11/2025
Contractor
TXP
AWS Data Engineer
TXP
Role: Data Engineer (AWS)
Contract: £450-£500 per day (Inside IR35)
Location: Remote working with occasional travel to site
Duration: End of March 2026 (expected to extend in the new financial year)

We are currently recruiting for a Data Engineer to work on a project within the public sector. The role is AWS-focused and requires a Data Engineer who can come in and make an impact on the project.

Skills and experience required:
- Experience of back-end/data engineering across a number of languages (including Python) and commonly used IDEs
- Experience developing, scheduling, maintaining, and resolving issues with batch or micro-batch jobs on AWS or Azure ETL services
- Experience querying data stored on AWS S3 or Azure ADLS Gen2, or through a lakehouse capability
- Experience managing API-level and database connectivity
- Experience using source control and DevOps tooling such as GitLab
- Experience using Terraform (or similar cloud-native products) to build new data & analytics platform capabilities
- Experience developing data features and associated transformation procedures on a modern data platform, for example (but not limited to) Microsoft Fabric, AWS Lake Formation, Databricks, or Snowflake
- Experience automating operations tasks with one or more scripting languages

Due to the nature of the project and the short turnaround required, the successful candidate must hold valid and live SC Clearance. If you are interested in the role and would like to apply, please click on the link for immediate consideration.
25/11/2025
Contractor
Randstad Technologies Recruitment
Data Architect (Insurance Domain)
Randstad Technologies Recruitment
Job Title: Data Architect (Insurance Domain)
Location: London, UK (Hybrid role)
Experience Level: 15 years, with at least 5 years in the Azure ecosystem

Role: We are seeking a seasoned Data Architect to lead the design and implementation of scalable data solutions for a strategic insurance project. The ideal candidate will have deep expertise in Azure cloud services, Azure Data Factory, and Databricks, with a strong understanding of data modelling, data integration, and analytics in the insurance domain.

Key Responsibilities:
- Architect and design end-to-end data solutions on Azure for insurance-related data workflows
- Lead data ingestion, transformation, and orchestration using ADF and Databricks
- Collaborate with business stakeholders to understand data requirements and translate them into technical solutions
- Ensure data quality, governance, and security compliance across the data lifecycle
- Optimise performance and cost efficiency of data pipelines and storage
- Provide technical leadership and mentoring to data engineers and developers

Mandatory Skillset:
- Azure Cloud Services: strong experience with Azure Data Lake, Azure Synapse, Azure SQL, and Azure Storage
- Azure Data Factory (ADF): expertise in building and managing complex data pipelines
- Databricks: hands-on experience with Spark-based data processing, notebooks, and ML workflows
- Data Modelling: proficiency in conceptual, logical, and physical data modelling
- SQL/Python: advanced skills for data manipulation and transformation
- Insurance Domain Knowledge: understanding of insurance data structures (claims, policy, underwriting) and regulatory requirements

Preferred Skillset:
- Power BI: experience building dashboards and visualisations
- Data Governance Tools: familiarity with tools like Purview or Collibra
- Machine Learning: exposure to ML model deployment and monitoring in Databricks
- CI/CD: knowledge of DevOps practices for data pipelines
- Certifications: Azure Data Engineer or Azure Solutions Architect

Skills: Python for Data, Java, Python, Scala, Snowflake, Azure Blob, Azure Data Factory, Azure Functions, Azure SQL, Azure Synapse Analytics, Azure Data Lake, ANSI SQL, Databricks, HDInsight

If you're excited about this role then we would like to hear from you! Please apply with a copy of your CV or send it to Prasanna (email address removed) and let's start the conversation! Randstad Technologies Ltd is a leading specialist recruitment business for the IT & Engineering industries. Please note that due to a high level of applications, we can only respond to applicants whose skills and qualifications are suitable for this position. No terminology in this advert is intended to discriminate against any of the protected characteristics that fall under the Equality Act 2010. For the purposes of the Conduct Regulations 2003, when advertising permanent vacancies we are acting as an Employment Agency, and when advertising temporary/contract vacancies we are acting as an Employment Business.
24/11/2025
Full time
Tenth Revolution Group
Databricks Engineer - £400PD - Remote
Tenth Revolution Group
Databricks Engineer - £400 per day - Remote

About the Role
We are seeking a highly skilled Azure Data Engineer with deep, hands-on Databricks experience to join our growing data engineering team. In this role, you will design, build, and optimise scalable data pipelines and lakehouse architectures on Azure, enabling advanced analytics and data-driven decision making across the business.

Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake.
- Build and optimise data lakehouse architectures on Azure Data Lake Storage (ADLS).
- Develop high-performance data solutions using Azure Synapse, Azure Data Factory, and Databricks workflows.
- Implement best practices for data governance, security, and quality across all pipelines.
- Collaborate with data scientists, analysts, and cross-functional teams to deliver reliable, production-grade data models.
- Monitor and tune pipeline performance to ensure efficiency, reliability, and cost optimisation.
- Participate in CI/CD processes and infrastructure-as-code solutions using tools like Terraform, GitHub Actions, or Azure DevOps.

Required Skills & Experience
- 3+ years' experience as a Data Engineer working in Azure environments.
- Strong hands-on experience with Databricks (PySpark, Delta Lake, cluster optimisation, job scheduling).
- Solid knowledge of Azure cloud services, including Azure Data Lake Storage, Azure Data Factory, Azure Synapse / SQL Pools, and Azure Key Vault.
- Strong programming skills in Python and SQL.
- Experience building scalable, production-grade data pipelines.
- Understanding of data modelling, data warehousing concepts, and distributed computing.
- Familiarity with CI/CD, version control, and DevOps practices.

Nice-to-Have
- Experience with streaming technologies (e.g., Spark Structured Streaming, Event Hubs, Kafka).
- Knowledge of MLflow, Unity Catalog, or advanced Databricks features.
- Exposure to Terraform or other IaC tools.
- Experience working in Agile/Scrum environments.
To apply for this role please submit your CV or contact Dillon Blackburn on (phone number removed) or at (url removed). Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
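One recurring pattern behind the ETL/ELT pipeline work these roles describe is the high-water-mark (incremental) load: each run processes only rows newer than the last stored watermark. The sketch below is hedged pure Python with invented field names; a Databricks job would instead filter a source table on the watermark column and persist the new mark between runs.

```python
# High-water-mark (incremental) load sketch in pure Python.
# Invented field names; illustrative only.

def incremental_load(source_rows, watermark):
    """Return only rows newer than the watermark, plus the new watermark."""
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    # If nothing is new, the watermark is unchanged.
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [{"id": 1, "updated_at": 100}, {"id": 2, "updated_at": 250}]
batch, wm = incremental_load(rows, watermark=150)
assert batch == [{"id": 2, "updated_at": 250}] and wm == 250
```

The design choice worth noting is that the watermark advances only after the batch is durably processed, so a failed run can safely be retried from the old mark.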
19/11/2025
Contractor
Syntax Consultancy Ltd
Databricks Engineer (SC Cleared)
Syntax Consultancy Ltd City, London
Databricks Engineer (SC Cleared) | London (Hybrid) | 6 Month Contract | £(Apply online only)/day (Inside IR35)

Databricks Engineer needed with active SC Security Clearance for a 6-month contract based in Central London (hybrid). You'll be developing a cutting-edge Azure Databricks platform for economic data modelling, analysis, and forecasting. Start ASAP in Nov/Dec 2025. Hybrid working: 2 days/week remote (WFH) and 3 days/week on-site from the Central London office. A chance to work with a leading global IT and digital transformation business specialising in Government projects:

- In-depth data engineering plus strong hands-on Azure Databricks expertise.
- Azure Data Services, Azure Data Factory, Azure Blob Storage, and Azure SQL Database.
- Designing, developing, building, and optimising data pipelines, implementing data transformations, and ensuring data quality and reliability.
- Deep data warehousing knowledge, including data modelling techniques and data integration patterns.
- Experience working with complex data pipelines, large data sets, data pipeline optimisation, and data architecture design.
- Implementing complex data transformations using Spark, PySpark, or Scala, and working with SQL/MySQL databases.
- Experience with data quality and data governance processes, Git version control, and Agile development environments.
- Azure Data Engineer certification preferred, e.g. Azure Data Engineer Associate.
- Advantageous skills: Azure Event Hubs, Kafka, data visualisation tools, Power BI, Tableau, Azure DevOps, Docker, Kubernetes, Jenkins.
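The ad above stresses "ensuring data quality and reliability". As an illustrative sketch only (pure Python with invented column names), two of the most common checks are completeness of required fields and uniqueness of keys; real pipelines would run the equivalent in PySpark or a dedicated framework such as Great Expectations.

```python
# Two common data-quality checks sketched in pure Python.
# Invented column names; illustrative only.

def quality_report(rows, key, required):
    """Count rows missing required fields and list duplicate key values."""
    missing = [r for r in rows if any(r.get(c) is None for c in required)]
    seen, dupes = set(), []
    for r in rows:
        if r[key] in seen:
            dupes.append(r[key])
        seen.add(r[key])
    return {"missing_required": len(missing), "duplicate_keys": dupes}

rows = [{"id": 1, "name": "a"}, {"id": 2, "name": None}, {"id": 2, "name": "b"}]
assert quality_report(rows, key="id", required=["name"]) == {
    "missing_required": 1,
    "duplicate_keys": [2],
}
```

A typical pipeline runs such checks between transformation stages and fails fast (or quarantines bad rows) rather than propagating bad data downstream.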
17/11/2025
Contractor
Head Resourcing
PowerBI Engineer
Head Resourcing
Power BI Report Engineer (Azure / Databricks) | Glasgow | 3-4 days onsite

Exclusive opportunity with a leading UK consumer brand. Are you a Power BI specialist who loves clean, governed data and high-performance semantic models? Do you want to work with a business that's rebuilding its entire BI estate the right way: proper Lakehouse architecture, curated Gold tables, PBIP, Git, and end-to-end governance? If so, this is one of the most modern, forward-thinking Power BI engineering roles in Scotland.

Our Glasgow-based client is transforming its reporting platform using Azure + Databricks, with Power BI sitting on top of a fully curated Gold layer. They develop everything using PBIP + Git + Tabular Editor 3, and semantic modelling is treated as a first-class engineering discipline. This is your chance to own the creation of high-quality datasets and dashboards used across Operations, Finance, Sales, Logistics, and Customer Care, turning trusted Lakehouse data into insights the business relies on every day.

Why This Role Exists
To turn clean, curated Gold Lakehouse data into trusted, enterprise-grade Power BI insights. You'll own semantic modelling, dataset optimisation, governance, and best-practice delivery across a modern BI ecosystem.

What You'll Do
Semantic modelling with PBIP + Git:
- Build and maintain enterprise PBIP datasets fully version-controlled in Git.
- Use Tabular Editor 3 for DAX, metadata modelling, calc groups, and object governance.
- Manage branching, pull requests, and releases via Azure DevOps.

Lakehouse-aligned reporting (Gold layer only):
- Develop semantic models exclusively on top of curated Gold Databricks tables.
- Work closely with Data Engineering on schema design and contract-first modelling.
- Maintain consistent dimensional modelling aligned to the enterprise Bus Matrix.

High-performance Power BI engineering:
- Optimise performance: aggregations, composite models, incremental refresh, DirectQuery/Import strategy.
- Tune Databricks SQL Warehouse queries for speed and cost efficiency.
- Monitor PPU capacity performance, refresh reliability, and dataset health.

Governance, security & standards:
- Implement RLS/OLS, naming conventions, KPI definitions, and calc groups.
- Apply dataset certification, endorsements, and governance metadata.
- Align semantic models with lineage and security policies across the Azure/Databricks estate.

Lifecycle, release & best-practice delivery:
- Use Power BI deployment pipelines for Dev → UAT → Prod releases.
- Enforce semantic CI/CD patterns with PBIP + Git + Tabular Editor.
- Build reusable, certified datasets and dataflows enabling scalable self-service BI.

Adoption, UX & collaboration:
- Design intuitive dashboards with consistent UX across multiple business functions.
- Support BI adoption through training, documentation, and best-practice guidance.
- Use telemetry to track usage and performance and improve user experience.

What We're Looking For
Required certifications (to meet BI engineering standards, candidates must hold):
- PL-300: Power BI Data Analyst Associate
- DP-600: Fabric Analytics Engineer Associate

Skills & experience:
- 3-5+ years building enterprise Power BI datasets and dashboards.
- Strong DAX and semantic modelling expertise (calc groups, conformed dimensions, role-playing dimensions).
- Strong SQL skills; comfortable working with Databricks Gold-layer tables.
- Proven ability to optimise dataset performance (aggregations, incremental refresh, DirectQuery/Import).
- Experience working with Git-based modelling workflows and PR reviews via Tabular Editor.
- Excellent design intuition: clean layouts, drill paths, and KPI logic.

Nice to have:
- Python for automation or ad-hoc prep; PySpark familiarity.
- Understanding of Lakehouse patterns, Delta Lake, and metadata-driven pipelines.
- Unity Catalog / Purview experience for lineage and governance.
- RLS/OLS implementation experience.
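Row-level security (RLS), listed in the ad above, is defined in the Power BI semantic model as DAX filter expressions attached to security roles. The pure-Python sketch below (role names and data invented) only illustrates the underlying idea: each role maps to a row filter, and unmapped roles see nothing.

```python
# Pure-Python illustration of the row-level security (RLS) idea.
# In Power BI, the equivalent is a DAX filter on a security role;
# the role names and data here are invented.

ROLE_FILTERS = {
    "uk_sales": lambda row: row["region"] == "UK",
}

def apply_rls(rows, role):
    # Deny by default: an unknown role gets a predicate that matches nothing.
    predicate = ROLE_FILTERS.get(role, lambda row: False)
    return [r for r in rows if predicate(r)]

data = [{"region": "UK", "revenue": 1}, {"region": "FR", "revenue": 2}]
assert apply_rls(data, "uk_sales") == [{"region": "UK", "revenue": 1}]
assert apply_rls(data, "other") == []
```

The deny-by-default choice mirrors good RLS practice: users not mapped to any role should not silently see unfiltered data.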
17/11/2025
Full time
Power BI Report Engineer (Azure / Databricks)
Glasgow, 3-4 days onsite
Exclusive Opportunity with a Leading UK Consumer Brand
Are you a Power BI specialist who loves clean, governed data and high-performance semantic models? Do you want to work with a business that's rebuilding its entire BI estate the right way: proper Lakehouse architecture, curated Gold tables, PBIP, Git, and end-to-end governance? If so, this is one of the most modern, forward-thinking Power BI engineering roles in Scotland.
Our Glasgow-based client is transforming its reporting platform using Azure + Databricks, with Power BI sitting on top of a fully curated Gold Layer. They develop everything using PBIP + Git + Tabular Editor 3, and semantic modelling is treated as a first-class engineering discipline. This is your chance to own the creation of high-quality datasets and dashboards used across Operations, Finance, Sales, Logistics and Customer Care, turning trusted Lakehouse data into insights the business relies on every day.
Why This Role Exists
To turn clean, curated Gold Lakehouse data into trusted, enterprise-grade Power BI insights. You'll own semantic modelling, dataset optimisation, governance and best-practice delivery across a modern BI ecosystem.
What You'll Do
Semantic Modelling with PBIP + Git
  • Build and maintain enterprise PBIP datasets fully version-controlled in Git.
  • Use Tabular Editor 3 for DAX, metadata modelling, calc groups and object governance.
  • Manage branching, pull requests and releases via Azure DevOps.
Lakehouse-Aligned Reporting (Gold Layer Only)
  • Develop semantic models exclusively on top of curated Gold Databricks tables.
  • Work closely with Data Engineering on schema design and contract-first modelling.
  • Maintain consistent dimensional modelling aligned to the enterprise Bus Matrix.
High-Performance Power BI Engineering
  • Optimise performance: aggregations, composite models, incremental refresh, DQ/Import strategy.
  • Tune Databricks SQL Warehouse queries for speed and cost efficiency.
  • Monitor PPU capacity performance, refresh reliability and dataset health.
Governance, Security & Standards
  • Implement RLS/OLS, naming conventions, KPI definitions and calc groups.
  • Apply dataset certification, endorsements and governance metadata.
  • Align semantic models with lineage and security policies across the Azure/Databricks estate.
Lifecycle, Release & Best Practice Delivery
  • Use Power BI Deployment Pipelines for Dev → UAT → Prod releases.
  • Enforce semantic CI/CD patterns with PBIP + Git + Tabular Editor.
  • Build reusable, certified datasets and dataflows enabling scalable self-service BI.
Adoption, UX & Collaboration
  • Design intuitive dashboards with consistent UX across multiple business functions.
  • Support BI adoption through training, documentation and best-practice guidance.
  • Use telemetry to track usage and performance and improve user experience.
What We're Looking For
Required Certifications
To meet BI engineering standards, candidates must hold:
  • PL-300: Power BI Data Analyst Associate
  • DP-600: Fabric Analytics Engineer Associate
Skills & Experience
  • 3-5+ years building enterprise Power BI datasets and dashboards.
  • Strong DAX and semantic modelling expertise (calc groups, conformed dimensions, role-playing dimensions).
  • Strong SQL skills; comfortable working with Databricks Gold-layer tables.
  • Proven ability to optimise dataset performance (aggregations, incremental refresh, DQ/Import).
  • Experience working with Git-based modelling workflows and PR reviews via Tabular Editor.
  • Excellent design intuition: clean layouts, drill paths, and KPI logic.
Nice to Have
  • Python for automation or ad-hoc prep; PySpark familiarity.
  • Understanding of Lakehouse patterns, Delta Lake, metadata-driven pipelines.
  • Unity Catalog / Purview experience for lineage and governance.
  • RLS/OLS implementation experience.
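The row-level security (RLS) responsibility mentioned in this listing can be illustrated with a minimal, stdlib-only Python sketch. The sales table and user-to-region mapping below are invented for the demonstration; in Power BI itself the equivalent logic would live in a role's DAX filter expression, not in Python.

```python
# Illustrative sketch of row-level security (RLS) filtering.
# Data and user mapping are hypothetical examples only.

SALES = [
    {"region": "North", "amount": 120},
    {"region": "South", "amount": 200},
    {"region": "North", "amount": 80},
]

# Which regions each user is allowed to see (assumption for the demo).
USER_REGIONS = {
    "alice@example.com": {"North"},
    "bob@example.com": {"North", "South"},
}

def rows_for(user: str) -> list:
    """Return only the rows the user's RLS role permits."""
    allowed = USER_REGIONS.get(user, set())
    return [row for row in SALES if row["region"] in allowed]

print(sum(r["amount"] for r in rows_for("alice@example.com")))  # 200
print(sum(r["amount"] for r in rows_for("bob@example.com")))    # 400
```

The point of the sketch: the report logic never changes per user; only the visible rows do, which is why RLS is defined once at the dataset level rather than per dashboard.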
Pontoon
Lead Data Engineer
Pontoon Warwick, Warwickshire
Job Title: Lead Data Engineer
Location: Warwick (Hybrid - once per week onsite)
Remuneration: From £700 per day (via umbrella company)
Contract Details: Fixed Term Contract, 6 months, Full Time
Are you ready to take the lead in transforming data engineering? Our client, a forward-thinking organisation, is seeking a highly skilled and hands-on Lead Data Engineer to drive their Data & Insights platform built on the Azure stack. This is a Databricks-heavy role, ideal for someone who thrives in technical environments and enjoys working directly with cutting-edge tools.
Why This Role Stands Out
This is not just a leadership role: it's a hands-on engineering opportunity where your Databricks expertise will be front and centre. You'll be the go-to person for all things Databricks, from architecture and configuration to pipeline promotion and troubleshooting.
Key Responsibilities
  • Lead and deliver hands-on data engineering across all layers of the Azure-based data platform.
  • Act as the Databricks SME (Subject Matter Expert), overseeing architecture, configuration, and documentation.
  • Guide and support the engineering team while contributing directly to development efforts.
  • Manage DevOps practices including branch supervision and merging.
  • Promote data pipelines and notebooks through development, testing, and production environments.
  • Enhance monitoring and control frameworks to ensure platform reliability.
  • Provide technical leadership and mentorship to a small team of Data Engineers.
Essential Skills
  • Extensive hands-on experience with Databricks: this is the core of the role.
  • Strong background in Synapse and Azure DevOps.
  • Proficiency in SQL and PySpark within a Databricks environment.
  • Proven experience leading small engineering teams.
  • Skilled in configuration management and technical documentation.
If you're a Databricks expert looking for a role that blends leadership with deep technical involvement, this is your chance to make a real impact.
Join a dynamic team and help shape the future of data engineering. Ready to elevate your career? Apply now and be a vital part of this exciting transformation! Pontoon is an employment consultancy. We put expertise, energy, and enthusiasm into improving everyone's chance of being part of the workplace. We respect and appreciate people of all ethnicities, generations, religious beliefs, sexual orientations, gender identities, and more. We do this by showcasing their talents, skills, and unique experience in an inclusive environment that helps them thrive. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you.
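The pipeline-promotion responsibility described in this role (moving notebooks through development, testing, and production) is commonly driven by per-environment configuration. A minimal stdlib-only Python sketch of the idea; all environment names, catalogs, and notebook paths here are invented for illustration, not taken from the client's platform:

```python
# Hypothetical per-environment settings for promoting a Databricks job.
ENVIRONMENTS = {
    "dev":  {"catalog": "dev_catalog",  "cluster_workers": 2},
    "test": {"catalog": "test_catalog", "cluster_workers": 4},
    "prod": {"catalog": "prod_catalog", "cluster_workers": 8},
}

PROMOTION_ORDER = ["dev", "test", "prod"]

def next_environment(current: str):
    """Return the next stage in the promotion path, or None at prod."""
    i = PROMOTION_ORDER.index(current)
    return PROMOTION_ORDER[i + 1] if i + 1 < len(PROMOTION_ORDER) else None

def render_job_config(env: str) -> dict:
    """Build the job settings for one environment from the shared template."""
    settings = ENVIRONMENTS[env]
    return {
        "notebook_path": f"/Repos/{env}/etl/daily_load",  # hypothetical path
        "base_parameters": {"catalog": settings["catalog"]},
        "num_workers": settings["cluster_workers"],
    }

print(next_environment("test"))                   # prod
print(render_job_config("prod")["num_workers"])   # 8
```

The design choice being sketched: the job definition is a single template, and only environment-scoped settings vary, so promotion is a config change rather than a code change.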
07/10/2025
Contractor
La Fosse Associates Limited
ML Ops Engineer (Contract) - Financial Services - Outside IR35
La Fosse Associates Limited
ML Ops Lead (Engineer) required for a 6-month contract with a global financial services firm in London. Join a high-impact team building out scalable AI/ML solutions on an Azure and Databricks platform supporting various areas of the organisation. You will implement a solid structure for GenAI deployment and embed best practices within the team, leading on ML Ops frameworks and CI/CD pipeline design.
  • Build and scale AI/ML tooling across quants, risk, and trading desks
  • Drive adoption of GenAI and LLMs with Azure OpenAI, Unity Catalog and more
  • Partner with quant teams and central AI functions
Required:
  • 2+ years ML Ops experience plus 3+ years AI/ML engineering
  • Excellent knowledge of ML frameworks and associated libraries in Python
  • Expertise in Azure Databricks, Python, Azure DevOps, and the ML life cycle
  • Financial data/trading/derivatives domain experience is a plus
Two-stage interview process.
Location: Hybrid - Central London office and remote (flexible)
Duration: 6 months (possible extension)
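One core ML Ops pattern this role leads on, a promotion gate inside a CI/CD pipeline, can be sketched in stdlib-only Python. The metric names, thresholds, and model names below are invented for illustration and are not taken from the firm's actual framework:

```python
# Hypothetical promotion gate: a model version may only move from
# "staging" to "production" if its evaluation metrics clear thresholds.

THRESHOLDS = {"auc": 0.80, "max_latency_ms": 50.0}  # assumed gate values

def passes_gate(metrics: dict) -> bool:
    """True if the candidate model meets every promotion threshold."""
    return (
        metrics.get("auc", 0.0) >= THRESHOLDS["auc"]
        and metrics.get("max_latency_ms", float("inf")) <= THRESHOLDS["max_latency_ms"]
    )

def promote(registry: dict, name: str, metrics: dict) -> str:
    """Move the model to production if the gate passes, else keep staging."""
    registry[name] = "production" if passes_gate(metrics) else "staging"
    return registry[name]

registry = {}
print(promote(registry, "pricing-model", {"auc": 0.86, "max_latency_ms": 38.0}))  # production
print(promote(registry, "risk-model", {"auc": 0.71, "max_latency_ms": 30.0}))     # staging
```

In a real pipeline the gate would typically run as a CI job against a model registry (e.g. Unity Catalog registered models), but the decision logic is the same: metrics in, stage transition out.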
03/10/2025
Contractor
Tenth Revolution Group
Lead Azure Data Engineer
Tenth Revolution Group
Contract Opportunity: Lead Azure Data Engineer (£500/day, Outside IR35)
We're hiring for a Lead Azure Data Engineer to join our team on a hybrid contract based in London, supporting key finance stakeholders and transforming our data platform. This role offers the chance to shape the future of financial reporting through cutting-edge cloud engineering and data architecture.
Location: Central London (UK-based candidates only)
Rate: £500/day
IR35 Status: Outside IR35
Start Date: ASAP - Interviews next week
Duration: 6 months with potential for extension
The Role
You'll lead the design and delivery of scalable data products that support financial analysis and decision-making. Working closely with BI and Analytics teams, you'll help evolve our data warehouse and implement best-in-class engineering practices across Azure.
Key Responsibilities
  • Build and enhance ETL/ELT pipelines in Azure Databricks
  • Develop facts and dimensions for financial reporting
  • Collaborate with cross-functional teams to deliver robust data solutions
  • Optimize data workflows for performance and cost-efficiency
  • Implement governance and security using Unity Catalog
  • Drive automation and CI/CD practices across the data platform
  • Explore new technologies to improve data ingestion and self-service
Essential Skills
  • Azure Databricks: expert in Spark (SQL, PySpark), Databricks Workflows
  • Data Pipeline Design: proven experience in scalable ETL/ELT development
  • Azure Services: Data Lake, Blob Storage, Synapse
  • Data Governance: Unity Catalog, access control, metadata management
  • Performance Tuning: partitioning, caching, Spark job optimization
  • Cloud Architecture: infrastructure-as-code, monitoring, automation
  • Finance Domain Knowledge: experience with financial systems and reporting
  • Data Modelling: Kimball methodology, star schemas
  • Retail Experience: preferred but not essential
About the Team
We're a collaborative data function made up of BI Developers and Data Engineers. We work end-to-end on solutions, share knowledge, and support each other's growth. Our culture values curiosity, innovation, and continuous learning.
Interested?
We have limited interview slots next week and aim to fill this role by the end of the month. Please send me a copy of your CV if you meet the requirements.
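The "facts and dimensions" responsibility in this role refers to Kimball-style star-schema modelling. A tiny stdlib-only Python sketch of the idea, one dimension with surrogate keys and a fact table that references it; the SKUs and amounts are invented for illustration:

```python
# Star-schema sketch: a product dimension with surrogate keys and a
# fact table keyed on those surrogates. Data is hypothetical.

dim_product = {}   # natural key -> surrogate key
fact_sales = []    # fact rows referencing the dimension

def dim_key(natural_key: str) -> int:
    """Return the surrogate key for a product, creating it on first sight."""
    if natural_key not in dim_product:
        dim_product[natural_key] = len(dim_product) + 1
    return dim_product[natural_key]

def load_sale(product: str, amount: float) -> None:
    """Insert one fact row keyed by the product's surrogate key."""
    fact_sales.append({"product_key": dim_key(product), "amount": amount})

for product, amount in [("SKU-1", 10.0), ("SKU-2", 25.0), ("SKU-1", 5.0)]:
    load_sale(product, amount)

# Aggregate the fact table by dimension key, as a report query would.
totals = {}
for row in fact_sales:
    totals[row["product_key"]] = totals.get(row["product_key"], 0.0) + row["amount"]

print(totals)  # {1: 15.0, 2: 25.0}
```

The design point: facts stay narrow (keys plus measures) and descriptive attributes live once in the dimension, which is what makes the same fact table reusable across many reports.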
02/10/2025
Contractor
Tenth Revolution Group
Databricks Engineer
Tenth Revolution Group City, London
Platform Engineer - Data & AI Infrastructure
Location: Remote/UK-based
Type: Full-time
You must have strong Databricks and Unity Catalog experience.
Skills/Approaches:
  • Proven experience with Terraform for Azure infrastructure as code.
  • Strong knowledge of GitHub Actions and general CI/CD principles.
  • Hands-on experience with Azure Private Link and Private Link Service.
  • Expertise in Databricks and Unity Catalog for data governance.
  • Solid understanding of Azure Policy and compliance enforcement.
  • Strong background in identity and access management, including OAuth and federated credentials.
  • Deep understanding of Azure security best practices, including Defense in Depth, BCDR, and high availability.
If you meet these requirements please send me a copy of your CV to (see below).
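The "Azure Policy and compliance enforcement" requirement boils down to policy-as-code: declarative rules evaluated against resources. A loose, stdlib-only Python analogy (real Azure Policy definitions are JSON evaluated by the platform; the resource shapes and rule names here are invented for this sketch):

```python
# Illustrative policy-as-code check, loosely analogous to Azure Policy:
# each rule inspects a resource dict and reports non-compliance.
# Resource fields and policy names are hypothetical.

RESOURCES = [
    {"name": "stgdata01", "type": "storage", "https_only": True,  "public_access": False},
    {"name": "stgtemp02", "type": "storage", "https_only": False, "public_access": True},
]

POLICIES = {
    "require-https": lambda r: r.get("https_only", False),
    "deny-public-access": lambda r: not r.get("public_access", True),
}

def evaluate(resources: list) -> dict:
    """Map each resource name to the list of policies it violates."""
    report = {}
    for res in resources:
        failed = [name for name, rule in POLICIES.items() if not rule(res)]
        report[res["name"]] = failed
    return report

print(evaluate(RESOURCES))
# {'stgdata01': [], 'stgtemp02': ['require-https', 'deny-public-access']}
```

In practice the same rules would be authored once as policy definitions and assigned at subscription or management-group scope, so enforcement happens at deployment time rather than in audit scripts.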
02/10/2025
Contractor
© 2008-2025 IT Job Board