Senior Sales Executive - Data & Analytics - Up to £100,000 salary + Commission
Location: London, United Kingdom

About the Role:
A global IT consultancy is looking for an experienced Senior Sales Executive to drive growth across the UK by selling cutting-edge data engineering, analytics, cloud data platforms, and AI-led solutions. This is a dynamic role combining new business development with account growth, ideal for a sales professional who thrives on both hunting and nurturing client relationships.

Key Responsibilities:
- Develop new business opportunities across mid-market and enterprise clients.
- Manage the full sales lifecycle with support from pre-sales and delivery teams.
- Build and maintain a healthy sales pipeline aligned to revenue targets.
- Own and grow assigned accounts through upsell and cross-sell opportunities.
- Engage with senior stakeholders (Heads of Data, Analytics Managers, IT Directors).
- Collaborate with internal teams on solutioning and accurate forecasting.

What We're Looking For:
Essential:
- 5-7 years of B2B sales experience in Data, Analytics, Cloud, or Digital services.
- Proven track record selling consulting or managed services.
- High-level understanding of data engineering, analytics, and cloud platforms (e.g., Snowflake, Databricks, Azure, AWS).
- Strong communication and stakeholder management skills.
Desirable:
- Experience with UK enterprise or mid-market clients.
- Familiarity with pre-sales and offshore delivery models.
- Industry exposure to Insurance, Public Sector, and BFSI.

Why Join:
- Opportunity to sell high-demand data and AI services.
- Strong pre-sales and delivery support.
- Clear career growth and learning opportunities.
- Competitive compensation with performance-based incentives.
08/01/2026
Full time
Data Platform Engineer - London
(AWS, Apache Spark, AWS Glue, Iceberg, S3, RDS, Redshift, Kafka/MSK, Python, Terraform, Ansible, CI/CD, Jenkins, GitLab, Snowflake, Databricks)

Working with an established FinTech client in London who is looking for a Data Platform Engineer to play a key role in defining, building, and evolving their enterprise Data Lakehouse platform during an exciting period of growth.

You'll work closely with Platform Engineering and Application Engineering teams, taking ownership of the infrastructure, patterns, standards, and tooling used to build and operate data products across the business. The role focuses on ensuring the data platform is resilient, secure, reliable, and cost-effective within an AWS environment. You'll be responsible for how the platform is operated, maintained, monitored, and extended, with a strong emphasis on observability, fault prevention, and early fault detection across AWS data services.

Automation is central to the way this team works. You'll design and maintain Infrastructure as Code and Configuration as Code solutions, supported by CI/CD pipelines, to ensure consistent, repeatable deployments and strong governance. You'll also enhance data lake integration testing, security measures, monitoring, SLAs, and operational metrics.

You'll be working for a tech-driven organisation in a collaborative environment that values engineering best practices. This client is offering the role on a hybrid basis, with an expectation of being in the office a few times per month. For more information, please get in touch.
08/01/2026
Full time
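As a hedged illustration of the observability emphasis in the role above, here is a minimal sketch of publishing a custom pipeline-health metric to CloudWatch with boto3. The namespace, metric, and dimension names are invented for the example, not taken from the posting.

```python
# Hypothetical sketch: emit a custom pipeline-health metric to CloudWatch
# so alarms can catch data-platform faults early. Namespace, metric, and
# dimension names are illustrative assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch")

def report_rows_ingested(pipeline: str, row_count: int) -> None:
    """Publish a row-count metric; a CloudWatch alarm on low values
    can flag a stalled or under-delivering ingestion job."""
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Pipelines",  # assumed namespace
        MetricData=[{
            "MetricName": "RowsIngested",
            "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
            "Value": float(row_count),
            "Unit": "Count",
        }],
    )

report_rows_ingested("orders_daily", 125_000)
```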
Tenth Revolution Group
Newcastle Upon Tyne, Tyne And Wear
Databricks Engineer - £400PD - Hybrid

We are seeking a skilled Databricks Data Engineer to design, build, and optimise scalable data pipelines and analytics platforms. In this role, you will work closely with data scientists, analysts, and product teams to deliver reliable, high-performance data solutions using Databricks and modern cloud technologies.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Databricks, Apache Spark, and Delta Lake
- Build and optimise ETL/ELT workflows for batch and streaming data
- Develop data models to support analytics, reporting, and machine learning use cases
- Ensure data quality, reliability, and performance across data platforms
- Integrate data from multiple sources including APIs, databases, and cloud storage
- Collaborate with data scientists and analysts to enable advanced analytics and ML workloads
- Implement monitoring, logging, and cost-optimisation best practices
- Follow data governance, security, and compliance standards
- Document data pipelines, architectures, and best practices

Required Qualifications
- 3+ years of experience in data engineering or a similar role
- Strong experience with Databricks and Apache Spark
- Proficiency in Python and SQL
- Experience with Delta Lake, data modelling, and performance tuning
- Hands-on experience with cloud platforms (AWS, Azure, or GCP)
- Familiarity with data orchestration tools (e.g., Airflow, Azure Data Factory)
- Solid understanding of data warehousing and big data concepts

Preferred Qualifications
- Experience with real-time/streaming data (Spark Structured Streaming, Kafka, etc.)
- Knowledge of CI/CD for data pipelines
- Experience supporting machine learning pipelines
- Databricks or cloud platform certifications

To apply for this role please submit your CV or contact Dillon Blackburn (see below).

Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
08/01/2026
Contractor
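A minimal, illustrative PySpark sketch of the kind of batch pipeline the role above describes: reading raw files, applying a transformation, and writing a Delta Lake table. The paths and column names are invented for the example.

```python
# Illustrative only: a small batch ETL job with PySpark and Delta Lake.
# Source path, table path, and columns are assumptions, not from the ad.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/orders/")  # assumed source

cleaned = (
    raw.dropDuplicates(["order_id"])                # basic data quality
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)                 # drop invalid rows
)

# Write as a Delta table; Delta adds ACID transactions and time travel
# on top of plain Parquet files.
(cleaned.write.format("delta")
        .mode("overwrite")
        .save("s3://example-bucket/curated/orders/"))
```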
Data Engineer Manager
Hybrid - London with 2/3 days WFH
Circa £85,000 - £95,000 + Attractive Bonus & Benefits

A hands-on Data Engineer Manager is required for this exciting newly created position with a prestigious and rapidly expanding business in West London. It would suit someone with formal management experience, or potentially a Lead / Senior Engineer looking to take on more managerial responsibility.

The Data Engineer Manager will play a pivotal role at the heart of our client's data & analytics operation. Having implemented a new MS Fabric-based data platform, the need now is to scale up and meet the demand to deliver data-driven insights and strategies right across the business globally. There'll be a hands-on element to the role, as you'll be troubleshooting, reviewing code, steering the team through deployments, and acting as the escalation point for data engineering. Our client can offer an excellent career development opportunity and a vibrant, creative, and collaborative work environment. This is a hybrid role based in Central / West London with the flexibility to work from home 2 or 3 days per week.

Key Responsibilities include:
- Define and take ownership of the roadmap for the ongoing development and enhancement of the data platform.
- Design, implement, and oversee scalable data pipelines and ETL/ELT processes within MS Fabric, leveraging expertise in Azure Data Factory, Databricks, and other Azure services.
- Advocate for engineering best practices and ensure long-term sustainability of systems.
- Integrate principles of data quality, observability, and governance throughout all processes.
- Participate in recruiting, mentoring, and developing a high-performing data organisation.
- Demonstrate pragmatic leadership by aligning multiple product workstreams to achieve a unified, robust, and trustworthy data platform that supports production services such as dashboards, new product launches, analytics, and data science initiatives.
- Develop and maintain comprehensive data models, data lakes, and data warehouses (e.g., utilising Azure Synapse).
- Collaborate with data analysts, Analytics Engineers, and various stakeholders to fulfil business requirements.

Key Experience, Skills and Knowledge:
- Experience leading data or platform teams in a production environment as a Senior Data Engineer, Tech Lead, Data Engineering Manager, etc.
- Proven success with modern data infrastructure: distributed systems, batch and streaming pipelines.
- Hands-on knowledge of tools such as Apache Spark, Kafka, Databricks, dbt, or similar.
- Experience building, defining, and owning data models, data lakes, and data warehouses.
- Programming proficiency in the likes of Python, PySpark, SQL, Scala, or Java.
- Experience operating in a cloud-native environment such as Azure, AWS, GCP, etc. (Fabric experience would be beneficial but is not essential).
- Excellent stakeholder management and communication skills.
- A strategic mindset, with a practical approach to delivery and prioritisation.
- Exposure to data science concepts and techniques is highly desirable.
- Strong problem-solving skills and attention to detail.

Salary is dependent on experience and expected to be in the region of £85,000 - £95,000 + an attractive bonus scheme and benefits package. For further information, please send your CV to Wayne Young at Young's Employment Services Ltd.
YES is operating as both a Recruitment Agency and a Recruitment Business.
07/01/2026
Full time
Locations: Stockholm | Copenhagen V | Berlin | München | London

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

We Are BCG X
We're a diverse team of more than 3,000 tech experts united by a drive to make a difference. Working across industries and disciplines, we combine our experience and expertise to tackle the biggest challenges faced by society today. We go beyond what was once thought possible, creating new and innovative solutions to the world's most complex problems. Leveraging BCG's global network and partnerships with leading organizations, BCG X provides a stable ecosystem for talent to build game-changing businesses, products, and services from the ground up, all while growing their career. Together, we strive to create solutions that will positively impact the lives of millions.

What You'll Do
Our BCG X teams own the full analytics value chain end to end: framing new business challenges, designing innovative algorithms, implementing and deploying scalable solutions, and enabling colleagues and clients to fully embrace AI. Our product offerings span from fully custom builds to industry-specific, leading-edge AI software solutions. As a (Senior) AI Software Engineer you'll be part of our rapidly growing engineering team and help to build the next generation of AI solutions. You'll have the chance to partner with clients in a variety of BCG regions and industries, and on key topics like climate change, enabling them to design, build, and deploy new and innovative solutions. Additional responsibilities will include developing and delivering thought leadership in scientific communities and papers, as well as leading conferences on behalf of BCG X. We are looking for talented individuals with a passion for software development, large-scale data analytics, and transforming organizations into AI-led innovative companies.
Successful candidates possess the following:
- 4+ years of experience in a technology consulting environment
- Apply software development practices and standards to develop robust and maintainable software
- Actively involved in every part of the software development life cycle
- Experienced at guiding non-technical teams and consultants in best practices for robust software development
- Optimize and enhance computational efficiency of algorithms and software design
- Motivated by a fast-paced, service-oriented environment and interacting directly with clients on new features for future product releases
- Enjoy collaborating in teams to share software design and solution ideas
- A natural problem-solver and intellectually curious across a breadth of industries and topics
- Master's degree or PhD in a relevant field of study - please provide all academic certificates showing the final grades (A-level, Bachelor, Master, PhD)

Additional tasks:
Designing and building data & AI platforms for our clients. Such platforms provide data and (Gen)AI capabilities to a wide variety of consumers and use cases across the client organization, and are often part of large (AI) transformational journeys BCG does for its clients.

Often involves the following engineering disciplines:
- Cloud Engineering
- Data Engineering (not building pipelines, but designing and building the framework)
- DevOps
- MLOps/LLMOps

Often works with the following technologies:
- Azure, AWS, GCP
- Airflow, dbt, Databricks, Snowflake, etc.
- GitHub, Azure DevOps and related developer tooling and CI/CD platforms; Terraform or other Infra-as-Code
- MLflow, AzureML or similar for MLOps; LangSmith, Langfuse and similar for LLMOps

The difference from our "AI Engineer" role: do you "use/consume" these technologies, or are you the one who "provides" them to the rest of the organization?

What You'll Bring
Technologies:
- Programming Languages: Python
- Experience with additional programming languages is a plus

Additional info
BCG offers a comprehensive benefits program, including medical, dental and vision coverage, telemedicine services, life, accident and disability insurance, parental leave and family planning benefits, caregiving resources, mental health offerings, a generous retirement program, financial guidance, paid time off, and more.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.
05/01/2026
Full time
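To ground the MLOps tooling named above, here is a minimal, hypothetical MLflow tracking sketch; the experiment name, parameters, and metric values are invented for illustration.

```python
# Hypothetical sketch of experiment tracking with MLflow, one of the
# MLOps tools listed above. Names and values are illustrative only.
import mlflow

mlflow.set_experiment("demo-churn-model")  # assumed experiment name

with mlflow.start_run():
    # Record the configuration that produced this run...
    mlflow.log_param("n_estimators", 200)
    mlflow.log_param("max_depth", 8)
    # ...and the outcome, so runs can be compared and reproduced.
    mlflow.log_metric("validation_auc", 0.91)
```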
Python Data Engineer - Multi-Strategy Hedge Fund
Location: London
Hybrid: 2 days per week on-site
Type: Full-time

About the Role
A leading multi-strategy hedge fund is seeking a highly skilled Python Data Engineer to join its technology and data team. This is a hands-on role focused on building and optimising data infrastructure that powers quantitative research, trading strategies, and risk management.

Key Responsibilities
- Develop and maintain scalable Python-based ETL pipelines for ingesting and transforming market data from multiple sources.
- Design and manage cloud-based data lake solutions (AWS, Databricks) for large volumes of structured and unstructured data.
- Implement rigorous data quality, validation, and cleansing routines to ensure accuracy of financial time-series data.
- Optimise workflows for low latency and high throughput, critical for trading and research.
- Collaborate with portfolio managers, quantitative researchers, and traders to deliver tailored data solutions for modelling and strategy development.
- Contribute to the design and implementation of the firm's security master database.
- Analyse datasets to extract actionable insights for trading and risk management.
- Document system architecture, data flows, and technical processes for transparency and reproducibility.

Requirements
- Strong proficiency in Python (pandas, NumPy, PySpark) and ETL development.
- Hands-on experience with AWS services (S3, Glue, Lambda) and Databricks.
- Solid understanding of financial market data, particularly time series.
- Knowledge of data quality frameworks and performance optimisation techniques.
- Degree in Computer Science, Engineering, or a related field.

Preferred Skills
- SQL and relational database design experience.
- Exposure to quantitative finance or trading environments.
- Familiarity with containerisation and orchestration (Docker, Kubernetes).

What We Offer
- Competitive compensation and performance-based bonus.
- Hybrid working model: 2 days per week on-site in London.
- Opportunity to work on mission-critical data systems for a global hedge fund.
- Collaborative, high-performance culture with direct exposure to front-office teams.

To Avoid Disappointment, Apply Now!

To find out more about Huxley, please visit (url removed). Huxley, a trading division of SThree Partnership LLP, is acting as an Employment Business in relation to this vacancy. Registered office: 8 Bishopsgate, London, EC2N 4BQ, United Kingdom. Partnership Number OC(phone number removed) England and Wales.
03/01/2026
Full time
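As a hedged illustration of the time-series validation work the role above describes, this short pandas sketch flags duplicate timestamps, missing bars, and invalid prices in minute-bar market data; the file path, column names, and 1-minute frequency are assumptions.

```python
# Illustrative sketch: basic quality checks on minute-bar price data.
# File path, columns, and the 1-minute frequency are assumed.
import pandas as pd

bars = pd.read_csv("prices.csv", parse_dates=["timestamp"])
bars = bars.sort_values("timestamp").set_index("timestamp")

# Duplicate timestamps usually indicate a bad merge or a double feed.
dupes = bars.index.duplicated().sum()

# Reindex onto a full 1-minute grid to expose missing bars.
full_grid = pd.date_range(bars.index.min(), bars.index.max(), freq="1min")
missing = full_grid.difference(bars.index)

# Negative or zero prices are invalid for most instruments.
bad_prices = bars[bars["close"] <= 0]

print(f"{dupes} duplicate bars, {len(missing)} missing bars, "
      f"{len(bad_prices)} bad prices")
```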
Data Engineer

An established technology consultancy is looking to hire an experienced Data Engineer to work on large-scale, customer-facing data projects while also contributing to the development of internal data services. This role blends hands-on engineering with architecture design and technical advisory work, offering exposure to enterprise clients and modern cloud platforms.

You will play a key role in designing and delivering cloud-native data platforms, working closely with engineering teams, stakeholders, and customers from initial design through to production release. The role offers variety, autonomy, and the opportunity to work with leading-edge data technologies across Azure and AWS.

The role
As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data platforms and pipelines. You will support and lead technical workshops, contribute to architecture decisions, and act as a trusted technical partner on complex data initiatives. Key responsibilities include:
- Designing and building scalable data platforms and ETL/ELT pipelines in Azure and AWS
- Implementing serverless, batch, and streaming data architectures
- Working hands-on with Spark, Python, Databricks, and SQL-based analytics platforms
- Designing Lakehouse-style architectures and analytical data models
- Feeding behavioural and analytical data back into production systems
- Supporting architecture reviews, design sessions, and technical workshops
- Collaborating with engineering, analytics, and commercial teams
- Advising customers throughout the full project lifecycle
- Contributing to internal data services, standards, and best practices

What we are looking for
Essential experience:
- Proven experience as a Data Engineer working with large-scale data platforms
- Strong hands-on experience in either Azure or AWS, with working knowledge of the other
- Azure experience with Lakehouse concepts, Data Factory, Synapse and/or Fabric
- AWS experience with Redshift, Lambda, and SQL-based analytics services
- Strong Python skills and experience using Apache Spark
- Hands-on experience with Databricks
- Experience designing and maintaining ETL/ELT pipelines
- Solid understanding of data modelling techniques
- Experience working in cross-functional teams on cloud-based data platforms
- Ability to work with SDKs and APIs across cloud services
- Strong communication skills and a customer-focused approach

Desirable experience:
- Data migrations and platform modernisation projects
- Implementing machine learning models using Python
- Consulting or customer-facing engineering roles
- Feeding analytics insights back into operational systems

Certifications (beneficial but not required):
- AWS Solutions Architect Associate
- Azure Solutions Architect Associate
- AWS Data Engineer Associate
- Azure Data Engineer Associate

What's on offer
- The opportunity to work on modern cloud and data projects using leading technologies
- A collaborative engineering culture with highly skilled colleagues
- Structured learning paths and access to training and certifications
- Certification exam fees covered and certification-related bonuses
- Competitive salary and comprehensive benefits package
- A supportive and inclusive working environment with regular knowledge sharing and team events

This role would suit a Data Engineer who enjoys combining deep technical work with customer interaction and wants to continue developing their expertise across cloud and data platforms. If you would like to find out more, then please get in contact with Jack at (url removed).
03/01/2026
Full time
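To make the streaming architectures mentioned above concrete, here is a minimal, hypothetical Spark Structured Streaming sketch that ingests a Kafka topic into a Delta landing zone; the broker address, topic, and storage paths are invented for the example.

```python
# Hypothetical sketch of a streaming ingestion path with Spark Structured
# Streaming. Broker, topic, and sink/checkpoint paths are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

events = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")  # assumed
               .option("subscribe", "events")                     # assumed topic
               .load())

# Kafka delivers key/value as binary; cast to a string before parsing.
decoded = events.selectExpr("CAST(value AS STRING) AS json_payload")

# Append into a Delta landing table; the checkpoint makes the stream
# restartable with exactly-once sink semantics.
query = (decoded.writeStream
                .format("delta")
                .option("checkpointLocation", "/lake/_checkpoints/events")
                .outputMode("append")
                .start("/lake/bronze/events"))

query.awaitTermination()
```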
Data Science Engineer

My client is recruiting for a Data Science Engineer to design, develop, and deliver AI and analytics solutions aligned with the organisation's Data & AI strategy.

Key Responsibilities
- End-to-end development of AI/ML solutions.
- MLOps practices: CI/CD, model monitoring, retraining.
- Use of open-source and enterprise tools (LangChain, Azure OpenAI, Databricks).
- Generative AI features: embeddings, RAG, AI agents.
- Clean, testable code with modern engineering practices.
- Align with enterprise architecture and governance.
- Collaborate with architects and stakeholders.
- Lifecycle management of models.
- Pilot emerging technologies.

Experience & Skills
- 2-4 years in production-level AI/ML delivery.
- Legal/professional services experience is a plus.
- AI/ML frameworks: PyTorch, TensorFlow, LangChain.
- Cloud: Azure (preferred), AWS, GCP.
- MLOps: CI/CD, model lifecycle, monitoring.
- Generative AI: LLMs, RAG, chat agents.
- Data engineering alignment: ETL, governance.
- Strong coding, communication, and collaboration skills.
- Strategic thinking, problem-solving, and stakeholder engagement.

In accordance with the Employment Agencies and Employment Businesses Regulations 2003, this position is advertised on the basis of DGH Recruitment Limited having first sought the approval of its client to find candidates for this position. DGH Recruitment Limited acts as both an Employment Agency and an Employment Business.
31/12/2025
Full time
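As a hedged sketch of the RAG retrieval step mentioned above, the snippet below ranks candidate documents by cosine similarity between embedding vectors. The embed() function is a stand-in for a real embedding model (for example, an Azure OpenAI embeddings call), and the documents are invented, so the ranking here is only structural.

```python
# Hedged sketch of the retrieval step in a RAG pipeline: embed a query,
# rank documents by cosine similarity, keep the top hits as LLM context.
# embed() is a hypothetical placeholder, not a real embedding model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here,
    # so these random vectors carry no semantic meaning.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(16)

documents = ["Contract termination clauses", "Billing FAQ", "GDPR policy"]
doc_vectors = [embed(d) for d in documents]

query_vec = embed("How do I cancel a contract?")
ranked = sorted(
    zip(documents, doc_vectors),
    key=lambda pair: cosine_similarity(query_vec, pair[1]),
    reverse=True,
)
top_context = [doc for doc, _ in ranked[:2]]  # context for the LLM prompt
print(top_context)
```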
Deerfoot Recruitment Solutions Limited
City, London
Data Engineering Technical Lead - Global Investment Bank
London - Hybrid
Permanent - Excellent Package + Benefits

We are working with one of the world's leading banking groups, who we have partnered with for 15 years. We are seeking an experienced Data Architect / EDM Developer / Data Engineering Lead to join their International Technology team in London. You will be a key part of the Architecture, Middleware, Data & Enterprise Services (AMD) division, driving data engineering, integration, and automation initiatives across our client's EMEA banking and securities entities. This is a hands-on leadership role, combining technical expertise with mentoring and team leadership.

Key Responsibilities
- Architect, design, and deliver enterprise-wide EDM and data solutions.
- Lead and mentor EDM developers, ensuring high-quality, cost-effective delivery.
- Drive data innovation, automation, and best practices across EMEA.
- Translate business requirements into functional and technical designs.
- Ensure compliance with SDLC, governance, and risk policies.

Skills & Experience - Essential
- Strong SQL Server or Snowflake skills.
- Advanced knowledge of low-code/no-code data engineering / ETL tools - ideally Markit EDM (v19.2+) or similar (e.g. Informatica).
- Proven delivery experience in the Financial Services / Banking sector.
- Deep understanding of SDLC, systems integration, and data warehousing.
- Ability to gather requirements and liaise effectively with business stakeholders.

Desirable Skills
- Cloud (AWS / Azure), Python, PowerShell, APIs.
- Data pipelines, lineage, automation.
- BI tools (Power BI, Tableau, SSRS).
- Modern data architectures (lakehouse, data mesh).
- CI/CD, GitHub, Control-M, dbt/Databricks.

This is an opportunity to join a global top-5 bank with long-term stability, world-class resources, and clear career progression routes.

Related job titles: Enterprise Data Architect, EDM Developer, Data Engineering Lead, Data Architect, ETL Developer, Data Solutions Architect, Senior Data Engineer (Financial Services).

Apply today for full details. Deerfoot Recruitment Solutions Ltd is a leading independent tech recruitment consultancy in the UK. For every CV sent to clients, we donate £1 to The Born Free Foundation. We are a Climate Action Workforce in partnership with Ecologi. If this role isn't right for you, explore our referral reward programme with payouts at interview and placement milestones; visit our website for details. Deerfoot Recruitment Solutions Ltd is acting as an Employment Agency in relation to this vacancy.
30/12/2025
Full time
As a Software Development Manager, you will lead the Global Content Delivery team, ideally bringing previous experience of working in Financial Services. This is a "Player/Coach" role, where you will both lead and contribute hands-on to the overall success of the team. The Software Development Manager will be responsible for transforming and optimising the delivery of software applications and data solutions, leading a highly motivated team to build innovative, scalable, and resilient software and data solutions that empower the organisation.

Key aspects of the role:
- Manage partnerships with business leaders, stakeholders, and IT to develop and promote content delivery data solutions; translate stakeholder needs into technical solutions.
- As a player/coach, shape the product, architecture, software design, and engineering, with a focus on intuitive front-end creation, robust back-end frameworks, and strong data design.
- Lead a globally dispersed team.
- Experience with AWS, Databricks, dbt, PySpark, React, JavaScript, Terraform.
- Experience with AI/ML/Gen AI/LLMs.

What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion about your career. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers which can be found at (url removed).
29/12/2025
Full time
I'm working with a world-class technology company in Edinburgh to help them find a Lead Data Engineer to join their team (hybrid working, but there is flex on this for the right person). This is your chance to take the technical lead on complex, large-scale data projects that power real-world products used by millions of people. The organisation has been steadily growing for a number of years and has become a market leader in its field, so it's genuinely a really exciting time to join!

You'll be joining a forward-thinking team that's passionate about doing things properly, with a modern tech stack, a cloud-first approach, and a genuine commitment to engineering excellence. As Lead Data Engineer, you'll be hands-on in designing and building scalable data platforms and pipelines that enable advanced analytics, machine learning, and business-critical insights. You'll shape the technical vision, set best practices, and make key architectural decisions that define how data flows across the organisation.

You won't be working in isolation either, as collaboration is at the heart of this role. You'll work closely with engineers, product managers, and data scientists to turn ideas into high-performing, production-ready systems. You'll also play a big part in mentoring others, driving standards across the team, and influencing the overall data strategy.

The ideal person for this role will have a strong background in data engineering, with experience building modern data solutions using technologies like Kafka, Spark, Databricks, dbt, and Airflow. You'll know your way around cloud platforms (AWS, GCP, or Azure) and be confident coding in Python, Java, or Scala. Most importantly, you'll understand what it takes to design data systems that are scalable, reliable, and built for the long haul.

In return, they are offering a competitive salary (happy to discuss prior to application) and great benefits, which include uncapped holidays and multiple bonuses! Their office in central Edinburgh is only a short walk from Haymarket train station. The role is hybrid (ideally 1 or 2 days in the office); however, they can be flexible on this for the right candidate.

If you're ready to step into a role where your technical leadership will have a visible impact and where you can build data systems that continue to scale, then please apply or contact Matthew MacAlpine at Cathcart Technology. Cathcart Technology is acting as an Employment Agency in relation to this vacancy.
24/12/2025
Full time
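To illustrate the orchestration tooling named in the role above, here is a minimal, hypothetical Airflow DAG wiring an extract step to a load step on a daily schedule; the DAG id and task logic are invented for the example.

```python
# Hypothetical sketch of a small Airflow DAG: extract then load, daily.
# DAG id and task bodies are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw data")   # placeholder for a real extract step

def load():
    print("loading warehouse")  # placeholder for a real load step

with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```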
AI Platform Engineer - London - Excellent Salary + Benefits. Join an award-winning, internationally recognised B2B consultancy as an AI Platform Engineer, owning the cloud-native platform that underpins conversational AI and generative AI products at scale. Sitting at the core of AI delivery, you will design, build, and operate the runtime, infrastructure, and operational layers supporting RAG pipelines, LLM orchestration, vector search, and evaluation workflows across AWS and Databricks. Working closely with senior AI engineers and product teams, you'll ensure AI systems are scalable, observable, secure, and cost-efficient, turning experimental AI into reliable, production-grade capabilities. Further responsibilities are detailed below: Own and evolve the AI platform powering conversational assistants and generative AI products. Build, operate, and optimise RAG and LLM-backed services, improving latency, reliability, and cost. Design and run cloud-native AI services across AWS and Databricks, including ingestion and embedding pipelines. Scale and operate vector search infrastructure (Weaviate, OpenSearch, Algolia, AWS Bedrock Knowledge Bases). Implement strong observability, CI/CD, security, and governance across AI workloads. Enable future architectures such as multi-model orchestration and agentic workflows. Required Skills & Experience: Strong experience designing and operating cloud-native platforms on AWS (Lambda, API Gateway, DynamoDB, S3, CloudWatch). Hands-on experience with Databricks and large-scale data or embedding pipelines. Proven experience building and operating production AI systems, including RAG pipelines, LLM-backed services, and vector search (Weaviate, OpenSearch, Algolia). Proficiency in Python, with experience deploying containerised services on Kubernetes using Terraform. Solid understanding of distributed systems, cloud architecture, and API design, with a focus on scalability and reliability. Demonstrable ownership of observability, performance, cost efficiency, and operational robustness in production environments. Why Join? You'll own the foundational AI platform behind a growing suite of generative AI products, working with senior AI leaders on systems used by real customers at scale. This role offers deep technical ownership, long-term impact, and an excellent compensation package within a market-leading organisation.
23/12/2025
Full time
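To make the RAG workflow described in the role above concrete, here is a minimal Python sketch of the retrieve-then-generate shape. It is purely illustrative: the embed() function, the in-memory corpus, and the assembled prompt are hypothetical stand-ins for a real embedding model, a vector store such as Weaviate or OpenSearch, and an LLM endpoint.

```python
# Minimal sketch of the retrieve-then-generate shape behind a RAG service.
# Everything here is illustrative: embed(), the corpus, and the prompt are
# hypothetical stand-ins for a real embedding model, vector store, and LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedder: hash characters into a fixed-size unit vector."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

CORPUS = [
    "Invoices are generated on the first working day of each month.",
    "Refunds are processed within five working days of approval.",
    "Support is available 24/7 via the in-app assistant.",
]
INDEX = np.stack([embed(doc) for doc in CORPUS])  # acts as the vector store

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = INDEX @ embed(query)        # cosine similarity on unit vectors
    top = np.argsort(scores)[::-1][:k]
    return [CORPUS[i] for i in top]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # In production this prompt would go to an LLM endpoint; returning it
    # here shows what the orchestration layer assembles.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(answer("When do refunds land?"))
```

The production version would swap the stub embedder and corpus for managed services, but the orchestration layer the role owns keeps this same embed, retrieve, assemble structure.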
Data Engineer, Remote - Modern Cloud Data Stack - £45,000 PA DOE. This is a high-visibility opportunity in an ambitious, values-led organisation refreshing its data strategy and modernising its intelligence platform. You'll be trusted early, work closely with stakeholders, and build the foundations that drive better insight, smarter decisions, and meaningful impact, using data for good. It's ideal for someone early in their journey with 2+ years' experience, ready to step up. You'll join a supportive, encouraging environment with real runway to grow technically and start developing leadership skills as your ownership and influence increase across the business. What you'll do: Help shape and deliver a refreshed data strategy and modern intelligence platform. Build reliable, scalable ELT/ETL pipelines into a cloud data warehouse (Snowflake, Databricks, or similar). Build and optimise core data models and transformations: dimensional, analytics-ready, built to last. Create trusted data products that enable self-service analytics across the organisation. Improve data quality, monitoring, performance, and cost efficiency. Partner with analysts, BI, and non-technical teams to turn questions into robust data assets. Contribute to standards, best practice, and reusable engineering frameworks. Support responsible AI tooling, including programmatic LLM workflows where relevant. What you'll bring: 2+ years' experience in data engineering within a modern stack. Strong SQL and a solid modelling foundation. Python (preferred) or similar for pipeline development and automation. Cloud experience: AWS, Azure, or GCP. Familiarity with orchestration and analytics engineering tools (dbt, Airflow, or similar). Strong habits around governance, security, documentation, version control (Git), and CI/CD. The kind of person who thrives here: confident, curious, and motivated. You care about doing things properly, you enjoy being visible and trusted in the business, and you're passionate about using data to create positive outcomes. Excited? APPLY NOW. No Sponsorship - Post Grad Visa
18/12/2025
Full time
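The "reliable, scalable ELT" this role asks for usually comes down to patterns like incremental loading. Below is a minimal, hedged sketch of a watermark-based incremental load; sqlite3 stands in for the real source system and warehouse, and the table and column names are invented for illustration.

```python
# Tiny illustration of the incremental-load pattern behind reliable ELT:
# track a watermark so each run only moves new rows. sqlite stands in for
# the real source and warehouse; table and column names are hypothetical.
import sqlite3

src = sqlite3.connect(":memory:")
wh = sqlite3.connect(":memory:")

src.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
src.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, "2025-01-01"), (2, "2025-01-02"), (3, "2025-01-03")])

wh.execute("CREATE TABLE orders (id INTEGER, updated_at TEXT)")
wh.execute("CREATE TABLE etl_state (tbl TEXT PRIMARY KEY, watermark TEXT)")

def load_increment() -> int:
    row = wh.execute(
        "SELECT watermark FROM etl_state WHERE tbl = 'orders'").fetchone()
    watermark = row[0] if row else ""
    rows = src.execute(
        "SELECT id, updated_at FROM orders WHERE updated_at > ?",
        (watermark,)).fetchall()
    if rows:
        wh.executemany("INSERT INTO orders VALUES (?, ?)", rows)
        new_mark = max(r[1] for r in rows)
        # Upsert the watermark so the next run starts where this one ended.
        wh.execute(
            "INSERT INTO etl_state VALUES ('orders', ?) "
            "ON CONFLICT(tbl) DO UPDATE SET watermark = excluded.watermark",
            (new_mark,))
    return len(rows)

print(load_increment())  # 3 on the first run
print(load_increment())  # 0 - nothing new to move
```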
Principal Data Engineer - Hybrid (London/Winchester) We're seeking a hands-on Principal Data Engineer to design and deliver enterprise-scale, cloud-native data platforms that power analytics, reporting, and real-time decision-making. This is a strategic technical leadership role where you'll shape architecture, mentor engineers, and deliver end-to-end solutions across a modern AWS/Databricks stack. What you'll do: Lead the design of scalable, secure data architectures on AWS. Build and optimise ETL/ELT pipelines for batch and streaming data. Deploy and manage Apache Spark jobs on Databricks and Delta Lake. Write production-grade Python and SQL for large-scale data transformations. Drive data quality, governance, and automation through CI/CD and IaC. Collaborate with data scientists, analysts, and business stakeholders. Mentor and guide data engineering teams. What we're looking for: Proven experience in senior/principal data engineering roles. Expertise in AWS, Databricks, Apache Spark, Python, and SQL. Strong background in cloud-native data platforms, real-time processing, and data lakes. Hands-on experience with tools such as Airflow, Kafka, Docker, and GitLab CI/CD. Excellent stakeholder engagement and leadership skills. What's on offer: £84,000 salary + 10% bonus. 6% pension contribution. Private medical & flexible benefits package. 25 days annual leave (plus buy/sell options). Hybrid working - travel to London or Winchester once/twice per week. Join a company at the forefront of media, connectivity, and smart technology, where your work directly powers millions of daily connections across the UK.
06/10/2025
Full time
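As a rough illustration of the Spark-on-Databricks work described above, here is a small PySpark batch job that aggregates raw events and writes a partitioned Delta table. The S3 paths and column names are hypothetical, and Delta Lake support is assumed to be available on the cluster (as it is on Databricks runtimes).

```python
# Illustrative PySpark batch job of the kind the role describes: read raw
# events, aggregate, and write a Delta table. Paths and column names are
# hypothetical; Delta support is assumed (it ships with Databricks runtimes).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-usage-rollup").getOrCreate()

events = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "customer_id")
    .agg(F.count("*").alias("event_count"),
         F.sum("bytes_used").alias("bytes_used"))
)

# Partitioning by date and overwriting keeps reruns of a day idempotent.
(daily.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .save("s3://example-bucket/curated/daily_usage/"))
```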
Senior AWS Data Engineer - London - £125,000 Please note: this role will require you to work from the London-based office. You must have the unrestricted right to work in the UK to be eligible for this role. This organisation is not able to offer sponsorship. An exciting opportunity to join a greenfield initiative focused on transforming how market data is accessed and utilised. As a Senior AWS Data Engineer, you'll play a key role in designing and building a cutting-edge data platform using technologies like Databricks, Snowflake, and AWS Glue. Key Responsibilities: Build and maintain scalable data pipelines, warehouses, and lakes. Design secure, high-performance data architectures. Develop processing and analysis algorithms for complex data sets. Collaborate with data scientists to deploy machine learning models. Contribute to strategy, planning, and continuous improvement. Required Experience: Hands-on experience with AWS data tools: Glue, PySpark, Athena, Iceberg, Lake Formation. Strong Python and SQL skills for data processing and analysis. Deep understanding of data governance, quality, and security. Knowledge of market data and its business applications. Desirable Experience: Experience with Databricks and Snowflake. Familiarity with machine learning and data science concepts. Strategic thinking and ability to influence cross-functional teams. This role offers the chance to work across multiple business areas, solve complex data challenges, and contribute to long-term strategic goals. You'll be empowered to lead, collaborate, and innovate in a dynamic environment. To apply for this role, please submit your CV or contact David Airey. Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
03/10/2025
Full time
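For a flavour of the AWS-side tooling this role lists, the sketch below runs a query through Athena with boto3 and polls for completion; this is the sort of glue code a platform team would wrap in a shared library. The region, database, table, and results bucket are placeholder values, not details from the advert.

```python
# Hedged sketch of querying the lake through Athena with boto3.
# Database, table, region, and the S3 results bucket are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-2")

def run_query(sql: str, database: str, output: str) -> str:
    """Start an Athena query and block until it finishes."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return qid

qid = run_query(
    "SELECT symbol, avg(price) FROM ticks GROUP BY symbol",  # hypothetical table
    database="market_data",
    output="s3://example-athena-results/",
)
rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```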
Why Greencore? We're a leading manufacturer of convenience food in the UK and our purpose is to make everyday taste better! We're a vibrant, fast-paced leading food manufacturer, employing 13,300 colleagues across 16 manufacturing units and 17 distribution depots across the UK. We supply all the UK's food retailers with everything from sandwiches, soups and sushi to cooking sauces, pickles and ready meals, and in FY24 we generated revenues of £1.8bn. Our vast direct-to-store (DTS) distribution network, comprising 17 depots nationwide, enables us to make over 10,500 daily deliveries of our own chilled and frozen produce and that of third parties. Why is this exciting for your career as a Senior Data Engineer? The MBE Programme presents a huge opportunity for colleagues across the technology function to play a central role in the design, shape, delivery and execution of an enterprise-wide digital transformation programme. The complexity of the initiative, within a FTSE 250 business, will allow for large-scale problem solving, group-wide impact assessment and supporting the delivery of an enablement project to future-proof the business. Why did we embark on Making Business Easier? Over time, processes have become increasingly complex, increasing both the risk and cost they pose while restricting our agility. At the same time, our customers and the market expect more from us than ever before. Making Business Easier forms a fundamental foundation for our commercial and operational excellence agendas, whilst helping us manage our cost base effectively in the future. The MBE Programme will streamline and simplify core processes, provide easier access to quality business data and will invest in the right technology to enable these processes. What you'll be doing: As a Senior Data Engineer, you will play a key role in shaping and delivering enterprise-wide data solutions that translate complex business requirements into scalable, high-performance data platforms. In this role, you will help define and guide the structure of data systems, focusing on seamless integration, accessibility, and governance, while optimising data flows to support both analytics and operational needs. Collaborating closely with business stakeholders, data engineers, and analysts, you will ensure that data platforms are robust, efficient, and adaptable to evolving business priorities.
You will also support the usage, alignment, and consistency of data models, and will therefore have a wide-ranging role across many business projects and deliverables. Shape and implement data solutions that align with business objectives and leverage both cloud and on-premise technologies. Translate complex business needs into scalable, high-performing data solutions. Support the development and application of best practices in data governance, security, and system design. Collaborate closely with business stakeholders, product teams, and engineers to design and deliver effective, integrated data solutions. Optimise data flows and pipelines to enable a wide range of analytical and operational use cases. Promote data consistency across transactional and analytical systems through well-designed integration approaches. Contribute to the design and ongoing improvement of data platforms - including data lakes, data warehouses, and other distributed storage environments - focused on efficiency, scalability, and ease of maintenance. Mentor and support junior engineers and analysts in applying best practices in data engineering and solution design. What you'll need: 5+ years of Data Engineering experience, with expertise in Azure data services and/or Microsoft Fabric. Strong expertise in designing scalable data platforms and managing cloud-based data ecosystems. Proven track record in data integration, ETL processes, and optimising large-scale data systems. Expertise in cloud-based data platforms (AWS, Azure, Google Cloud) and distributed storage solutions. Proficiency in Python, PySpark, SQL, NoSQL, and data processing frameworks (Spark, Databricks). Expertise in ETL/ELT design and orchestration in Azure, as well as pipeline performance tuning and optimisation. Competent in integrating relational, NoSQL, and streaming data sources. Management of CI/CD pipelines and Git-based workflows. Good knowledge of data governance, privacy regulations, and security best practices. Experience with modern data architectures, including data lakes, data mesh, and event-driven data processing. Strong problem-solving and analytical skills to translate complex business needs into scalable data solutions. Excellent communication and stakeholder management to align business and technical goals. High attention to detail and commitment to data quality, security, and governance. Ability to mentor and guide teams, fostering a culture of best practices in data architecture. Power BI and DAX for data visualisation (desirable). Knowledge of Azure Machine Learning and AI services (desirable). Experience with streaming platforms like Event Hub or Kafka. Familiarity with cloud cost optimisation techniques (desirable). What you'll get: Competitive salary and job-related benefits. 25 days holiday allowance plus bank holidays. Car Allowance. Annual Target Bonus. Pension up to 8% matched. PMI Cover. Individual Life insurance up to 4x salary. Company share save scheme. Greencore Qualifications. Exclusive Greencore employee discount platform. Access to a full Wellbeing Centre platform.
03/10/2025
Full time
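The data-quality and governance emphasis in this role often translates into automated gates in the pipeline. The following is a hedged PySpark sketch of such a gate: it counts rule violations on a curated table and fails the run if any are found. The table name, columns, and the specific checks are illustrative assumptions, not Greencore specifics.

```python
# Illustrative data-quality gate for a pipeline run: fail loudly if
# critical checks regress. Table name, columns, and checks are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()
orders = spark.read.table("curated.orders")  # hypothetical curated table

total = orders.count()
checks = {
    "null_order_id": orders.filter(F.col("order_id").isNull()).count(),
    "negative_amount": orders.filter(F.col("amount") < 0).count(),
    "dup_order_id": total - orders.select("order_id").distinct().count(),
}

failures = {name: n for name, n in checks.items() if n > 0}
if failures:
    # Failing here keeps bad rows out of downstream models and reports.
    raise ValueError(f"Data-quality gate failed: {failures}")
print(f"All checks passed on {total} rows")
```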
Base Location: Reading / Havant / Perth. Salary: £600 per day. Working Pattern: 40 hours per week / Full time. Embark on a transformative career journey with SSE, an energy company where innovation meets impact in the heart of the IT sector. As a pivotal player in our forward-thinking team, you'll harness cutting-edge technology to drive change and propel the UK towards its ambitious net-zero targets. Your expertise will not only shape the future of energy but also carve a sustainable world for generations to come. Join us and be at the forefront of the green revolution, where every line of code contributes to a cleaner, brighter future. Key Responsibilities: Provide technical leadership and oversight to the group Data & Analytics platform team. Ensure the reliability, security and scalability of analytics platform services. Deliver full automation of the deployment of Data & Analytics platform services via infrastructure as code. Help to set development standards, configure operational support processes and provide technical assurance. Provide support to Data & Analytics platform users and internal development teams interacting with the Data & Analytics platform services. What do you need? Extensive experience of deploying Azure (and ideally AWS) cloud resources, and full familiarity with agile and DevOps development methodology. Extensive experience in using Terraform to deploy cloud resources as infrastructure as code. Excellent understanding of CI/CD principles and experience with related tools (e.g. Azure DevOps, GitHub Actions). Strong knowledge of scripting languages such as PowerShell, Python and Azure CLI, and proven experience with automation runbooks, VM maintenance scripts and SQL. Strong understanding of cloud access control and governance, such as RBAC and IAM. Strong knowledge of cloud networking (Azure), such as private endpoints, firewalls, NSGs, NAT gateways and route tables. Good knowledge of Microsoft Entra ID, such as managing app registrations, enterprise apps, AD groups, managed identities and Privileged Identity Management. Proven experience with IaaS such as virtual machines, both Windows and Linux. Familiarity with server patching and maintenance. Strong understanding of security best practices within Azure and ideally AWS. Experience of configuring cloud data services (preferably Databricks) in Azure and ideally AWS. Excellent communication and collaboration skills, with the ability to work across multiple technical and non-technical teams. What happens now? After submitting your application for the Data and Analytics Senior Development Operations Engineer role, we understand you're eager to hear back. We value your time and interest, and if your application is successful, you will be contacted directly by the team within 2 working days. We appreciate your patience and look forward to the possibility of welcoming you aboard.
01/10/2025
Contractor
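As an illustration of the Python automation runbooks this role mentions, here is a hedged sketch that deallocates tagged VMs using the Azure SDK for Python (the azure-identity and azure-mgmt-compute packages). The subscription ID, resource group, and tag convention are placeholders, not SSE specifics.

```python
# Hedged sketch of a VM maintenance runbook: list the VMs in a resource
# group and deallocate any tagged for overnight shutdown. Subscription,
# resource group, and tag name are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "rg-data-platform"                        # hypothetical

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for vm in client.virtual_machines.list(RESOURCE_GROUP):
    tags = vm.tags or {}
    if tags.get("shutdown_policy") == "overnight":
        print(f"Deallocating {vm.name}")
        # begin_deallocate returns a poller; .result() blocks until done.
        client.virtual_machines.begin_deallocate(
            RESOURCE_GROUP, vm.name).result()
```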
Lead Data Engineer (Databricks) - Leeds (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer) Our client is a global innovator and world leader with one of the most recognisable names within technology. They are looking for a Lead Data Engineer with significant Databricks experience and leadership responsibility to run an exceptional Agile engineering team and provide technical leadership through coaching and mentorship. We are seeking a Lead Data Engineer capable of leading client delivery to the highest standards. This will include working with architects, creating automated tests, instilling a culture of continuous improvement and setting standards for the team. You will be responsible for building a greenfield modern data platform using cutting-edge technologies, architecting big data solutions and developing complex enterprise data ETL and ML pipelines and projections. The successful candidate will have strong Python, PySpark and SQL experience, a clear understanding of Databricks, and a passion for Data Science (R, Machine Learning and AI). Database experience with SQL and NoSQL (Aurora, MS SQL Server, MySQL) is expected, as well as significant Agile and Scrum exposure along with SOLID principles. Continuous Integration tools, infrastructure as code and strong cloud platform knowledge, ideally with AWS, are also key. We are keen to hear from talented Lead Data Engineer candidates from all backgrounds. This is a truly amazing opportunity to work for a prestigious brand that will do wonders for your career. They invest heavily in training and career development, with unlimited career progression for top performers. Location: Leeds. Salary: £55k - £70k + Pension + Benefits. To apply for this position please send your CV to Nathan Warner at Noir Consulting. (Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI/CD, SOLID principles, GitHub, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Lead Data Engineer, Team Lead, Technical Lead, Senior Data Engineer, Data Engineer)
01/10/2025
Full time
Databricks Engineer - London - hybrid, 3 days per week on-site - 6 months+ - Umbrella only - Inside IR35. Key Responsibilities: Design, develop, and maintain ETL/ELT pipelines using Airflow for orchestration and scheduling. Build and manage data transformation workflows in dbt running on Databricks. Optimise data models in Delta Lake for performance, scalability, and cost efficiency. Collaborate with analytics, BI, and data science teams to deliver clean, reliable datasets. Implement data quality checks (dbt tests, monitoring) and ensure governance standards. Manage and monitor Databricks clusters and SQL Warehouses to support workloads. Contribute to CI/CD practices for data pipelines (version control, testing, deployments). Troubleshoot pipeline failures, performance bottlenecks, and scaling challenges. Document workflows, transformations, and data models for knowledge sharing. Required Skills & Qualifications: 3-6 years of experience as a Data Engineer (or similar). Hands-on expertise with: dbt (dbt-core, dbt-databricks adapter, testing and documentation); Apache Airflow (DAG design, operators, scheduling, dependencies); Databricks (Spark, SQL, Delta Lake, job clusters, SQL Warehouses). Strong SQL skills and understanding of data modelling (Kimball, Data Vault, or similar). Proficiency in Python for scripting and pipeline development. Experience with CI/CD tools (e.g., GitHub Actions, GitLab CI, Azure DevOps). Familiarity with cloud platforms (AWS, Azure, or GCP). Strong problem-solving skills and ability to work in cross-functional teams. All profiles will be reviewed against the required skills and experience. Due to the high number of applications, we will only be able to respond to successful applicants in the first instance. We thank you for your interest and the time taken to apply!
01/10/2025
Contractor
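To ground the Airflow-plus-dbt orchestration this contract describes, here is a minimal DAG sketch (Airflow 2.4+) chaining an ingest step into dbt run and dbt test. The script path, project directory, and schedule are assumptions for illustration only; BashOperator is used so the sketch runs on any standard Airflow install.

```python
# Minimal Airflow DAG sketch: ingest, then dbt run, then dbt test.
# The script path, dbt project directory, and schedule are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="daily_dbt_refresh",
    start_date=datetime(2025, 1, 1),
    schedule="0 3 * * *",  # nightly, after source loads land (Airflow 2.4+)
    catchup=False,
    default_args=default_args,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw",
        bash_command="python /opt/pipelines/ingest.py",  # hypothetical script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    # The dependency chain below is what Airflow schedules and retries;
    # dbt tests only run against models that built successfully.
    ingest >> transform >> test
```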