Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

348 jobs found

Current search: systems engineer python and sql
Data Engineer
Involved Productions Ltd, London
We’re looking for a Data Engineer to work across the Involved Group, the collective behind globally renowned dance and electronic music labels including Anjunabeats and Anjunadeep, spanning label services and distribution, music publishing, events promotion and artist management. This is a key role within our Technology Department, responsible for developing and managing data pipelines, automating data collection processes, and creating analytics dashboards that provide actionable insights across the company and directly inform strategy. The role involves working closely with a variety of departments to understand their data needs and developing solutions that streamline data analysis and reporting. Reporting to the Head of Technology, our Data Engineer ensures that data analytics initiatives are strategically aligned, efficiently executed, and contribute to the company's overall objectives.

Location: Bermondsey, London
Working pattern: Part-time (3 days/week) – either in-person at our lively Bermondsey office, hybrid, or home-working.

Who we are:
Based in Bermondsey, the Involved group of companies includes:
• Involved Productions, home of globally renowned independent dance and electronic music labels Anjunabeats, Anjunadeep and Anjunachill, as well as our label and distribution services.
• Involved Live, the touring and events company responsible for a portfolio of international events, festivals, and all-night-long showcases, creating unforgettable experiences for fans globally.
• Involved Publishing, a progressive independent music publisher representing cutting-edge producers, writers and artists from around the world.
• Involved Management, a boutique artist management company responsible for steering the careers of Above & Beyond, Lane 8, Le Youth and Dusky.
We offer careers, not just jobs, and our team embraces the entrepreneurial spirit, independent mindset and respectful culture we have created, building community and connection through music.

Our Data Engineer is responsible for:
• Analytics dashboard creation: developing and optimising Tableau dashboards that provide clear, actionable insights to various teams, including Streaming & Promotions, Label Directors, and Publishing.
• Data pipeline development: designing, building, and maintaining efficient and scalable data pipelines to automate the collection, transformation, and delivery of data to and from various sources, including DSPs, FUGA Analytics, Google Analytics, Chartmetric, Curve, etc.
• Database management: developing and maintaining the company’s database structure, ensuring data accuracy, security, and accessibility for analytics purposes.
• Teaching: providing support and training to ensure teams make effective use of analytics tools and dashboards.
• Tailoring: collaborating with different departments to understand their data needs, and working creatively to provide tailored analytics solutions.
• Building: supporting the Head of Technology in building and maintaining cross-platform automations.
• Innovation and research: staying up to date with the latest trends and technologies in data engineering and analytics, and exploring new tools and methodologies that can enhance our data capabilities.
This list is not exhaustive – we may ask you to go beyond your job description on occasion, and we hope the role will change and develop with you.

About you:
The ideal candidate for this role will likely have:
• a solid foundation in Python and JavaScript, ideally with proficiency in other programming languages
• experience designing and implementing ETL pipelines, specifically using Apache Airflow (Astronomer)
• hands-on experience with ETL frameworks, particularly dbt (data build tool)
• SQL skills and experience with various database management systems
• a good understanding of different database types, designs, and data modelling systems
• experience with cloud platforms like AWS and GCP, including services such as BigQuery, RDS, and Athena
• familiarity with Tableau and project management tools like monday.com and Notion
• knowledge of APIs from music Digital Service Providers (e.g., Spotify, Apple Music)
• previous experience at a record label, music distributor, or music publisher
• an understanding of the music industry
• excellent analytical, problem-solving, and communication skills
• a proactive approach to learning, excitement about problem-solving, and an open mind when approaching new projects
• strong accuracy and attention to detail
• good written and verbal communication skills, and the ability to explain complex ideas in non-technical language
• the ability to prioritise and manage their time independently

What we offer:
• A competitive salary (£50-60k pro rata)
• Participation in our Profit Share Scheme
• 20 days annual leave
• A benefits package to support your wellbeing, including access to local gyms and fitness classes, and subscriptions to health apps including Calm, Headspace and Strava
• A collection of enhanced family policies to support your family life
• The opportunity to attend a variety of live events
• Cycle to work scheme
• Season ticket loans
• A lively, collaborative office environment, and a flexible hybrid working policy
• Paid time off to volunteer with our local charitable initiatives

Applications
The closing date for applications is 21 November 2025, although we may close applications earlier. If you need more information before applying, email us at people@anjunabeats.com. We are committed to inclusion, and encourage applications from anyone with relevant experience and skills. If you require any adjustments throughout the application process to meet your needs and help you perform at your best, please let us know.
28/10/2025
Part time
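For context on the stack this listing names (Apache Airflow orchestrating dbt transformations), below is a minimal sketch of what such a daily pipeline can look like. It is an illustration only: the DAG id, task names, and paths are hypothetical, not details from the role.

```python
# Hypothetical sketch only: DAG id, task names, and paths are invented
# for illustration; they are not taken from the job listing.
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_streaming_stats(**context):
    """Placeholder extract step: a real pipeline would call a DSP or
    analytics API here and land raw data in the lake/warehouse."""
    print("extracting streaming stats for", context["ds"])


with DAG(
    dag_id="streaming_analytics_daily",  # hypothetical name
    schedule="@daily",
    start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract_streaming_stats",
        python_callable=extract_streaming_stats,
    )

    # dbt owns the SQL transform layer; project paths are placeholders.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )

    extract >> transform
```

In a setup like this, Airflow handles scheduling and retries while dbt handles transformations in the warehouse; the extract task would be replaced by real clients for the DSP and analytics sources the ad lists.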
Workflow/Business Process Developers (App Support, Debugging)
Hays
Gatwick (2 days per week in office) | £40-50k + benefits

Must haves:
• Must work 2 days per week in the Gatwick office.
• This role cannot offer visa sponsorship.

Roles available:
• 1x full-time permanent hire
• 1x full-time 12-month fixed-term contract hire

Your new company
This leading financial and consulting business is looking to bolster its engineering team with 2 Workflow Developers to support the core development team. You will be working in their state-of-the-art offices near Gatwick/Crawley and will be required to work 2 days per week in the office.

Your new role
This can be considered a hybrid business/technical role. You'll be working within an enterprise-level bespoke financial system to streamline efficiency, and the position will be varied: you may be coding in Python, setting up a test environment, or debugging within a workflow. This team generally supports the main, prestigious development team by diagnosing problems and implementing solutions without needing support. The system is complex and can take 6 months to get fully up to speed with. Previous experience working on financial products is required, and experience in application support or business process automation would be very transferable. You may have aspirations to move into a full-stack developer role, but this should be a medium-term ambition while you are fully committed to this role. It's a BAU role to meet demand: the software package is increasingly used and is scheduled to go international, so the roles are extremely stable, with years of work confirmed ahead.

What you'll need to succeed
We are looking for proficiency in software coding for the development of workflows: ideally Python, but we are open to strong Excel/VBA/SQL, automated testing, C#/Java, etc. Communication skills are paramount, as you are the interface between the development team and key stakeholders in the business. Evidence of technical problem-solving, thinking on your feet, and working with limited supervision is crucial. Experience working on financial/modular systems and on automation projects will be highly beneficial.

What you'll get in return
You'll work for an internationally renowned business, one of the top 5 organisations in its sector globally, and will be able to work hybrid with 2 days in the office. On top of this you'll get 26 days holiday plus bank holidays with the option to purchase more, access to a huge catalogue of online courses for professional development, an electric car scheme, and healthcare support through a virtual GP. There is parking on site, and the office is within walking distance of the train station.

What you need to do now
To find out more and to be considered for this position, please apply directly, or contact Max Wilcock, Senior Business Director, on .

At Hays Technology, we are shaping the future of recruitment. The rapid adoption of cloud, which is making customer interfaces more engaging and creating seamless engagement with businesses, means that from the foundation of your organisation up, software developers are critical to success. As the competition for talent grows, we're ready and waiting to help developers really make an impact on organisations, so talk to us today. We are Hays Technology.

Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers, which can be found at hays.co.uk.
13/12/2025
Full time
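The listing above stresses Python, debugging within workflows, and automated testing. As a hedged illustration of the kind of regression test that supports that work, here is a toy pytest example; the fee function and its rules are invented for demonstration and do not describe the client's system.

```python
# Hypothetical example: calculate_fee and its rules are invented for
# illustration; they do not describe the client's actual system.
from decimal import Decimal

import pytest


def calculate_fee(balance: Decimal, rate: Decimal) -> Decimal:
    """Toy workflow step: apply a percentage fee, rejecting bad input."""
    if balance < 0:
        raise ValueError("balance must be non-negative")
    return (balance * rate).quantize(Decimal("0.01"))


@pytest.mark.parametrize(
    "balance, rate, expected",
    [
        (Decimal("100.00"), Decimal("0.015"), Decimal("1.50")),
        (Decimal("0"), Decimal("0.015"), Decimal("0.00")),
    ],
)
def test_calculate_fee(balance, rate, expected):
    assert calculate_fee(balance, rate) == expected


def test_negative_balance_rejected():
    with pytest.raises(ValueError):
        calculate_fee(Decimal("-1"), Decimal("0.015"))
```

Pinning behaviour like this before touching a complex workflow is what makes diagnosing and fixing problems "without needing support" practical.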
Mid-Senior Data Engineer // REMOTE - UK
Akkodis
Senior Data Engineer - Make an Impact

About us
We're driving a major transformation in data and analytics, and we need a Senior Data Engineer who can do more than just build pipelines - someone who can demonstrate real impact, influence stakeholders, and help shape the future of our data platform.

Why this role exists
This is an opportunity for an experienced data engineer to grow into a Principal-level role within 2-3 years. You'll join a small, ambitious team with high visibility across the business, working on modernisation projects that will redefine how we use data.

What we're looking for
• Impact-driven mindset: we want someone who can clearly articulate the difference they've made in previous roles - not just list tasks. Show us how you improved processes, accelerated insights, or drove strategic decisions.
• Technical expertise (essential): strong experience with Microsoft Fabric or Databricks.
• Python proficiency: advanced coding skills for building robust, scalable solutions.
• Strong SQL and data modelling (relational and dimensional).
• Modern data engineering: proven ability to design and deliver scalable solutions using modern architectures (lakehouse, medallion, warehouse-first).
• Stakeholder engagement: ability to influence and collaborate with business leaders, translating technical solutions into measurable business outcomes.
• Growth potential: comfortable mentoring junior engineers and keen to develop into a leadership role.
• Mindset: curious, proactive, and passionate about turning data into tangible business value.

What you'll do
• Drive the evolution of our data platform using Microsoft Fabric and modern engineering practices.
• Build and optimise data pipelines for ingestion, transformation, and modelling.
• Support migration from legacy systems (e.g., Synapse) to modern architectures.
• Collaborate with stakeholders to ensure solutions deliver real business impact.
• Contribute to innovation projects, including AI integration and advanced analytics.

Why join us
• A small, supportive team with big ambitions.
• High visibility and the chance to make a real difference.
• The opportunity to shape modern data capabilities from the ground up.
• Flexible working (remote with quarterly meet-ups).

What success looks like
• You can evidence impact: cost savings, efficiency gains, improved decision-making, or accelerated delivery timelines.
• You're trusted by stakeholders and seen as a partner who drives change.
• You bring clarity and simplicity to complex data challenges.

Please note you MUST have Python and Microsoft Fabric experience. Please get in touch with Kamilla (contact details removed).

Modis International Ltd acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers in the UK. Modis Europe Ltd provides a variety of international solutions that connect clients to the best talent in the world. For all positions based in Switzerland, Modis Europe Ltd works with its licensed Swiss partner Accurity GmbH to ensure that candidate applications are handled in accordance with Swiss law. Both Modis International Ltd and Modis Europe Ltd are Equal Opportunities Employers. By applying for this role your details will be submitted to Modis International Ltd and/or Modis Europe Ltd. Our Candidate Privacy Information Statement, which explains how we will use your information, is available on the Modis website.
12/12/2025
Full time
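The ad references lakehouse/medallion architectures on Microsoft Fabric or Databricks. As a hedged illustration of what the bronze-to-silver step of a medallion layout can look like, here is a minimal PySpark sketch; table paths and columns are placeholders, and it assumes a runtime (such as Databricks or Fabric) where Delta Lake is available.

```python
# Illustrative sketch only: paths and columns are hypothetical. Shows the
# medallion idea named in the ad: raw "bronze" data cleaned and conformed
# into a "silver" table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw ingested events, stored as-is.
bronze = spark.read.json("/lake/bronze/orders/")  # placeholder path

# Silver: validated, deduplicated, correctly typed.
silver = (
    bronze
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Delta format assumed available on the platform.
silver.write.mode("overwrite").format("delta").save("/lake/silver/orders/")
```

A gold layer of business-level aggregates would typically sit on top of silver, feeding the dashboards and analytics the role describes.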
Senior Systems Developer
GBR Group Ltd
DETAILS
We are seeking a highly skilled Senior Systems Developer with extensive experience in data architecture, system design, and enterprise-level application development. The successful candidate will be responsible for constructing scalable systems, designing robust data models, and guiding the technical direction of backend and data-driven solutions across the organisation.

DUTIES & RESPONSIBILITIES
• Design, develop, and maintain sophisticated backend systems, APIs, and services.
• Lead architectural decisions to ensure systems are scalable, secure, and high-performing.
• Implement best practices for software engineering and cloud-native development.
• Collaborate with cross-functional teams (Data Engineering, DevOps, Product, QA) to conceptualise and deliver high-quality solutions.
• Define and implement enterprise data models, data flows, and database schemas.
• Architect and maintain data pipelines, data lakes, and data warehouses.
• Optimise data storage, retrieval, partitioning, and indexing strategies for performance and scalability.
• Ensure data quality, governance, lineage, and compliance with security standards.
• Develop integrations between internal and external systems using APIs, ETL tools, and messaging systems.
• Automate workflows, monitoring, and deployment processes.
• Drive platform modernisation initiatives and migrations to the cloud.
• Participate in code reviews, architecture meetings, and technical strategy discussions.
• Provide expert guidance on system performance, scalability, and troubleshooting.

SKILLS, EXPERIENCE & QUALIFICATIONS
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related discipline.
• Minimum 8 years of experience in systems development, with at least 2 years dedicated to data architecture.
• Demonstrated success in delivering enterprise-grade systems and data platforms.
• Strong programming expertise in Python, plus AI skills.
• Profound understanding of system architecture, design patterns, and microservices.
• Hands-on experience with cloud platforms such as AWS, Azure, or GCP.
• Expertise in SQL and NoSQL database technologies.
• Knowledge of ETL/ELT frameworks, data modelling, and data governance.
• Familiarity with containerisation and orchestration tools such as Docker and Kubernetes.
• Awareness of security frameworks, including authentication and authorisation protocols.
• Analytical and problem-solving capabilities.
• Excellent communication and documentation skills.
• Ability to work independently and lead cross-functional teams.
• Adaptability to rapidly evolving technological environments.

PREFERRED SKILLS
• Airflow, dbt, Spark, Kafka, RabbitMQ, Redis.
• Git, CI/CD pipelines.
• Experience with data warehousing solutions such as Snowflake, Redshift, BigQuery, or Synapse.
• Exposure to AI/ML workflows and model deployment.
• Experience with streaming systems and real-time architecture.
• Knowledge of event-driven and serverless architectural patterns.

Salary: £42,500 - £45,500 DOE
Type: Permanent
12/12/2025
Full time
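Among the duties above is optimising storage, partitioning, and indexing strategies. As one hedged illustration of what that can mean in practice, the sketch below creates a range-partitioned PostgreSQL table with a per-partition index, driven from Python via psycopg2; the schema, table names, and DSN are invented for the example.

```python
# Hypothetical sketch: schema and DSN are invented. Illustrates one of the
# storage strategies the ad mentions: declarative range partitioning plus
# a per-partition index in PostgreSQL.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS events (
    event_id   bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE IF NOT EXISTS events_2025_12
    PARTITION OF events
    FOR VALUES FROM ('2025-12-01') TO ('2026-01-01');

-- Index each partition on the common lookup key.
CREATE INDEX IF NOT EXISTS events_2025_12_event_id_idx
    ON events_2025_12 (event_id);
"""

with psycopg2.connect("dbname=appdb") as conn:  # placeholder DSN
    with conn.cursor() as cur:
        cur.execute(DDL)
```

Partitioning by time like this keeps queries and index maintenance scoped to recent data and makes dropping old data a cheap metadata operation; monthly partitions are one common choice, not the only one.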
AI Automation Engineer
McCabe & Barton
AI Automation Engineer | Hybrid 3 days a week in office | London | Permanent

A leading financial services client in London is seeking a talented AI Automation Engineer to join their team. Please see below for key details.

Role overview:
Analyse and optimise business processes for automation whilst designing, building, and deploying intelligent automation solutions using BPA platforms (Appian), machine learning, and generative AI to drive operational efficiency and innovation.

Key characteristics:
• Process Analysis & Optimisation - expert in analysing existing business processes through stakeholder interviews, process mapping, and workflow documentation to identify automation opportunities. Skilled in creating process flow diagrams, conducting time-and-motion studies, identifying bottlenecks and inefficiencies, and redesigning processes to be machine-readable and automation-ready.
• Python Development - strong proficiency in Python programming, including object-oriented design, asynchronous programming, error handling, and writing clean, maintainable code. Experience with key libraries, including Pandas and NumPy for data manipulation, requests and APIs for integrations, and asyncio for concurrent processing, and with building robust automation scripts with proper logging, testing (pytest), and documentation.
• AI & Machine Learning Frameworks - deep expertise in AI/ML frameworks including TensorFlow, PyTorch, scikit-learn, and Hugging Face Transformers. Experience building, training, and deploying machine learning models for classification, regression, clustering, and NLP tasks. Understanding of model evaluation metrics, hyperparameter tuning, feature engineering, and MLOps practices for production deployment.
• Generative AI & LLM Integration - proficient in working with large language models, including OpenAI GPT models, Anthropic Claude, Azure OpenAI, and open-source alternatives (Llama, Mistral). Experience with prompt engineering, fine-tuning, RAG (Retrieval-Augmented Generation) architectures, vector databases (Pinecone, ChromaDB, FAISS), embeddings, and building AI-powered automation solutions that leverage natural language understanding.
• Appian BPA Platform - strong experience with the Appian low-code platform, including process modelling, interface design, expression rules, integration objects, and data modelling. Skilled in building end-to-end business process applications, configuring workflows, implementing business rules, managing records, and integrating Appian with external systems via REST APIs, web services, and connected systems.
• API Development & Integration - proficient in designing and building RESTful APIs using FastAPI, Flask, or Django REST Framework for exposing AI models and automation services. Experience with API authentication (OAuth, JWT), rate limiting, error handling, API documentation (Swagger/OpenAPI), webhooks, and integrating disparate systems to create seamless automated workflows.
• Document Processing & OCR - experience implementing intelligent document processing solutions using OCR technologies (Tesseract, Azure AI Document Intelligence), natural language processing for information extraction, document classification, and building end-to-end pipelines for automated document ingestion, processing, and data extraction with validation rules.
• Robotic Process Automation (RPA) - knowledge of RPA concepts and tools (UiPath, Automation Anywhere, Power Automate) for automating repetitive tasks, screen scraping, and legacy system integration. Ability to assess when RPA, API integration, or AI solutions are most appropriate, and experience building hybrid automation solutions combining multiple technologies.
• Data Engineering & Pipeline Development - strong skills in building data pipelines for AI/automation solutions, including data extraction, transformation, and loading (ETL). Experience with SQL databases (SQL Server), data validation, cleansing workflows, scheduling tools (Azure Data Factory), and ensuring data quality for machine learning applications.
• Machine Learning Operations (MLOps) - experience deploying ML models to production environments using containerisation (Docker), orchestration (Kubernetes), and model versioning (MLflow, DVC); monitoring model performance and drift; A/B testing frameworks; and implementing CI/CD pipelines for automated model training and deployment. Understanding of model governance, explainability, and compliance requirements.
• Solution Architecture & Technical Design - ability to design end-to-end automation architectures that combine multiple technologies (BPA, ML, GenAI, APIs) into cohesive solutions. Experience creating technical design documents and system architecture diagrams, assessing build-vs-buy decisions, estimating effort and complexity, and presenting technical recommendations to both technical and non-technical stakeholders.
• Stakeholder Collaboration & Change Management - excellent communication skills for gathering requirements from business users, translating business needs into technical specifications, and demonstrating proofs of concept. Experience managing stakeholder expectations, conducting user acceptance testing, providing training on automated solutions, measuring automation ROI through KPIs (time saved, error reduction, cost savings), and driving adoption of intelligent automation across the organisation.

If you align with the key requirements, please apply with an updated CV.
12/12/2025
Full time
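Since the role centres on RAG architectures with vector databases such as FAISS, here is a minimal retrieval sketch using sentence-transformers embeddings and a FAISS index. It is an illustrative toy, not the client's implementation; the model name, documents, and query are placeholders.

```python
# Minimal RAG-retrieval sketch, not a production implementation.
# Assumes sentence-transformers and faiss-cpu are installed; documents
# and the model name are placeholders.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices over 10k require dual approval.",
    "Password resets are handled by the service desk.",
    "Expense claims must be filed within 30 days.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(docs, normalize_embeddings=True)

# Inner product on unit vectors is cosine similarity.
index = faiss.IndexFlatIP(emb.shape[1])
index.add(np.asarray(emb, dtype="float32"))

query = model.encode(["who approves large invoices?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
for rank, (i, s) in enumerate(zip(ids[0], scores[0]), 1):
    print(rank, round(float(s), 3), docs[i])
```

The retrieved chunks would then be inserted into the LLM prompt as grounding context, which is the "augmented generation" half of RAG.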
PostgreSQL SRE
Barclays Bank Plc, City of London
Join us as a PostgreSQL SRE at Barclays, where you'll effectively monitor and maintain the bank's critical technology infrastructure and resolve more complex technical issues, whilst minimising disruption to operations. In this role you will assume a key technical leadership role: you will shape the direction of our database administration, ensuring our technological approaches are innovative and aligned with the bank's business goals.

To be successful as a PostgreSQL SRE, you should have:
• Experience as a Database Administrator, with a focus on PostgreSQL and similar database technologies such as Oracle or MS SQL.
• A background in implementing and leading SRE practices across large organisations or complex teams.
• Hands-on experience with containers and Kubernetes.
• Experience with DevOps automation tools such as code versioning (git), JIRA, Ansible, and database CI/CD tools, and their implementation.

Some other highly valued skills may include:
• Expertise with scripting languages (e.g. PowerShell, Python, Bash) for automation/migration tasks.
• Experience working with data migration tools and software.
• Expertise in system configuration management tools such as Chef and Ansible for database server configurations.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role can be based in our London, Knutsford or Glasgow locations.

Purpose of the role
To apply software engineering techniques, automation, and best practices in incident response, to ensure the reliability, availability, and scalability of our systems, platforms, and technology.

Accountabilities
• Availability, performance, and scalability of systems and services, through proactive monitoring, maintenance, and capacity planning.
• Resolution of, analysis of, and response to system outages and disruptions, implementing measures to prevent similar incidents from recurring.
• Development of tools and scripts to automate operational processes, reducing manual workload, increasing efficiency, and improving system resilience.
• Monitoring and optimisation of system performance and resource usage, identifying and addressing bottlenecks, and implementing best practices for performance tuning.
• Collaboration with development teams to integrate best practices for reliability, scalability, and performance into the software development lifecycle, working closely with other teams to ensure smooth and efficient operations.
• Staying informed of industry technology trends and innovations, and actively contributing to the organisation's technology communities to foster a culture of technical excellence and growth.

Assistant Vice President expectations
To advise on and influence decision-making, contribute to policy development, and take responsibility for operational effectiveness. Collaborate closely with other functions and business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L - Listen and be authentic, E - Energise and inspire, A - Align across the enterprise, D - Develop others. An individual contributor will instead lead collaborative assignments and guide team members through structured assignments, identifying the need to include other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes; consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues; identify ways to mitigate risk, and develop new policies and procedures in support of the control and governance agenda; take ownership of managing risk and strengthening controls in relation to their work; perform work that is closely related to that of other areas, which requires an understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function; collaborate with other areas of work and business-aligned support areas to keep up to speed with business activity and business strategy; engage in complex analysis of data from multiple internal and external sources (such as procedures and practices in other areas, teams and companies) to solve problems creatively and effectively; communicate complex information, where 'complex' could mean sensitive information or information that is difficult to communicate because of its content or audience; and influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship - our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset - to Empower, Challenge and Drive - the operating manual for how we behave.
12/12/2025
Full time
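One accountability above is proactive monitoring of database systems. A small, hedged sketch of that idea for PostgreSQL follows: it lists queries running longer than a threshold by reading pg_stat_activity. The DSN and the five-minute threshold are placeholder assumptions; a production SRE setup would export this to a metrics and alerting stack rather than print it.

```python
# Hedged sketch of proactive PostgreSQL monitoring: flag long-running
# queries via pg_stat_activity. DSN and threshold are placeholders.
import psycopg2

QUERY = """
SELECT pid, now() - query_start AS runtime, state, left(query, 80)
FROM pg_stat_activity
WHERE state <> 'idle'
  AND query_start < now() - interval '5 minutes'
ORDER BY runtime DESC;
"""

with psycopg2.connect("dbname=prod user=monitor") as conn:  # placeholder DSN
    with conn.cursor() as cur:
        cur.execute(QUERY)
        for pid, runtime, state, query in cur.fetchall():
            print(f"pid={pid} runtime={runtime} state={state} query={query!r}")
```

Run on a schedule, a check like this catches bottlenecks before users report them, which is the "proactive" half of the SRE accountabilities listed above.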
Data & AI Senior Consultants - Dynamic AI Consulting firm
Staffworx Limited
Data & AI Senior Consultants
Location - We are flexible: onsite, hybrid or fully remote, depending on what works for you and the client; UK or Netherlands based.

What you will actually be doing
This is not a role where you build clever models that never get used. Your focus is on creating measurable value for clients using data science, machine learning and GenAI, in a consulting and advisory context. You will own work from the very beginning - asking questions like "What value are we trying to create here?" and "Is this the right problem to solve?" - through to "It is live, stakeholders are using it and we can see the impact in the numbers." You will work fairly independently, and you will also be someone that more junior team members look to for help and direction. A big part of the job is taking messy, ambiguous business and technical problems and turning them into clear, valuable solutions that make sense to the client. You will do this in a client-facing role: you will be in the room for key conversations, providing honest advice, managing expectations and helping clients make good decisions about where and how to use AI.

What your day to day might look like

Getting to the heart of the problem:
- Meeting with stakeholders who may not be clear on what they really need
- Using discovery sessions, workshops and structured questioning to uncover the real business problem
- Framing success in terms of value, for example higher revenue, lower cost, reduced risk, increased efficiency or better customer experience
- Translating business goals into a clear roadmap of data and AI work that everyone can understand
- Advising clients when AI is not the right solution and suggesting simpler or more cost-effective alternatives

Consulting and advisory work:
- Acting as a trusted advisor to product owners, heads of department and executives
- Helping clients prioritise use cases based on value, feasibility and risk
- Communicating trade-offs in a simple way, for example accuracy versus speed, innovation versus compliance, cost versus impact
- Preparing and delivering client presentations, proposals and updates that tell a clear story
- Supporting pre-sales activities where needed, such as scoping work, estimating effort and defining outcomes
- Managing client expectations, risks and dependencies so there are no surprises

Building things that actually work
Once the problem and value are clear, you will design and deliver production-ready ML and GenAI solutions. That includes:
- Designing and building data pipelines, batch or streaming, that support the desired outcomes
- Working with engineers and architects so your work fits cleanly into existing systems
- Making sure what you build is reliable in production and moves the needle on agreed metrics, not just offline benchmarks
- Explaining design decisions to both technical and non-technical stakeholders

GenAI work
You will work with GenAI in ways that are grounded in real use cases and business value:
- Building RAG systems that improve search, content discovery or productivity rather than existing for their own sake
- Implementing guardrails so models do not leak PII or generate harmful or off-brand content
- Defining and tracking the right metrics so you and the client can see whether a GenAI solution is useful and cost-effective
- Fine-tuning and optimising models so they perform well for the use case and budget
- Designing agentic workflows where they genuinely improve outcomes rather than add complexity
- Helping clients understand what GenAI can and cannot do in practice

Keeping it running
You will set up the foundations that protect value over time:
- Experiment tracking and model versioning so you know what works and can roll back safely
- CI/CD pipelines for ML so improvements reach users quickly and reliably
- Monitoring and alerting for models and data so you can catch issues before they damage trust or results
- Communicating operational risks and mitigations to non-technical stakeholders in plain language

Security, quality and compliance
You will help make sure:
- Data is accurate, traceable and well managed so decisions are sound
- Sensitive data is handled correctly, protecting users and the business
- Regulatory and compliance requirements are met, avoiding costly mistakes
- Clients understand the risk profile of AI solutions and the controls in place

Working with people
You will be a bridge between technical and non-technical teams, inside our organisation and on the client side. That means:
- Explaining complex ML and GenAI ideas in plain language, always tied to business outcomes
- Working closely with product managers, engineers and business stakeholders to prioritise work that matters
- Facilitating workshops, playback sessions and show-and-tells that build buy-in and understanding
- Coaching and supporting junior colleagues so the whole team can deliver more value
- Representing the company professionally in client meetings and at industry events

What we are looking for

Experience:
- Around 3 to 6 years of experience shipping ML or GenAI solutions into production
- A track record of seeing projects through from discovery to delivery, with clear impact
- Experience working directly with stakeholders or clients in a consulting, advisory or product-facing role

Education:
- A Bachelor's or Master's degree in a quantitative field such as Computer Science, Data Science, Statistics, Mathematics or Engineering, or equivalent experience that shows you can deliver results

Technical skills

Core skills:
- Strong Python and SQL, with clean, maintainable code
- Solid understanding of ML fundamentals, for example feature engineering, model selection, handling imbalanced data, choosing and interpreting metrics
- Experience with PyTorch or TensorFlow

GenAI specific:
- Hands-on experience with LLM APIs or open-source models such as Llama or Mistral
- Experience building RAG systems with vector databases such as FAISS, Pinecone or Weaviate
- Ability to evaluate and improve prompts and retrieval quality using clear metrics
- Understanding of safety practices such as PII redaction and content filtering
- Exposure to agentic frameworks

Cloud and infrastructure:
- Comfortable working in at least one major cloud provider: AWS, GCP or Azure
- Familiar with Docker and CI/CD pipelines
- Experience with managed ML platforms such as SageMaker, Vertex AI or Azure ML

Data engineering and MLOps:
- Experience with data warehouses such as Snowflake, BigQuery or Redshift
- Workflow orchestration using tools like Airflow or Dagster
- Experience with MLOps tools such as MLflow, Weights & Biases or similar
- Awareness of data and model drift, and how to monitor and respond to it before it erodes value

Soft skills - the things that really matter:
- You are comfortable in client-facing settings and can build trust quickly
- You can talk with anyone from a CEO to a new data analyst, and always bring the conversation back to business value
- You can take a vague, messy business problem and turn it into a clear technical plan that links to outcomes and metrics
- You are happy to push back and challenge assumptions respectfully when it is in the client's best interest
- You like helping other people grow and are happy to mentor junior colleagues
- You communicate clearly in writing and in person

Nice to have, not required
Do not rule yourself out if you do not have these; they are a bonus, not a checklist.
- Experience with Delta Lake, Iceberg, Spark, Databricks or Palantir
- Experience optimising LLM serving with tools such as vLLM, TGI or TensorRT-LLM
- Search and ranking experience, for example Elasticsearch or rerankers
- Background in time-series forecasting, causal inference, recommender systems or optimisation
- Experience managing cloud costs and IAM so value is not lost to waste
- Ability to work in other languages where needed, for example Java, Scala, Go or bash
- Experience with BI tools such as Looker or Tableau
- Prior consulting experience or leading client projects end to end
- Contributions to open source, conference talks or published papers that show your ability to share ideas and influence the wider community

Got a background that fits and you're up for a new challenge? Send over your latest CV, expectations and availability. Staffworx Limited is a UK-based recruitment consultancy partnering with leading global brands across digital, AI, software, and business consulting. Let's talk about what you could add to the mix.
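At its core, the RAG work this role describes means embedding documents into a vector index and retrieving the most relevant chunks to ground an LLM prompt. By way of illustration only - a minimal sketch using FAISS (one of the vector stores named above), with random vectors standing in for real embeddings; the corpus, dimensions and document set are invented for the example:

```python
import numpy as np
import faiss  # vector index library named in the job spec

DIM = 384  # assumed embedding width, e.g. a small sentence encoder
docs = ["refund policy", "delivery times", "warranty terms"]

# Stand-in embeddings; a real system would call an embedding model here.
rng = np.random.default_rng(0)
doc_vecs = rng.standard_normal((len(docs), DIM)).astype("float32")

index = faiss.IndexFlatL2(DIM)  # exact L2 search; fine for small corpora
index.add(doc_vecs)

query_vec = rng.standard_normal((1, DIM)).astype("float32")
distances, ids = index.search(query_vec, 2)  # top-2 nearest chunks

for rank, doc_id in enumerate(ids[0]):
    # The retrieved chunks would be stuffed into the LLM prompt as context.
    print(rank, docs[doc_id], float(distances[0][rank]))
```

In production the retrieval quality metrics mentioned above (recall, precision of retrieved chunks) are what turn this from a demo into something the client can trust.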
11/12/2025
Full time
Data Integration Engineer
Halliburton Abingdon, Oxfordshire
We are looking for the right people - people who want to innovate, achieve, grow and lead. We attract and retain the best talent by investing in our employees and empowering them to develop themselves and their careers. Experience the challenges, rewards and opportunity of working for one of the world's largest providers of products and services to the global energy industry.

Job Duties
We are seeking a skilled and proactive Data Integration Engineer to join the Neftex Technical Services team. Reporting to the Team Lead, the Data Integration Engineer will be responsible for designing, building, and maintaining robust data pipelines and integration frameworks that connect diverse systems, including LLMs and a proprietary data integration solution. Successful candidates will be evidently enthusiastic and motivated people who we can train up in our processes, ultimately playing a key role in quality assurance initiatives across different stakeholder groups. This role is based in our Abingdon, Oxfordshire office.

Key Responsibilities:
- Design and implement scalable data integration solutions using ETL/ELT tools and APIs
- Develop and maintain data pipelines that include Large Language Models (LLMs)
- Build solutions that span cloud and on-premises environments
- Collaborate with data architects, analysts, and business stakeholders to understand data requirements
- Integrate data from various sources including databases, SaaS platforms, APIs, and flat files
- Monitor and optimize data flows for performance, reliability, and cost-efficiency
- Ensure data quality, consistency, and governance across integrated systems
- Automate data workflows and support real-time data streaming
- Document integration processes and maintain technical specifications

Qualifications & Experience:
- 3+ years' experience working with databases and related tools
- Strong proficiency with data virtualisation platforms and tools such as Teiid or similar
- Solid understanding of SQL, relational databases, and data modelling
- Experience with cloud platforms (AWS, Azure) and cloud-native data services
- Familiarity with RESTful APIs, JSON, XML, OData, and message queues (Kafka)
- Knowledge of data governance, security, and compliance best practices

Preferred Skills:
- Experience with cloud-based database solutions
- Understanding of data lifecycle management and SOC 2 security standards
- Familiarity with geoscience disciplines, geospatial data and GIS tools (e.g., ArcGIS, QGIS) is advantageous
- Scripting and automation (e.g., PowerShell, Python, Java)
- Experience with GitLab
- Knowledge of the Spotfire data visualisation platform or alternative dashboard solutions
- Awareness of Agile delivery methodologies

Halliburton is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.

Location: 97 Jubilee Avenue, Milton Park, Abingdon, Oxfordshire, OX14 4RW, United Kingdom

Job Details
Requisition Number: 204269
Experience Level: Entry-Level
Job Family: Engineering/Science/Technology
Product Service Line: division
Full Time / Part Time: Full Time
Additional Locations for this position:
Compensation Information: Compensation is competitive and commensurate with experience.
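The core integration loop above - pull from a REST source, apply a quality gate, load into a relational target - can be sketched very simply. A minimal illustration, where the endpoint URL, table schema and field names are all invented for the example and sqlite3 merely stands in for whatever database the team actually runs:

```python
import sqlite3

import requests  # any HTTP client would do

API_URL = "https://api.example.com/v1/wells"  # hypothetical endpoint


def extract() -> list:
    """Pull records from a REST source as JSON."""
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()


def load(records: list) -> None:
    """Load into a relational target; sqlite3 stands in for the real DB."""
    con = sqlite3.connect("integration.db")
    con.execute("CREATE TABLE IF NOT EXISTS wells (id TEXT PRIMARY KEY, name TEXT)")
    for rec in records:
        if not rec.get("id"):  # basic data-quality gate: reject keyless rows
            continue
        con.execute(
            "INSERT OR REPLACE INTO wells (id, name) VALUES (?, ?)",
            (rec["id"], rec.get("name", "")),
        )
    con.commit()
    con.close()


if __name__ == "__main__":
    load(extract())
```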
10/12/2025
Full time
Head Resourcing
Data Engineer
Head Resourcing
Mid-Level Data Engineer (Azure / Databricks) - NO VISA REQUIREMENTS
Location: Glasgow (3+ days)
Reports to: Head of IT

My client is undergoing a major transformation of their entire data landscape, migrating from legacy systems and manual reporting into a modern Azure + Databricks Lakehouse. They are building a secure, automated, enterprise-grade platform powered by Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory. They are looking for a Mid-Level Data Engineer to help deliver high-quality pipelines and curated datasets used across Finance, Operations, Sales, Customer Care and Logistics.

What You'll Do

Lakehouse Engineering (Azure + Databricks):
- Build and maintain scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark and Spark SQL.
- Work within a Medallion architecture (Bronze → Silver → Gold) to deliver reliable, high-quality datasets.
- Ingest data from multiple sources including ChargeBee, legacy operational files, SharePoint, SFTP, SQL, REST and GraphQL APIs using Azure Data Factory and metadata-driven patterns.
- Apply data quality and validation rules using Lakeflow Declarative Pipelines expectations.

Curated Layers & Data Modelling:
- Develop clean and conforming Silver & Gold layers aligned to enterprise subject areas.
- Contribute to dimensional modelling (star schemas), harmonisation logic, SCDs and business marts powering Power BI datasets.
- Apply governance, lineage and permissioning through Unity Catalog.

Orchestration & Observability:
- Use Lakeflow Workflows and ADF to orchestrate and optimise ingestion, transformation and scheduled jobs.
- Help implement monitoring, alerting, SLAs/SLIs and runbooks to support production reliability.
- Assist in performance tuning and cost optimisation.

DevOps & Platform Engineering:
- Contribute to CI/CD pipelines in Azure DevOps to automate deployment of notebooks, Lakeflow Declarative Pipelines, SQL models and ADF assets.
- Support secure deployment patterns using private endpoints, managed identities and Key Vault.
- Participate in code reviews and help improve engineering practices.

Collaboration & Delivery:
- Work with BI and Analytics teams to deliver curated datasets that power dashboards across the business.
- Contribute to architectural discussions and the ongoing data platform roadmap.

Tech You'll Use:
- Databricks: Lakeflow Declarative Pipelines, Lakeflow Workflows, Unity Catalog, Delta Lake
- Azure: ADLS Gen2, Data Factory, Event Hubs (optional), Key Vault, private endpoints
- Languages: PySpark, Spark SQL, Python, Git
- DevOps: Azure DevOps Repos & Pipelines, CI/CD
- Analytics: Power BI, Fabric

What We're Looking For

Experience:
- Commercial, proven data engineering experience.
- Hands-on experience delivering solutions on Azure + Databricks.
- Strong PySpark and Spark SQL skills within distributed compute environments.
- Experience working in a Lakehouse/Medallion architecture with Delta Lake.
- Understanding of dimensional modelling (Kimball), including SCD Type 1/2.
- Exposure to operational concepts such as monitoring, retries, idempotency and backfills.

Mindset:
- Keen to grow within a modern Azure Data Platform environment.
- Comfortable with Git, CI/CD and modern engineering workflows.
- Able to communicate technical concepts clearly to non-technical stakeholders.
- Quality-driven, collaborative and proactive.

Nice to Have:
- Databricks Certified Data Engineer Associate.
- Experience with streaming ingestion (Auto Loader, event streams, watermarking).
- Subscription/entitlement modelling (e.g., ChargeBee).
- Unity Catalog advanced security (RLS, PII governance).
- Terraform or Bicep for IaC.
- Fabric Semantic Models or Direct Lake optimisation experience.

Why Join?
- Opportunity to shape and build a modern enterprise Lakehouse platform.
- Hands-on work with Azure, Databricks and leading-edge engineering practices.
- Real progression opportunities within a growing data function.
- Direct impact across multiple business domains.
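For anyone unfamiliar with the Medallion pattern referenced above: data lands raw in Bronze, is typed, de-duplicated and quality-gated into Silver, then modelled into Gold for reporting. A rough sketch of a Bronze-to-Silver hop, using plain open-source PySpark writing parquet as a stand-in for the Delta/Lakeflow tooling this role actually uses; the lake paths and column names are assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw landed data, schema-on-read (path is a placeholder).
bronze = spark.read.json("/lake/bronze/subscriptions/")

# Silver: typed, de-duplicated, quality-gated records.
silver = (
    bronze
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["subscription_id", "event_ts"])
    .filter(F.col("subscription_id").isNotNull())  # expectation-style rule
)

# Parquet here for portability; a Databricks pipeline would write Delta
# and register the table in Unity Catalog instead.
silver.write.mode("overwrite").parquet("/lake/silver/subscriptions/")
```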
10/12/2025
Full time
Tenth Revolution Group
Senior Developer - £400PD - Remote
Tenth Revolution Group City, London
Senior Developer - £400 per day - Remote

We are seeking a skilled and collaborative Senior Developer to join our engineering team. In this role, you will contribute to the design, development, and maintenance of high-quality backend services and infrastructure platforms. You will work across a modern technology stack and help shape best practices for delivery, automation, and open ways of working.

Key Responsibilities

Backend Development:
- Develop and maintain backend services, with Python as the preferred language for new systems.
- Provide support for services built in Java and .NET, ensuring seamless integration and continued reliability.
- Enhance and maintain existing solutions using Oracle PL/SQL.

Database & Data Services:
- Work with relational databases including Oracle, SQL Server, and Postgres.
- Optimise data models, queries, and stored procedures to improve performance and maintainability.

Infrastructure & DevOps:
- Build, deploy, and manage applications using cloud platforms such as AWS and Azure.
- Use modern DevOps tools and practices - including GitHub, Azure DevOps, Docker, Kubernetes, and Linux-based systems - to deliver scalable, reliable services.
- Implement and maintain CI/CD pipelines with a strong focus on automation, continuous deployment, testing, and monitoring.

Quality & Testing:
- Develop software using Test-Driven Development (TDD), writing automated tests before implementing code.
- Ensure high quality across the stack through continuous testing, observability, and feedback loops.

Ways of Working:
- Champion open and transparent engineering practices, including maintaining visible codebases, documentation, design histories, and roadmaps.
- Collaborate closely with cross-functional teams and contribute to a culture of knowledge sharing and continuous improvement.

What We're Looking For
- Strong experience developing backend services with Python, plus working knowledge of Java and/or .NET.
- Proficiency in SQL and experience with Oracle PL/SQL or similar technologies.
- Hands-on experience with cloud platforms (AWS, Azure), containerisation, DevOps tooling, and CI/CD automation.
- Familiarity with TDD, automated testing frameworks, and modern monitoring/observability tools.
- A commitment to open, transparent, and collaborative engineering practices.

To apply for this role please submit your CV or contact Dillon Blackburn on (phone number removed) or at (url removed). Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
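"Writing automated tests before implementing code", as this role puts it, looks roughly like the sketch below: the tests are written first and fail (red), then the simplest implementation is added until they pass (green). Everything here - the function, its rules and the values - is invented purely to illustrate the workflow:

```python
# TDD flow: the two tests below were written first (red), then
# apply_discount was implemented until they passed (green).
import pytest


def apply_discount(price: float, rate: float) -> float:
    """Simplest implementation that satisfies the tests below."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1.0 - rate), 2)


def test_discount_is_applied():
    assert apply_discount(100.0, 0.2) == 80.0


def test_negative_discount_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, -0.1)

# Run with: pytest this_file.py
```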
10/12/2025
Contractor
Tenth Revolution Group
Senior Developer
Tenth Revolution Group
Senior Software Developers - 6-Month Contract - Inside IR35
£400 per day | Remote/UK-Based | NHS Experience Essential | BPSS Eligible

We're seeking three Senior Software Developers to join a high-performing team delivering modern digital services across the NHS. You will help build, support, and evolve critical platforms using modern engineering practices, test-driven development, and open, transparent delivery. This is a 6-month Inside IR35 contract, with strong potential for extension. Candidates must be UK-based, BPSS eligible, and have previous NHS experience.

Role Overview
You will work within a collaborative, multidisciplinary environment to design, build, and maintain high-quality digital services. The role involves contributing to both new and existing systems, supporting multiple languages and platforms, and applying strong DevOps, TDD, and modern engineering practices throughout.

Key Responsibilities
- Develop new services using Python while supporting existing components written in Java and .NET
- Work with Oracle PL/SQL to support legacy and operational systems
- Engineer solutions across cloud and infrastructure environments including AWS, Azure, Linux, SQL Server, Postgres, Docker, and Kubernetes
- Implement CI/CD pipelines using GitHub or Azure DevOps
- Deliver high-quality software using Test-Driven Development (TDD)
- Contribute to open, transparent delivery with shared documentation, codebases, design histories, and roadmaps
- Collaborate with engineers, designers, and product teams to deliver secure, scalable, user-centred services

Required Technical Skills
- Strong experience building digital services with Python
- Familiarity with Java and .NET for supporting existing systems
- Skilled in Oracle PL/SQL
- Experience with AWS and Azure
- Knowledge of GitHub, Azure DevOps, Docker, Kubernetes, and Linux
- Experience with SQL Server and Postgres
- Strong TDD practice and automated testing pipelines

Non-Technical Skills
- Excellent communication and collaboration
- Comfortable working openly and transparently
- Strong problem-solving abilities
- Adaptable, proactive, and able to work in complex environments
- Positive, delivery-focused attitude

Additional Requirements
- Must be UK-based
- BPSS eligible
- Previous NHS experience is essential

To discuss this role further please submit your CV or contact Brandon Forbes via email at (url removed). Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
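The "new Python services alongside Oracle PL/SQL legacy systems" combination above usually means calling existing stored procedures from Python. A minimal sketch using the python-oracledb driver (the maintained successor to cx_Oracle); the connection details and procedure name are placeholders, not anything specified by this posting:

```python
import oracledb  # python-oracledb, successor to cx_Oracle

# Connection details and the procedure name are invented for the example.
conn = oracledb.connect(
    user="app", password="secret", dsn="dbhost.example.com/ORCLPDB1"
)

with conn.cursor() as cur:
    # Invoke an existing PL/SQL stored procedure from Python - the typical
    # bridge between new Python services and legacy database logic.
    cur.callproc("update_patient_flags", [12345, "REVIEWED"])

conn.commit()
conn.close()
```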
09/12/2025
Contractor
Akkodis
Data Engineer - SC Cleared. Stevenage/Hybrid £80k
Akkodis Stevenage, Hertfordshire
Data Engineer (Strong SQL, ETL, Python) - SC Cleared or Eligible
Stevenage (Hybrid) - 2-3 days onsite
Up to £80,000
High-impact programme - revolutionary platform

I am looking for a Security-Cleared Data Engineer to take the reins on a range of highly ambitious data migration projects supporting truly high-impact programmes across the UK. This is a unique opportunity to work on cutting-edge cloud, software, and infrastructure projects that shape the future of technology in both the public and private sectors. You'll be part of a collaborative team delivering scalable, next-generation digital ecosystems.

What you'll be doing
As a Data Engineer within our Centre of Excellence, you will play a critical role in delivering complex data migration and data engineering projects for our clients. This position focuses on the planning, execution, and optimisation of data migrations - from legacy platforms to modern cloud-based environments - ensuring accuracy, consistency, security, and continuity throughout the process.

Key Responsibilities
- Analyse existing data structures and understand business and technical requirements for migration initiatives.
- Design and deliver robust data migration strategies and ETL solutions.
- Develop automated data extraction, transformation, and loading (ETL) processes using industry-standard tools and scripts.
- Work closely with stakeholders to ensure seamless migration and minimal business disruption.
- Plan, coordinate, and execute data migration projects within defined timelines.
- Ensure the highest standards of data quality, integrity, and security.
- Troubleshoot and resolve data-related issues promptly.
- Collaborate with wider engineering and architecture teams to ensure migrations align with organisational and regulatory standards.

Relevant experience
- Expert-level SQL skills for complex query development, performance tuning, indexing, and data transformation across on-premise databases and AWS cloud environments.
- Strong hands-on experience with ETL processes and tools (Talend, Informatica, Matillion, Pentaho, MuleSoft, Boomi) or scripting using Python, PySpark, and SQL.
- Solid understanding of data warehousing and modelling techniques (Star Schema, Snowflake Schema).
- Familiarity with security frameworks such as GDPR, HIPAA, ISO 27001, NIST, SOX, and PII handling, as well as AWS security features including IAM, KMS, and RBAC.
- Ability to identify and resolve data quality issues across migration projects.
- Strong track record of delivering end-to-end data migration projects and working effectively with both technical and non-technical stakeholders.

Due to the nature of the work, SC Clearance is required, or candidates must be eligible to obtain it. Salary up to £80,000 plus wider benefits - contact me today for further insight on (phone number removed) or (url removed).

Modis International Ltd acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers in the UK. Modis Europe Ltd provide a variety of international solutions that connect clients to the best talent in the world. For all positions based in Switzerland, Modis Europe Ltd works with its licensed Swiss partner Accurity GmbH to ensure that candidate applications are handled in accordance with Swiss law. Both Modis International Ltd and Modis Europe Ltd are Equal Opportunities Employers. By applying for this role your details will be submitted to Modis International Ltd and/or Modis Europe Ltd. Our Candidate Privacy Information Statement, which explains how we will use your information, is available on the Modis website.
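The "accuracy, consistency and continuity" demands above come down to reconciliation: after every migration run you verify that the target matches the source. A toy sketch in pandas, where the tables, key and amount column are all invented for illustration:

```python
import pandas as pd

# Toy source/target extracts; in a real migration these would come from
# the legacy database and the new cloud target (names are placeholders).
source = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.5]})
target = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.5]})


def reconcile(src: pd.DataFrame, tgt: pd.DataFrame, key: str) -> dict:
    """Basic post-migration checks: row counts, key coverage, value drift."""
    missing = set(src[key]) - set(tgt[key])
    merged = src.merge(tgt, on=key, suffixes=("_src", "_tgt"))
    mismatched = int((merged["amount_src"] != merged["amount_tgt"]).sum())
    return {
        "source_rows": len(src),
        "target_rows": len(tgt),
        "missing_keys": sorted(missing),
        "amount_mismatches": mismatched,
    }


print(reconcile(source, target, "id"))  # all-clear on this toy data
```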
09/12/2025
Full time
SF Recruitment
Data Engineer
SF Recruitment
We're supporting a large-scale data programme that requires an experienced Data Engineer to help transform complex, unstructured information into clean, reliable datasets suitable for analysis and reporting. The project involves working with sizeable JSON files and other mixed-format sources, standardising them, and preparing them for downstream use across several internal systems. You'll be responsible for shaping the structure, improving data quality, and ensuring outputs can be easily consumed by non-technical teams.

What You'll Work On
- Converting varied and unstructured data (including JSON) into well-defined relational formats.
- Designing data models that ensure consistency and interoperability across tools.
- Preparing datasets for use in spreadsheets, reporting environments, and CRM systems.
- Resolving data quality issues: type mismatches, missing values, integrity checks, and formatting problems.
- Building repeatable processes and validation steps to support accurate, sustainable reporting.
- Partnering with operational and business teams to understand requirements and ensure outputs are fit for purpose.

Skills & Experience Needed
- Strong SQL abilities and experience designing relational schemas.
- Hands-on Python skills (preferably pandas) for data wrangling and transformation.
- Solid understanding of data modelling principles and best practices.
- Good working knowledge of Excel and awareness of CRM/enterprise data structures.
- Experience with business intelligence/reporting tools (Power BI, Tableau, etc.) is beneficial.
- Able to interpret complex datasets, identify patterns/issues, and communicate findings clearly to non-technical users.

Nice to Have
- Background in sensitive or regulated data environments.
- Understanding of data protection considerations.
- Exposure to ETL or data pipeline development.
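The central task here - nested JSON into well-defined relational rows - is what pandas' json_normalize is for. A minimal sketch; the record structure and field names are invented purely to show the flattening:

```python
import pandas as pd

# Nested records of the kind described above; the schema is invented.
raw = [
    {"id": 1, "customer": {"name": "Acme", "region": "North"},
     "orders": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]},
    {"id": 2, "customer": {"name": "Globex", "region": "South"},
     "orders": [{"sku": "A1", "qty": 5}]},
]

# One flat, relational row per order line, with parent fields repeated -
# ready for SQL, Excel or a CRM import.
orders = pd.json_normalize(
    raw,
    record_path="orders",
    meta=["id", ["customer", "name"], ["customer", "region"]],
)
print(orders)
#   sku  qty  id customer.name customer.region
# 0  A1    2   1          Acme           North
# 1  B7    1   1          Acme           North
# 2  A1    5   2        Globex           South
```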
09/12/2025
Seasonal
Fruition Group
Senior Backend Developer
Fruition Group
Senior Backend Developer
Location: London - 1x a month
Salary: Up to £95,000 (D.O.E) + benefits

Fruition Group is working with a leading Insurtech unicorn on a mission to transform the insurance industry. Their products have already generated millions in revenue, and now they're investing heavily in new innovations that will shape the sector for years to come. This is an exciting opportunity for a motivated Senior Engineer who wants to grow in a high-performing, forward-thinking business.

As a Senior Backend Engineer, you'll play a central role in building and scaling the systems that power their insurance platform. You'll focus on developing Python-based cloud microservices (FastAPI preferred), contributing to the architecture of resilient, high-performance systems, and collaborating with a cross-functional team to deliver features at pace. This is a hands-on engineering position with plenty of scope to influence technical direction, improve processes, and grow your expertise in distributed systems.

What will I be doing?
- Design, develop, and maintain Python microservices (FastAPI) running in production.
- Take ownership of features end-to-end, from design to deployment and monitoring.
- Write high-quality, testable code with a focus on scalability and resilience.
- Collaborate closely with engineers, product managers, and designers to deliver quickly.
- Contribute to evolving architecture, tooling, and CI/CD practices.

What experience do I need?
- Strong background in Python development (FastAPI, Flask, or Django).
- Good knowledge of microservices, APIs, and cloud-based infrastructure.
- Experience with SQL and NoSQL databases, Git, and CI/CD workflows.
- Solid engineering fundamentals - data structures, OOP, debugging, and testing.
- Collaborative, curious, and eager to experiment with AI tools to improve productivity.

If this position sounds of interest, please apply and a member of the team will be in touch to discuss further. We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation, or age.
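For context on the stack named above, a FastAPI microservice is small: typed request models via Pydantic, plus route handlers. A minimal sketch - the service name, endpoints and pricing rule are invented, not the client's actual API:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="quote-service")  # illustrative service name


class QuoteRequest(BaseModel):
    vehicle_value: float
    driver_age: int


@app.get("/health")
def health() -> dict:
    # Liveness-probe endpoint for the platform's monitoring.
    return {"status": "ok"}


@app.post("/quotes")
def create_quote(req: QuoteRequest) -> dict:
    # Toy pricing rule standing in for real underwriting logic.
    premium = req.vehicle_value * 0.03 + (5.0 if req.driver_age < 25 else 0.0)
    return {"premium": round(premium, 2)}

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)
```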
09/12/2025
Full time
Fruition Group
Lead Backend Developer
Fruition Group
Lead Backend Developer
Location: London - 1x a month
Salary: Up to £120,000 (D.O.E) + benefits

Fruition Group are partnering with a high-growth Insurtech unicorn that's scaling its engineering function. This is a unique chance to work across both proven, revenue-generating products and greenfield initiatives that are reshaping the future of insurance. It's an ideal role for a driven Lead Engineer who thrives in ambitious environments and wants to make a tangible impact.

As a Lead Backend Engineer, you'll take ownership of designing and scaling cloud-native backend systems. You'll work primarily with Python (FastAPI) and play a key role in shaping the architecture of microservices that support millions of users. Beyond hands-on development, you'll provide technical leadership, mentor team members, and influence strategic engineering decisions. This is a high-impact role where your work directly drives product growth, system resilience, and platform evolution.

What will I be doing?
- Design, develop, and optimise scalable backend services in Python, leveraging FastAPI.
- Lead architectural discussions with a focus on performance, scalability, and reliability.
- Deliver complex features end-to-end, from design through deployment and monitoring.
- Provide mentorship through code reviews, technical guidance, and best practices.
- Collaborate with Product, Design, and Engineering teams to deliver at pace.
- Continuously raise the bar for engineering standards, code quality, and delivery.
- Shape the long-term direction of the platform's service-oriented architecture.
- Champion the use of AI and automation to enhance productivity across the team.

What experience do I need?
- Strong background building and scaling Python-based systems (FastAPI, Flask, or Django REST).
- Proven leadership experience in a development environment.
- Solid expertise in microservices, APIs, messaging patterns, and distributed systems.
- Proficient with cloud platforms (AWS, GCP, Azure) and containerisation (Docker; Kubernetes preferred).
- Strong engineering fundamentals - testing, clean code, performance tuning, and algorithms.
- Experience with relational and non-relational databases (PostgreSQL, MongoDB).
- Comfortable working in agile, fast-moving environments with high ownership.
- Curious about new technology, with a growth mindset and interest in AI-driven tools.

If this role sounds of interest, please apply and a member of the team will be in touch to discuss your application. We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation, or age.
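The distributed-systems resilience this role emphasises often reduces to a handful of patterns, retries with exponential backoff and jitter being the most common. A generic illustration only, with a made-up flaky dependency standing in for a real downstream service:

```python
import random
import time


def call_with_backoff(fn, retries: int = 4, base_delay: float = 0.2):
    """Retry a flaky downstream call with exponential backoff and jitter -
    a common resilience pattern in distributed backends."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise
            # Back off exponentially, with jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))


# Toy downstream dependency that fails twice before succeeding.
state = {"calls": 0}


def flaky_service():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient network blip")
    return {"ok": True}


print(call_with_backoff(flaky_service))  # {'ok': True} after two retries
```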
09/12/2025
Full time
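The lead role above stresses testable, end-to-end feature ownership. A hedged sketch of how such a FastAPI service might be exercised at the API level, reusing the hypothetical quote-service from the earlier listing (the quote_service module name is an assumption of that sketch):

```python
# Sketch of an API-level test for the hypothetical quote-service above,
# using FastAPI's bundled TestClient -- illustrative, not the client's suite.
from fastapi.testclient import TestClient

from quote_service import app  # assumed module name from the earlier sketch

client = TestClient(app)

def test_known_product_returns_premium():
    resp = client.post("/quotes", json={"product": "home", "sum_insured": 250000})
    assert resp.status_code == 200
    assert resp.json()["premium"] == 250000 * 0.002

def test_unknown_product_is_rejected():
    resp = client.post("/quotes", json={"product": "pet", "sum_insured": 1000})
    assert resp.status_code == 404
```

Both tests run under pytest with no server process, since TestClient drives the app in-process.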
Tenth Revolution Group
Senior Data Engineer
Tenth Revolution Group Havant, Hampshire
Senior Data Engineer Salary: Up to £70,000 I am working with a forward-thinking organisation that is modernising its data platform to support scalable analytics and business intelligence across the Group. With a strong focus on Microsoft technologies and cloud-first architecture, they are looking to bring on a Data Engineer to help design and deliver impactful data solutions using Azure. This is a hands-on role where you will work across the full data stack, collaborating with architects, analysts, and stakeholders to build a future-ready platform that drives insight and decision-making. In this role, you will be responsible for: Building and managing data pipelines using Azure Data Factory and related services. Building and maintaining data lakes, data warehouses, and ETL/ELT processes. Designing scalable data solutions and models for reporting in Power BI. Supporting data migration from legacy systems into the new platform. Ensuring data models are optimised for performance and reusability. To be successful in this role, you will have: Hands-on experience creating data pipelines using Azure services such as Synapse and Data Factory. Reporting experience with Power BI. Strong understanding of SQL, Python, or PySpark. Knowledge of the Azure data platform including Azure Data Lake Storage, Azure SQL Data Warehouse, or Azure Databricks. Some of the package/role details include: Salary up to £70,000 Hybrid working model - on-site twice per week in Portsmouth Pension scheme and private healthcare options Opportunities for training and development This is just a brief overview of the role. For the full details, simply apply with your CV and I'll be in touch to discuss it further.
09/12/2025
Full time
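The pipeline work above is described at a high level; as a rough illustration, one transform step might look like the following PySpark sketch as it could run on Azure Databricks - the abfss:// paths, schema, and column names are all hypothetical:

```python
# Hypothetical PySpark transform step as it might run on Azure Databricks:
# read raw sales data from a data lake, standardise it, write a curated copy.
# All paths and column names are illustrative, not the client's platform.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate-sales").getOrCreate()

raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/sales/")

curated = (
    raw.dropDuplicates(["order_id"])                            # de-duplicate on key
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .filter(F.col("amount") > 0)                             # drop invalid rows
)

curated.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/sales/"
)
```

In an Azure Data Factory pipeline, a step like this would typically be invoked as a Databricks notebook or job activity.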
Raynet Recruitment
Data Architect
Raynet Recruitment Grays, Essex
Data Architect Thurrock RM17 6SL Analysis and synthesis of data: You will apply basic techniques for the analysis of data from a variety of internal and external sources and synthesise your findings. Your analysis will support both service improvement and wider strategy development, policy, and service design work across the organisation. You will effectively involve a variety of data professionals and domain experts in this analysis and synthesis and will present clear findings that colleagues can understand and use. Communication: You will communicate effectively with technical and non-technical stakeholders in a variety of roles. You will build strong collaborative relationships with colleagues from front line to senior leadership and host discussions that help define needs, generate new insights, improve data literacy, and promote data culture. You will be an advocate for the team and can manage differing perspectives and potentially difficult dynamics. Data management: You will understand data governance and how it works in relation to other organisational governance structures and will be a proactive participant in and promoter of Thurrock's data governance practices. You will use your experience to manage data, ensuring adherence to standards and maintaining data dictionaries. You will effectively manage risk to privacy in adherence to national legislation and local practices. Data modelling, cleansing and enrichment: You will be able to either produce or maintain data models and understand where to use different types of data models, developing Thurrock's business intelligence architecture in collaboration with our Data Engineers and Data Architects. You will also have some understanding of reverse-engineering data models from live systems. You will have an understanding of different tools and industry-recognised data-modelling patterns and standards, comparing different data models and communicating data structures using documentation such as schema diagrams. Data quality assurance, validation and linkage: You will identify appropriate ways to collect, collate and prepare data as set by the Data Architecture team and Data Engineers. This will involve informing the design of front-end systems and surveys to ensure enhanced user experience and data quality. You will make judgements as to whether data are accurate and fit for purpose and will support services in maintaining good data quality through the development of data quality auditing systems. You will define and implement batch cleansing processes where appropriate with limited guidance. Data visualisation: You will use the most appropriate medium to visualise data to tell compelling stories that are relevant to business goals and can be acted upon. Your work will take advantage of a wide variety of data visualisation tools and methodologies, presenting complex information in a way that is engaging, useful and readily intelligible to a range of audiences such as front line staff, managers, and senior leadership. You will present, communicate, and disseminate data appropriately and with influence in settings ranging from operational meetings to high profile strategic partnerships. IT and mathematics: You will apply your knowledge and experience of IT and mathematical skills, including tools and techniques. You can adopt those most appropriate for the environment and always work in a manner that is sensitive to information security.
You will draw on your experience with a variety of tools such as MS Excel, Qlik, SQL, R, Python, QGIS and Tableau. Logical and creative thinking: You will respond effectively to problems in databases, data processes, data products and services as they occur. You will initiate actions, monitor services, and identify trends to resolve problems.
08/12/2025
Seasonal
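The quality-auditing and batch-cleansing duties above lend themselves to a short illustration. A hedged pandas sketch of a batch data-quality audit - the column names and validity rules are invented for the example, not Thurrock's actual standards:

```python
# Illustrative batch data-quality audit in pandas: flag records that fail
# simple validity rules before they reach downstream models. Column names
# and rules are hypothetical examples.
import pandas as pd

def audit(df: pd.DataFrame) -> pd.DataFrame:
    issues = pd.DataFrame(index=df.index)
    issues["missing_postcode"] = df["postcode"].isna()
    issues["bad_dob"] = pd.to_datetime(df["date_of_birth"], errors="coerce").isna()
    issues["duplicate_ref"] = df.duplicated(subset=["case_ref"], keep=False)
    df = df.copy()
    df["quality_flags"] = issues.sum(axis=1)  # count of failed rules per record
    return df

records = pd.DataFrame({
    "case_ref": ["A1", "A1", "B2"],
    "postcode": ["RM17 6SL", None, "SS1 1AA"],
    "date_of_birth": ["1980-02-30", "1975-06-01", "1990-01-15"],  # first is invalid
})
print(audit(records)[["case_ref", "quality_flags"]])
```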
Trek Recruitment Ltd
IT Engineer
Trek Recruitment Ltd Wrexham, Clwyd
C# Software Developer (with Networking & IT Systems exposure) Location: Wrexham, North Wales Salary: £35,000 + excellent benefits Our client is a leader in developing manufacturing tech, and on their behalf we're looking for a sharp, hands-on C# developer who loves building real-world applications that make a genuine difference on the factory floor and in the office. This isn't a typical helpdesk role. Yes, you'll be the go-to person for the company's IT systems, but the main focus is writing clean, maintainable code - primarily in C# .NET Core - to extend and improve the client's in-house manufacturing and business platforms. Networking and infrastructure are part of the mix (because everything you build has to talk to the real world), but they're secondary to solid programming skills. THE ROLE Designing and building new features and tools in C# .NET Core (this is the bulk of the role) Maintaining and enhancing our existing bespoke applications Writing scripts and automation tools in Python/PHP when it makes sense Occasional Laravel/PHP work on internal web tools Helping keep the infrastructure running smoothly (Windows/Linux servers, Active Directory, VMware, Office 365, backups, Cisco Meraki Wi-Fi, Fortinet firewalls, etc.) Configuring switches, firewalls, laptops, Raspberry Pis and label printers when needed Supporting users (mostly remotely via Teams), but this is light compared to the development work Creating BI dashboards and playing with AI/automation ideas YOU First and foremost: strong C# .NET Core skills - you enjoy writing code more than resetting passwords Comfortable with modern development tools (GitHub, VS Code, Jira, etc.) Happy to touch infrastructure when required - you understand networking basics, Active Directory, servers, VMware, etc. (you don't need to be a CCNA, just not scared of it) Bonus points if you've worked with SQL Server, IIS, REST APIs, or have any manufacturing/warehouse exposure Minimum HNC/HND or degree in Computer Science / Software Development (or equivalent experience) 1-2 years+ commercial programming experience (more is fine too) Why you'll love it here You get to own and shape real products used every day by hundreds of people Proper variety - one day you're adding a new feature in C#, the next you're deploying a Raspberry Pi on the shop floor Forward-thinking company that's genuinely investing in digital transformation and AI Small, friendly IT team - no corporate red tape Great package: enhanced pension, private healthcare, heavily subsidised canteen, 25 days holiday + banks, hybrid flexibility If you're a C# developer who wants to stay close to the metal, see your code make an immediate impact, and doesn't mind rolling up your sleeves on the occasional bit of networking or infrastructure, this could be perfect.
07/12/2025
Full time
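The role above mentions writing automation scripts in Python alongside the C# work. A minimal sketch of that kind of script - a stdlib-only service reachability check; the hostnames and ports are placeholders, not the client's infrastructure:

```python
# Sketch of a small infrastructure check script of the kind the role
# mentions writing in Python. Hostnames and ports are placeholders.
import socket

SERVICES = {
    "erp-db": ("erp-db.internal.example", 1433),          # SQL Server default port
    "label-printer": ("printer-01.internal.example", 9100),  # raw printing port
}

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in SERVICES.items():
    status = "UP" if is_up(host, port) else "DOWN"
    print(f"{name}: {status}")
```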
Huxley Associates
Python Data Engineer - Hedgefund
Huxley Associates
Python Data Engineer - Multi-Strategy Hedge Fund Location: London Hybrid: 2 days per week on-site Type: Full-time About the Role A leading multi-strategy hedge fund is seeking a highly skilled Python Data Engineer to join its technology and data team. This is a hands-on role focused on building and optimising data infrastructure that powers quantitative research, trading strategies, and risk management. Key Responsibilities Develop and maintain scalable Python-based ETL pipelines for ingesting and transforming market data from multiple sources. Design and manage cloud-based data lake solutions (AWS, Databricks) for large volumes of structured and unstructured data. Implement rigorous data quality, validation, and cleansing routines to ensure accuracy of financial time-series data. Optimise workflows for low latency and high throughput, critical for trading and research. Collaborate with portfolio managers, quantitative researchers, and traders to deliver tailored data solutions for modelling and strategy development. Contribute to the design and implementation of the firm's security master database. Analyse datasets to extract actionable insights for trading and risk management. Document system architecture, data flows, and technical processes for transparency and reproducibility. Requirements Strong proficiency in Python (pandas, NumPy, PySpark) and ETL development. Hands-on experience with AWS services (S3, Glue, Lambda) and Databricks. Solid understanding of financial market data, particularly time-series. Knowledge of data quality frameworks and performance optimisation techniques. Degree in Computer Science, Engineering, or related field. Preferred Skills SQL and relational database design experience. Exposure to quantitative finance or trading environments. Familiarity with containerisation and orchestration (Docker, Kubernetes). What We Offer Competitive compensation and performance-based bonus. Hybrid working model: 2 days per week on-site in London. Opportunity to work on mission-critical data systems for a global hedge fund. Collaborative, high-performance culture with direct exposure to front-office teams. To avoid disappointment, apply now! To find out more about Huxley, please visit (url removed) Huxley, a trading division of SThree Partnership LLP, is acting as an Employment Business in relation to this vacancy. Registered office: 8 Bishopsgate, London, EC2N 4BQ, United Kingdom. Partnership Number OC(phone number removed) England and Wales
06/12/2025
Full time
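Since the role above centres on validating financial time-series, here is a hedged pandas sketch of one such check - gap and stale-price detection on daily closes; the sample data, business-day calendar, and rules are illustrative, not the fund's pipeline:

```python
# Hedged sketch of a time-series validation step such a pipeline might run:
# detect calendar gaps and stale prices in daily market data. The sample
# series and business-day assumption are illustrative only.
import pandas as pd

def validate_daily_series(prices: pd.Series) -> dict:
    """prices: close prices indexed by trading date (DatetimeIndex)."""
    expected = pd.bdate_range(prices.index.min(), prices.index.max())
    missing = expected.difference(prices.index)        # business days with no price
    stale = int((prices.diff() == 0).sum())            # consecutive identical closes
    return {"missing_days": len(missing), "stale_points": stale}

idx = pd.to_datetime(["2025-01-02", "2025-01-03", "2025-01-07", "2025-01-08"])
series = pd.Series([101.2, 101.2, 102.5, 103.0], index=idx)
print(validate_daily_series(series))  # {'missing_days': 1, 'stale_points': 1}
```

A real implementation would swap the naive business-day calendar for an exchange holiday calendar before flagging gaps.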
Experis
Lead Automation Test Engineer CGEMJP
Experis City, Sheffield
Role Title: Lead Automation Test Engineer Duration: contract to run until 30/11/2026 Location: Sheffield, Hybrid 3 days per week onsite Rate: up to £482.30 p/d Umbrella, inside IR35 Role purpose / summary This role is predominantly required to fulfil the role of Automation test lead, with occasional manual testing where required Solid experience of at least 5 years in the use of automation tooling and framework creation, especially with Java and SQL Excellent verbal and written communication skills and stakeholder engagement at all levels Able to develop and execute test plans, test cases, test data, test scenarios, and other testing-related plans and documentation based on the requirements and technical specifications Plan, develop, execute, maintain and improve Automated Test Frameworks and Automated Test Scripts for Web and Database applications Proven experience in writing automated test scripts using JavaScript Experience of using automation tools such as WebdriverIO (WDIO) / Cucumber etc., Selenium for UI testing and RestAssured for API testing Experience of test tool selection/recommendations based on assessment of the environment/landscape. Experience in defining a comprehensive performance test strategy that fully defines the approach, environment, scope, risks and resources required. Experience of delivering within both an Agile and Waterfall methodology. Jenkins pipeline creation and management for running automated tests, generating reports and notifying the team about test results to streamline the CI/CD process. Web UI Testing Database comparison test experience Experience testing in cloud environments such as AWS, Azure, GCP, Ali Cloud Extensive experience using JIRA and Zephyr tooling Accurately report and track testing-related defects and issues by writing, or automating, effective and thorough bug reports; attend triage meetings and verify bug fixes Identify process and application issues and provide suggestions to improve Learn new technologies and adapt to them as needed Identify and deploy automation solutions based on changing project needs Strong manual test execution experience Identification and collating of test entry and exit criteria Good experience of executing defined test plans including coordination, tracking and reporting Positive team player working as part of the overall test team, both manual and automated Test case review/QA for coverage and traceability to requirements/design Liaison with business areas/technical leads re SIT/OAT/UAT scenario definitions as required Analysis of design and other documents for testability Previous experience within Identity and Access Management - preferable Any exposure to SailPoint IdentityIQ, Identity warehousing, and working with protocols and formats for data ingestion such as SCIM, REST API, LDAP, OIDC and CSV Experience of testing graph database management systems (GDBMS) ServiceNow, AD, AWS, Azure integrations Testing functions and decision points 'as code', such as Policy as Code Experience of GitOps repos API Testing, API Gateway testing, Batch ETL testing Team Leadership & Management - Proven ability to lead, mentor, and manage other engineers within the team DevOps & CI/CD Integration - Ability to integrate automation tests into GitLab CI/CD pipelines and implement shift-left testing practices Pub/Sub and MQ GCP (Cloud) testing approaches and methodology.
SaaS Testing Process Improvement - Establish performance testing standards, best practices, and governance frameworks across the organisation Skill set ideally including several of: Java, Cypher, Python, JavaScript, PHP, .NET, Go, SQL Server, MySQL, API, QMetry, TestRail, BDD/TDD, Jenkins, Postman, Insomnia All profiles will be reviewed against the required skills and experience. Due to the high number of applications we will only be able to respond to successful applicants in the first instance. We thank you for your interest and the time taken to apply!
06/12/2025
Contractor
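The listing above names RestAssured (Java) and WDIO (JavaScript) for API and UI automation; the underlying request-and-assert pattern is the same in any stack. A hedged Python sketch of that pattern using requests with pytest-style tests - the base URL, endpoint, and response fields are hypothetical:

```python
# The listing names RestAssured (Java) for API testing; this is the same
# request-and-assert pattern sketched in Python with requests + pytest.
# The base URL and expected payload are hypothetical placeholders.
import requests

BASE_URL = "https://api.test.example"  # placeholder test environment

def test_get_user_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/users/42", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    assert body["id"] == 42
    assert "email" in body

def test_unknown_user_returns_404():
    resp = requests.get(f"{BASE_URL}/users/999999", timeout=5)
    assert resp.status_code == 404
```

In a CI/CD setup like the one described, a suite of such tests would typically run as a pipeline stage against a deployed test environment, failing the build on any assertion error.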
