  • Home
  • Find IT Jobs
  • Register CV
  • Career Advice
  • Contact us
  • Employers
    • Register as Employer
    • Pricing Plans
  • Recruiting? Post a job
  • Sign in
  • Sign up
Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

501 jobs found

Current search: data engineer automation python
AlphaSights
Mid-Level Test Engineer (Hybrid) - London Software Engineering London
Location: London
Start date: Immediate

The role: AlphaSights is looking for a self-driven Test Engineer to join the Software Engineering Team. We are a digital business in which continuous uptime, product quality, and user experience are critical to success. The role of the Test Engineer therefore represents a visible and valued opportunity to have an immediate impact. Working alongside Software Engineering and Product Management, you will focus on designing and implementing automated testing solutions that drive development quality at scale, ensuring our applications are reliable, scalable, and meet the highest standards across frontend, backend, and APIs.

You will be given areas of responsibility and be expected to manage your own time. You will need to maintain a positive, problem-solving mindset and be attracted by the challenge of delivering seamless user experiences through robust test automation in a fast-paced environment. You must be proactive, creative, and enjoy interacting with other people. You will always be looking for ways to improve your own work while remaining committed to helping the wider team succeed.

You are excited to make the most of on-the-job and classroom-based learning, and the opportunity to dive into the field of IT through exposure to a wide array of different technologies, regions, and challenges. Working largely with the teams based in London and Portugal, you will also have the opportunity to work with teams based in other offices across EMEA, the US, and Asia.

What you'll do:
  • Design and develop automated test plans and suites across multiple projects to validate software functionality and ensure regression coverage
  • Develop and maintain test automation frameworks and infrastructure, and provide tools for developers to test their own code
  • Lead test automation efforts while performing targeted manual testing when needed, such as exploratory testing, UI validation, and edge-case verification
  • Create, manage, and maintain test data to support automated testing across environments, ensuring consistency and reproducibility of test results
  • Support robust testing through API test automation, test data management, and cross-platform validation across browsers and devices
  • Collaborate with Engineering and Product teams to define test strategies and acceptance criteria
  • Integrate automated tests into CI/CD pipelines for fast and continuous feedback
  • Monitor, triage, and investigate test failures, raising clear and actionable bugs
  • Provide feedback for UI/UX improvements and enhancements
  • Continuously contribute to the overall improvement of testing tools, processes, and team best practices

Who you are:
  • You probably have a degree in a STEM subject, but we're happy to work with people who perfected their craft via a different route
  • Experience working at a similar level in a mature Engineering team, and looking to take your career to the next level. We're looking for people who have incredible potential
  • Technical expertise building and maintaining automated test frameworks using modern tools (ideally Cypress or Playwright) and at least one programming language such as TypeScript, JavaScript, Python, or Java
  • Strong understanding of backend APIs, data validation (SQL/NoSQL), and CI/CD pipelines
  • Familiarity with frontend testing (ideally React-based applications) and API or integration testing in distributed systems
  • Proven track record - you've made a demonstrable impact in your previous roles, standing out from your peers
  • Highly driven and proactive - you relentlessly and independently push through hurdles and drive towards excellent outcomes
  • Meticulous - you hold high standards and have an obsessive attention to detail

Bonus points if you have:
  • Experience setting up or scaling automation frameworks from scratch
  • Experience with native mobile application testing, Kubernetes, and microservices architecture
  • Experience with performance, load, or security testing tools

Don't worry if your experience or background doesn't match all of these areas; we believe a broad spectrum of experience provides great perspective on solving problems in new and innovative ways, and we'd love to hear from you. AlphaSights is an equal opportunity employer.
07/05/2026
Full time
NL-26-016 BMDS Software Engineer
nLogic Tipton, West Midlands
The nLogic team is seeking a BMDS Software Engineer to support the design, implementation, integration, and testing of complex, mission-critical software capabilities for large-scale, high-reliability defense systems. The role involves developing and maintaining software features, algorithms, and system behaviors using modern programming practices and working within a collaborative Agile environment. The ideal candidate thrives in a fast-paced setting with diverse technical challenges and works effectively across multidisciplinary engineering teams.

Key Responsibilities:
  • Design, implement, integrate, and test software features and enhancements in support of mission-critical system capabilities
  • Develop and maintain complex algorithms, including mathematics- and physics-based solutions
  • Contribute to the development of large, long-lived codebases with high reliability and performance requirements
  • Perform software debugging, issue resolution, and code optimization
  • Collaborate with systems engineers, algorithm developers, and test engineers to ensure accurate implementation of system requirements
  • Participate in Agile ceremonies, technical discussions, peer reviews, and design sessions
  • Document software behavior, design decisions, and test results clearly and accurately
  • Support an on-site, closed-area environment with adherence to security standards

Required Qualifications:
  • Bachelor's degree in a STEM discipline from an accredited institution (advanced degrees strongly considered)
  • Professional experience developing software in C++, Java, or Ada
  • Willingness to learn and become proficient in Ada development when required
  • Understanding of software engineering principles, algorithms, and data structures
  • Strong analytical and critical thinking abilities
  • Excellent written and verbal communication skills
  • Ability to work both independently and within collaborative team environments
  • Must be able to work on site in a closed-area environment
  • Active, in-scope DoD Secret clearance at time of application

Preferred Qualifications:
  • Experience with Linux environments, shell scripting, or system operations
  • Experience in MATLAB or Python for algorithm prototyping or analysis
  • Background working on large-scale, complex defense systems
  • Advanced degree or strong foundation in mathematics or physics
  • Experience with battle management, command and control, or fire control software
  • Familiarity with Agile project management tools such as Jira and Confluence
  • Experience with DevSecOps pipelines and tools including Git/GitLab, Jenkins, Ansible, or CI/CD automation

Work Conditions:
  • Work Model: On site
  • Travel: Up to 10%
  • Work Hours: Standard

Candidate must be a U.S. Citizen. This is a full-time position located in Huntsville, AL. A current SECRET clearance is required for consideration.
07/05/2026
Full time
Specialist Solutions Architect - DE/DWH
Menlo Ventures
Req: FEQ127R163
Location: London
Recruiter: Dina Hussain
Skills: Data Engineering/DWH

As a Specialist Solutions Architect (SSA) - Data Engineering, you will guide customers in building big data solutions on Databricks that span a large variety of use cases. You will be in a customer-facing role, working with and supporting Solution Architects, and will require hands-on production experience with Apache Spark and expertise in other data technologies. SSAs help customers through the design and successful implementation of essential workloads while aligning their technical roadmap to expand the use of the Databricks Data Intelligence Platform. As a deep go-to expert reporting to the Senior Specialist Field Engineering Manager, you will continue to strengthen your technical skills through mentorship, learning, and internal training programs, and establish yourself in an area of speciality - whether that be streaming, performance tuning, industry expertise, or more.

The impact you will have:
  • Provide technical leadership to guide strategic customers to successful implementations on big data projects, ranging from architectural design to data engineering to model deployment
  • Architect production-level data pipelines, including end-to-end pipeline load performance testing and optimisation
  • Become a technical expert in an area such as data lake technology, big data streaming, or big data ingestion and workflows
  • Assist Solution Architects with more advanced aspects of the technical sale, including custom proof-of-concept content, estimating workload sizing, and custom architectures
  • Provide tutorials and training to improve community adoption (including hackathons and conference presentations)
  • Contribute to the Databricks Community

What we look for:
  • Extensive experience in a customer-facing technical role
  • Pre-sales or post-sales experience working with external clients across a variety of industry markets
  • Nice to have: Databricks Certification
  • Travelling approx. % of the time

Data Engineer skills:
  • Experience as a Data Engineer: query tuning, performance tuning, troubleshooting, and debugging Spark or other big data solutions
  • Extensive experience building big data pipelines
  • Experience in maintaining and extending production data systems to evolve with complex needs
  • Deep specialty expertise in at least one of the following areas:
    • Experience scaling big data workloads (such as ETL) that are performant and cost-effective
    • Experience migrating Hadoop workloads to the public cloud - AWS, Azure, or GCP
    • Experience with large-scale data ingestion pipelines and data migrations - including CDC and streaming ingestion pipelines
    • Expert with cloud data lake technologies - such as Delta and Delta Live
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience
  • Production programming experience in SQL and Python, Scala, or Java
  • Professional experience with Big Data technologies (e.g. Spark, Hadoop, Kafka) and architectures

Data warehousing and database skills:
  • Experience with the design and implementation of a broad range of analytical and transactional data technologies such as Hadoop, Apache Spark, NoSQL, OLTP, OLAP, and ETL/ELT
  • Hands-on experience working with MPP data warehouse appliances (Oracle Exadata, Teradata, IBM Netezza) or cloud data warehouses (Amazon Redshift, Azure Synapse, Snowflake)
  • Hands-on experience with RDBMS systems (Postgres, MySQL, SQL Server, Oracle, MariaDB)
  • Experience in SQL or any SQL dialect (PL/SQL, Transact-SQL, or others)
  • Experience with BI tools such as Power BI, Tableau, Qlik, or others
  • Knowledge of development tools and best practices for data engineers, including CI/CD, unit and integration testing, plus automation and orchestration
  • Expertise in data warehousing - such as query tuning, performance tuning, troubleshooting, and debugging MPP data warehouses or other big data solutions
  • Maintained, extended, or migrated a production data warehouse system to evolve with complex customer needs
  • Production programming experience in PySpark

About Databricks: Databricks is the data and AI company. More than 10,000 organizations worldwide - including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 - rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit .

Our Commitment to Diversity and Inclusion: At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance: If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
07/05/2026
Full time
Senior Software Engineer - SRE Core Infrastructure London
PhysicsX Ltd
London (Hybrid) | Engineering | Full Time

About us: PhysicsX is a deep-tech company with roots in numerical physics and Formula One, dedicated to accelerating hardware innovation at the speed of software. We are building an AI-driven simulation software stack for engineering and manufacturing across advanced industries. By enabling high-fidelity, multi-physics simulation through AI inference across the entire engineering lifecycle, PhysicsX unlocks new levels of optimization and automation in design, manufacturing, and operations - empowering engineers to push the boundaries of possibility. Our customers include leading innovators in Aerospace & Defense, Materials, Energy, Semiconductors, and Automotive.

The Role: PhysicsX is growing rapidly, and so is the infrastructure that underpins our platform. We are building core infrastructure that is reliable, secure, scalable, and reproducible across multiple cloud providers and on-premises environments. As the platform evolves to serve increasingly complex engineering workloads, the foundational infrastructure layer becomes ever more critical. We are looking for a Senior Software Engineer to join our Platform SRE Core Infrastructure team. This role is responsible for the design, provisioning, and operation of the shared infrastructure that the entire PhysicsX platform depends on. You will work across infrastructure as code, Kubernetes cluster management, secrets management, GPU drivers, networking, and multi-tenancy architecture to ensure the platform is dependable, scalable, and secure. This is a role for an engineer who combines deep infrastructure expertise with a reliability engineering mindset and an appreciation for the developer experience of the teams that consume the platform.

What You Will Do:
  • Own the design and delivery of core infrastructure across multiple cloud providers (GCP, Azure, AWS) and on-premises environments using Terraform and Crossplane
  • Architect and operate Kubernetes clusters supporting both single-tenant and multi-tenant workloads, with a strong emphasis on isolation, performance, and reliability
  • Define and implement infrastructure provisioning patterns using Crossplane compositions and Terraform modules, ensuring reproducibility and auditability across environments
  • Design and operate secrets management solutions, including dynamic secret provisioning, rotation, and fine-grained access control integrated with cluster identity
  • Manage and maintain GPU driver configurations and accelerated compute node pools, ensuring compatibility and performance for AI and simulation workloads
  • Own cluster networking design, including CNI selection, Istio service mesh integration, ingress strategy, and cross-cluster connectivity
  • Implement and maintain vCluster-based multi-tenancy to provide strong workload isolation within shared infrastructure
  • Develop lightweight Kubernetes Operators or controllers where automation of infrastructure lifecycle tasks requires it
  • Establish SLOs and reliability targets for core infrastructure components and lead the response to production incidents
  • Partner with security and platform teams to enforce infrastructure governance, network policies, and compliance controls
  • Contribute to and uphold engineering standards across the platform organisation

What You Bring to the Table:
  • Kubernetes depth - 5 or more years of professional experience operating Kubernetes in production. You have a thorough understanding of cluster architecture, the scheduler, networking, storage, and the API lifecycle. Kubernetes certifications such as CKAD, CKA, or CKS are highly desirable
  • Crossplane expertise - significant hands-on experience designing and operating Crossplane compositions, providers, and managed resources in production environments
  • Terraform proficiency - strong experience authoring, structuring, and operating Terraform at scale, including state management, module design, and CI integration
  • Multi-cloud and on-premises - practical experience operating infrastructure across more than one cloud provider and on-premises environments, with an understanding of the differences in identity, networking, and storage
  • Multi-tenancy architecture - experience designing and implementing both single-tenant and multi-tenant Kubernetes architectures, with strong views on isolation, resource governance, and operational overhead
  • Secrets management - experience with tools such as Vault, External Secrets Operator, or cloud-native secret stores, including dynamic provisioning and rotation
  • Networking - solid knowledge of Kubernetes networking, CNI plugins, Istio service mesh, and ingress patterns. Experience with cross-cluster or hybrid connectivity is valuable
  • vCluster and virtual clusters - experience using vCluster or similar tooling to provide lightweight, isolated Kubernetes environments within shared clusters
  • GPU and accelerated compute - familiarity with GPU driver management, device plugins, and the operational considerations of running accelerated workloads in Kubernetes
  • Kubernetes Operators - at least lightweight experience writing or extending Kubernetes Operators or controllers, ideally in Golang or Python
  • Software engineering capability - you are comfortable writing code to automate and extend infrastructure. Python and Golang are the primary languages used across the platform. Exposure to functional programming languages such as Erlang, Elixir, or OCaml is appreciated
  • Platform engineering mindset - you think about infrastructure as a product consumed by engineering teams and prioritise usability, documentation, and long-term maintainability
  • Distributed systems experience - a solid grounding in distributed systems concepts, including failure modes, consistency, and the operational challenges of running systems at scale

Ideally:
  • Experience with GitOps workflows using tools such as Argo CD or Flux
  • Contributions to open source infrastructure or Kubernetes ecosystem projects

What We Offer:
  • Equity options - share in our success and growth
  • 10% employer pension contribution - invest in your future
  • Free office lunches - great food to fuel your workdays
  • Flexible working - balance your work and life in a way that works for you
  • Hybrid setup - enjoy our Shoreditch office while keeping remote flexibility
  • Enhanced parental leave - support for life's biggest milestones
  • Private healthcare - comprehensive coverage
  • Personal development - access to learning and training to help you grow
  • Work from anywhere - extend your remote setup to enjoy the sun or reconnect with loved ones

We value diversity and are committed to equal employment opportunity regardless of sex, race, religion, ethnicity, nationality, disability, age, sexual orientation or gender identity. We strongly encourage individuals from groups traditionally underrepresented in tech to apply. To help make a change, we sponsor bright women from disadvantaged backgrounds through their university degrees in science and mathematics. We collect diversity and inclusion data solely for the purpose of monitoring the effectiveness of our equal opportunities policies and ensuring compliance with UK employment and equality legislation. This information is confidential, used only in aggregate form, and will not influence the outcome of your application.
07/05/2026
Full time
Senior Software Engineer - SRE Core Infrastructure
London (Hybrid) | Engineering | Full Time

About us
PhysicsX is a deep-tech company with roots in numerical physics and Formula One, dedicated to accelerating hardware innovation at the speed of software. We are building an AI-driven simulation software stack for engineering and manufacturing across advanced industries. By enabling high-fidelity, multi-physics simulation through AI inference across the entire engineering lifecycle, PhysicsX unlocks new levels of optimization and automation in design, manufacturing, and operations - empowering engineers to push the boundaries of possibility. Our customers include leading innovators in Aerospace & Defense, Materials, Energy, Semiconductors, and Automotive.

The Role
PhysicsX is growing rapidly, and so is the infrastructure that underpins our platform. We are building core infrastructure that is reliable, secure, scalable and reproducible across multiple cloud providers and on-premises environments. As the platform evolves to serve increasingly complex engineering workloads, the foundational infrastructure layer becomes ever more critical. We are looking for a Senior Software Engineer to join our Platform SRE Core Infrastructure team. This role is responsible for the design, provisioning and operation of the shared infrastructure that the entire PhysicsX platform depends on. You will work across infrastructure as code, Kubernetes cluster management, secrets management, GPU drivers, networking and multi-tenancy architecture to ensure the platform is dependable, scalable and secure. This is a role for an engineer who combines deep infrastructure expertise with a reliability engineering mindset and an appreciation for the developer experience of the teams that consume the platform.
What You Will Do
Own the design and delivery of core infrastructure across multiple cloud providers (GCP, Azure, AWS) and on-premises environments using Terraform and Crossplane.
Architect and operate Kubernetes clusters supporting both single-tenant and multi-tenant workloads, with a strong emphasis on isolation, performance and reliability.
Define and implement infrastructure provisioning patterns using Crossplane compositions and Terraform modules, ensuring reproducibility and auditability across environments.
Design and operate secrets management solutions, including dynamic secret provisioning, rotation and fine-grained access control integrated with cluster identity.
Manage and maintain GPU driver configurations and accelerated compute node pools, ensuring compatibility and performance for AI and simulation workloads.
Own cluster networking design, including CNI selection, Istio service mesh integration, ingress strategy and cross-cluster connectivity.
Implement and maintain vCluster-based multi-tenancy to provide strong workload isolation within shared infrastructure.
Develop lightweight Kubernetes Operators or controllers where automation of infrastructure lifecycle tasks requires it.
Establish SLOs and reliability targets for core infrastructure components and lead the response to production incidents.
Partner with security and platform teams to enforce infrastructure governance, network policies and compliance controls.
Contribute to and uphold engineering standards across the platform organisation.

What You Bring to the Table
Kubernetes depth - 5 or more years of professional experience operating Kubernetes in production. You have a thorough understanding of cluster architecture, the scheduler, networking, storage and the API lifecycle. Kubernetes certifications such as CKAD, CKA or CKS are highly desirable.
Crossplane expertise - significant hands-on experience designing and operating Crossplane compositions, providers and managed resources in production environments.
Terraform proficiency - strong experience authoring, structuring and operating Terraform at scale, including state management, module design and CI integration.
Multi-cloud and on-premises - practical experience operating infrastructure across more than one cloud provider and on-premises environments, with an understanding of the differences in identity, networking and storage.
Multi-tenancy architecture - experience designing and implementing both single-tenant and multi-tenant Kubernetes architectures, with strong views on isolation, resource governance and operational overhead.
Secrets management - experience with tools such as Vault, External Secrets Operator or cloud-native secret stores, including dynamic provisioning and rotation.
Networking - solid knowledge of Kubernetes networking, CNI plugins, Istio service mesh and ingress patterns. Experience with cross-cluster or hybrid connectivity is valuable.
vCluster and virtual clusters - experience using vCluster or similar tooling to provide lightweight, isolated Kubernetes environments within shared clusters.
GPU and accelerated compute - familiarity with GPU driver management, device plugins and the operational considerations of running accelerated workloads in Kubernetes.
Kubernetes Operators - at least lightweight experience writing or extending Kubernetes Operators or controllers, ideally in Golang or Python.
Software engineering capability - you are comfortable writing code to automate and extend infrastructure. Python and Golang are the primary languages used across the platform. Exposure to functional programming languages such as Erlang, Elixir or OCaml is appreciated.
Platform engineering mindset - you think about infrastructure as a product consumed by engineering teams and prioritise usability, documentation and long-term maintainability.
Distributed systems experience - a solid grounding in distributed systems concepts, including failure modes, consistency and the operational challenges of running systems at scale.

Nice to Have
Experience with GitOps workflows using tools such as Argo CD or Flux.
Contributions to open-source infrastructure or Kubernetes ecosystem projects.

What We Offer
Equity options - share in our success and growth.
10% employer pension contribution - invest in your future.
Free office lunches - great food to fuel your workdays.
Flexible working - balance your work and life in a way that works for you.
Hybrid setup - enjoy our Shoreditch office while keeping remote flexibility.
Enhanced parental leave - support for life's biggest milestones.
Private healthcare - comprehensive coverage.
Personal development - access to learning and training to help you grow.
Work from anywhere - extend your remote setup to enjoy the sun or reconnect with loved ones.

We value diversity and are committed to equal employment opportunity regardless of sex, race, religion, ethnicity, nationality, disability, age, sexual orientation or gender identity. We strongly encourage individuals from groups traditionally underrepresented in tech to apply. To help make a change, we sponsor bright women from disadvantaged backgrounds through their university degrees in science and mathematics. We collect diversity and inclusion data solely for the purpose of monitoring the effectiveness of our equal opportunities policies and ensuring compliance with UK employment and equality legislation. This information is confidential, used only in aggregate form, and will not influence the outcome of your application.
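The Operator and controller work this listing describes comes down to one core pattern: a level-triggered reconcile loop that compares desired state with observed state and converges. A minimal Python sketch, with in-memory dicts standing in for the API server and hypothetical replica counts instead of real resources:

```python
# Toy level-triggered reconcile loop, the core pattern behind Kubernetes
# Operators. The dicts below are stand-ins for the API server's spec and
# status; the workload names and counts are purely illustrative.

desired = {"web": 3, "worker": 2}   # spec: replica counts we want
observed = {"web": 1}               # status: what currently exists

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Return the actions needed to converge observed state onto desired."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(f"scale {name} up to {want}")
        elif have > want:
            actions.append(f"scale {name} down to {want}")
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

print(reconcile(desired, observed))
# → ['scale web up to 3', 'scale worker up to 2']
```

A real controller would watch the API server for changes and issue the scaling calls itself; the sketch only shows the desired-versus-observed comparison that makes the loop idempotent.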
NL-26-017 BMDS Software Test Engineer
nLogic Tipton, West Midlands
The nLogic team is seeking a BMDS Software Test Engineer to support the testing, verification, and validation of complex, mission-critical, software-intensive systems. This role involves understanding system behavior, analyzing requirements, executing tests, automating test procedures, and contributing to the development and verification of advanced algorithms and system capabilities. You will work as part of a collaborative engineering team responsible for high-reliability software used in national defense applications. Candidates should excel in fast-paced environments, adapt quickly, and demonstrate strong analytical and communication skills.

Key Responsibilities:
Develop and execute test plans, procedures, and use cases to verify software functionality and system behaviors.
Test both existing and new software capabilities, including mathematics- and physics-based algorithms.
Perform software integration testing, system-level testing, and regression testing.
Use automation tools and scripting (e.g., Python) to improve test coverage and efficiency.
Analyze test data, perform root cause investigations, and document findings.
Collaborate closely with software developers, systems engineers, and algorithm developers to ensure accurate implementation and verification.
Participate in Agile/Scrum activities, technical discussions, and design reviews.
Support an on-site, closed-area environment while adhering to required security protocols.

Required Qualifications:
Bachelor's degree in a STEM discipline from an accredited institution (Master's degree strongly considered).
Experience in software testing, integration, or verification activities.
Working knowledge of system design concepts, requirements development, test planning, and test execution.
Experience with object-oriented programming languages.
Proficiency with Python or similar scripting languages for automation.
Strong critical thinking, analytical, and troubleshooting abilities.
Excellent written and verbal communication skills.
Ability to work independently and as part of a multidisciplinary team.
Must be able to work on site in a closed-area environment.
Active, in-scope DoD Secret clearance at time of application.

Preferred Qualifications:
Experience with configuration management tools (e.g., Git, GitLab, Bitbucket).
Understanding of containerized environments or orchestration tools (e.g., Docker, Podman, Kubernetes).
Experience with automated testing tools or frameworks (Eggplant experience is a plus).
Ability to analyze and interpret large log files or data sets.
Background working on large, complex defense or aerospace systems.
Experience with Linux environments, scripting, or system-level operations.

Work Conditions:
Work Model: On site
Travel: Up to 10%
Work Hours: Standard
Candidate must be a U.S. Citizen. This is a full-time position located in Huntsville, AL. Current SECRET clearance is required for consideration.
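Since the listing calls for Python scripting to automate verification of mathematics- and physics-based algorithms, here is a sketch of what such an automated regression check can look like. The projectile-range function and its reference values are purely illustrative and not drawn from any real system:

```python
# Sketch of an automated regression check for a physics-based algorithm:
# compare the implementation's output against trusted reference values
# within a relative tolerance. All functions and numbers are illustrative.
import math

def projectile_range(v0: float, angle_deg: float, g: float = 9.81) -> float:
    """Ideal (no-drag) projectile range for launch speed v0 and launch angle."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

def assert_close(actual: float, expected: float, rel_tol: float = 1e-6) -> None:
    assert math.isclose(actual, expected, rel_tol=rel_tol), (
        f"got {actual}, expected {expected}"
    )

# Regression cases: (speed, angle, expected range) from a trusted reference.
cases = [
    (10.0, 45.0, 100.0 / 9.81),  # sin(90 deg) = 1, so range = v0^2 / g
    (10.0, 30.0, 100.0 * math.sin(math.radians(60)) / 9.81),
]
for v0, angle, expected in cases:
    assert_close(projectile_range(v0, angle), expected)
print("all regression cases passed")
```

In practice the same shape scales up via a test runner such as pytest, with the reference values version-controlled alongside the test procedures.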
07/05/2026
Full time
Amazon
Specialist GenAI Solution Architect, AWS Specialist & Partner Industries Organization (ASPI), G ...
Amazon
GenAI and Agentic AI are reshaping how financial institutions operate - from how they serve clients to how they manage risk, compliance, and internal operations. This role is about making that real. As a Specialist GenAI Solution Architect covering Global Financial Services (GFS), you will operate as a trusted technical advisor to CTOs, Chief AI Officers, and engineering leadership at tier-one banks, insurers, and capital markets firms across EMEA and APJ - helping them navigate the architecture, governance, and operational decisions required to move agentic AI from experimentation into production at institutional scale.

GFS partners with a focused set of the world's most important financial institutions. As these organisations move from experimentation to production with foundation models, LLMs, and agentic systems, this role is about going deep with strategic customers - translating AI ambition into viable architectures, defensible technical strategies, and production-ready programmes that meet the demands of highly regulated environments. This is a consultative, disruptive role. You will challenge conventional thinking, introduce new possibilities, and shape how major financial institutions adopt agentic AI.

Key job responsibilities
Agentic AI Architecture for Financial Services: Design multi-agent architectures that solve real financial services problems - claims automation, credit decisioning, regulatory reporting, client advisory - and define the path from prototype to production-grade deployment in regulated environments.
Strategic Technical Advisory: Be the primary AI architecture partner to a focused set of strategic accounts - engaging CTO, CAIO, and engineering leadership to understand their business deeply, shape their AI roadmaps, and challenge assumptions. This is a consultative role. You are there to push thinking, not take orders.
Full Stack AI Architecture: Design end-to-end system architectures spanning model serving (vLLM/SGLang/TGI), frontend interaction patterns, API orchestration, and data integration - within constraints that satisfy financial services security, data residency, and auditability requirements. You won't be writing production code, but you need the depth to make credible, opinionated architecture calls and guide customer engineering teams through implementation.
Governance & Security in Regulated Environments: Design and advise on Human-in-the-Loop protocols, explainability frameworks, audit trails, and model risk management practices aligned with regulatory expectations (MAS, PRA, FCA, EBA, APRA).
Performance Optimisation: Advise on the trade-offs between model quality, latency, throughput, and token efficiency - particularly within the cost and performance constraints of real-time financial workflows.
MLOps & AgentOps Strategy: Guide customers on CI/CD pipeline design, automated testing for non-deterministic outputs, model versioning, and observability (tracing, drift detection) - establishing patterns that satisfy model risk management and internal audit expectations.
Legacy & Modern System Integration: Design integration patterns that connect GenAI components with existing core banking/insurance platforms, ensuring data consistency across vector stores, graph databases, RDBMS, and legacy middleware.
Reusable Technical Assets: Build reference architectures, design patterns, and proof-of-concept frameworks that scale what works from one institution to many across the GFS organisation.
Technical Leadership & Enablement: Run architecture deep dives, technical workshops, and executive briefings - translating complex AI concepts into clear, actionable strategies for both technical and non-technical audiences.

About the team
Diverse Experiences
AWS values diverse experiences.
Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS?
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve.
Basic Qualifications
Experience architecting or leading the delivery of AI/ML systems at scale within or for financial services organisations.
Deep understanding of modern AI/ML stacks - PyTorch or TensorFlow, Hugging Face, LLM serving infrastructure - and strong familiarity with agentic frameworks (Strands, LangGraph, CrewAI) and how they manage complex, stateful workflows.
Strong technical foundation in Python and at least one of Java, Go, or TypeScript, with deep understanding of modern backend architectures - enough to make credible architecture decisions and guide engineering teams, not necessarily write production code daily.
Hands-on experience with AWS AI/ML services (Bedrock, AgentCore, SageMaker), including designing secure, private-network AI environments and RAG patterns (embeddings, vector stores, semantic search).
Demonstrated ability to engage and influence senior technical stakeholders (CTO, CAIO, VP Engineering) in complex, regulated enterprise environments.

Preferred Qualifications
Direct domain experience in banking, insurance, or capital markets - understanding the workflows, risk frameworks, and regulatory landscape (MAS, PRA, FCA, EBA, APRA).
Track record defining LLMOps practices - evaluation frameworks for non-deterministic outputs, reasoning traceability, drift detection.
Prior experience in a Solution Architecture, Sales Engineering, or CTO advisory capacity.

Amazon is an equal opportunities employer. We believe passionately that employing a diverse workforce is central to our success. We make recruiting decisions based on your experience and skills. Protecting your privacy and the security of your data is a longstanding top priority for Amazon. Please consult our Privacy Notice () to know more about how we collect, use and transfer the personal data of our candidates. Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
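The RAG patterns named in this listing's qualifications (embeddings, vector stores, semantic search) reduce, at their core, to similarity ranking over vectors. A minimal sketch of the retrieval step, using hand-made 3-dimensional vectors and hypothetical document ids in place of real model embeddings:

```python
# Minimal sketch of the retrieval step in a RAG pattern: rank stored
# documents by cosine similarity to a query embedding. The tiny vectors
# and document ids below are illustrative stand-ins for real embeddings.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Hypothetical vector store: document id -> embedding.
store = {
    "kyc-policy": [0.9, 0.1, 0.0],
    "fx-pricing": [0.1, 0.9, 0.2],
    "leave-form": [0.0, 0.2, 0.9],
}

def top_k(query: list[float], k: int = 2) -> list[str]:
    """Return the ids of the k documents most similar to the query."""
    ranked = sorted(store, key=lambda d: cosine(query, store[d]), reverse=True)
    return ranked[:k]

print(top_k([0.8, 0.2, 0.1]))  # → ['kyc-policy', 'fx-pricing']
```

A production system would replace the dict with a vector database and the hand-made vectors with model-generated embeddings, but the ranking step is the same.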
07/05/2026
Full time
Senior QA Engineer
Swap
Swap is the infrastructure behind modern agentic commerce: the only AI-native platform connecting backend operations with a forward-thinking storefront experience. Built for brands that want to sell anything - anywhere, Swap centralises global operations, powers intelligent workflows, and unlocks margin-protecting decisions with real-time data and capability. Our products span cross-border, tax, returns, demand planning, and our next-generation agentic storefront, giving merchants full transparency and the ability to act with confidence. At Swap, we're building a culture that values clarity, creativity, and shared ownership as we redefine how global commerce works.

About the role
We are looking for a Senior QA Engineer / SDET to work with product, engineering, and our Agentic AI team building an agentic storefront and new e-commerce workflows. You will lead quality across a complex system, define the QA strategy and operating model from scratch, and build the tooling, processes, and automation needed to ship reliably.

Responsibilities
Own QA strategy and execution for agentic AI and core e-commerce flows (definition of done, quality gates, release readiness).
Define and roll out QA processes from scratch: test planning, risk assessment, defect triage, reporting, and continuous improvement.
Lead cross-team alignment on quality standards and drive adoption through coaching and clear workflows.
Design test architecture across unit/integration/contract/E2E, with a focus on reliability, maintainability, and signal quality.
Build and maintain automation and CI/CD integration; reduce flakiness and improve time-to-signal.
Define AI-specific validation for agent workflows: tool-calling paths, multi-step journeys, failure handling, regressions, and guardrails.
Partner on system design to identify bottlenecks and failure modes; propose changes that improve testability, observability, and resilience.
Requirements 5+ years in QA / quality engineering for modern web products (APIs + UI). Strong coding skills and hands-on automation experience (language flexible; Python/TypeScript common). Proven track record of building QA processes and test strategy from scratch in a fast-moving environment. Strong leadership and ownership: can organize work across teams, set direction, and drive execution without formal authority. Solid CI/CD experience: test automation in pipelines, quality gates, environment strategy, and test data management. Strong engineering mindset and system design skills; able to reason about distributed systems and propose pragmatic improvements. Stock options in a high-growth startup. Competitive PTO with public holidays additional. Private Health. Pension. Wellness benefits. Breakfast Mondays. Diversity & Equal Opportunities We embrace diversity and equality in a serious way. We are committed to building a team with a variety of backgrounds, skills, and views. The more inclusive we are, the better our work will be. Creating a culture of equality isn't just the right thing to do; it's also the smart thing.
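The "quality gates" and "release readiness" checks this role describes can be sketched in a few lines: a gate takes the summary of an automated test run and decides whether a release may proceed. This is an illustrative sketch only; the names and thresholds below are hypothetical, not Swap's actual setup.

```python
from dataclasses import dataclass

@dataclass
class TestRunSummary:
    passed: int
    failed: int
    flaky: int  # tests that passed only after a retry

def release_ready(run: TestRunSummary,
                  min_pass_rate: float = 0.98,
                  max_flaky_rate: float = 0.02) -> bool:
    """Return True only when the run clears both quality-gate thresholds."""
    total = run.passed + run.failed
    if total == 0:
        return False  # no test signal at all: fail closed
    pass_rate = run.passed / total
    flaky_rate = run.flaky / total
    return pass_rate >= min_pass_rate and flaky_rate <= max_flaky_rate

# 990 of 1000 tests passed (12 only on retry): both thresholds met
print(release_ready(TestRunSummary(passed=990, failed=10, flaky=12)))  # True
# 970 of 1000 passed: pass rate 0.97 falls below the 0.98 gate
print(release_ready(TestRunSummary(passed=970, failed=30, flaky=0)))   # False
```

In a real pipeline a check like this would run as a CI job after the test stage, with a non-zero exit code blocking deployment; tracking the flaky count separately is one way to "reduce flakiness and improve time-to-signal" rather than letting retries hide it.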
07/05/2026
Full time
Site Reliability Engineer
N Consulting Limited
Location: London, England, United Kingdom
Role: Site Reliability Engineer
Work Location: Hove, UK
Work Mode: Hybrid
Mandatory primary skills: Datadog / Dynatrace tooling and SLO management (AWS cloud skills are secondary).

Primary Responsibilities:
• Work closely with the Product Engineering team to implement strategies for modernizing IT operations, enhancing observability, and reducing toil.
• Architect and deploy observability platforms to monitor system health, performance, and reliability effectively.
• Propose and drive strategies for AI-driven alerting and proactive anomaly detection to reduce MTTD and MTTR.
• Develop and enforce SRE best practices, including Service Level Objectives (SLOs), Service Level Indicators (SLIs), and error budgets.
• Establish and create an AIOps roadmap for improving operational efficiency.
• Lead efforts to automate repetitive tasks (toil) using scripting, orchestration tools, and AI/ML-based solutions.
• Drive toil-automation initiatives for automated incident response and self-healing towards autonomous operations.
• Collaborate with cross-functional teams to ensure systems are scalable, resilient, and maintainable.
• Drive incident management and root cause analysis processes through automation, ensuring continuous improvement to enable autonomous operations.
• Partner with engineering, architecture, and product teams to enable shift-left engineering practices that ensure reliability.
• Mentor and guide teams on adopting SRE principles and tools.
• Advocate for a culture of reliability, automation, and continuous improvement across the organization.

Key Skills:
• Strong expertise in implementing Site Reliability Engineering (SRE) principles.
• Advanced knowledge of establishing observability using Dynatrace and Datadog (primary skills).
• Proficiency in automation and scripting using Python and Ansible (primary skills).
• Strong experience with cloud platforms AWS and Azure (primary skills).
• Solid understanding of containerization and orchestration tools such as Docker and Kubernetes.
• Proficiency in cloud-native distributed systems and microservices architecture.
• Exposure to AI/ML techniques for predictive analytics and automated problem resolution.
• Familiarity with CI/CD pipelines and enabling automated release and deployment engineering solutions.
• Good to have: experience with chaos engineering tools such as Gremlin or Chaos Monkey, and implementing automation frameworks for resilience tracking.
• Ability to manage and prioritize multiple projects in a fast-paced environment.
• Strong interpersonal and communication skills to work effectively across teams.
• Excellent problem solving, analytical thinking, and adaptability.
• Strategic mindset balancing engineering excellence with business priorities.

Preferred Qualifications:
• 12+ years of experience in IT operations, SRE, or DevOps roles.
• Proven track record of implementing observability and automation solutions in large-scale environments.
• Certifications in cloud platforms, observability tools, and other SRE-related areas.

Salary: £80,000 - £85,000 per year. Job Type: Contract. Date Posted: March 9th, 2026.
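The SLO and error-budget practices this posting asks for boil down to simple arithmetic: an availability SLO implies a fixed allowance of downtime per window, and spending against that allowance is what alerting and release decisions key off. A minimal sketch (the 99.9% target and 30-day window below are illustrative assumptions, not values from the posting):

```python
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes over the window for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means the budget is blown)."""
    budget = error_budget_minutes(slo_target, window_days)
    return 1.0 - downtime_minutes / budget

# A 99.9% availability SLO over 30 days allows roughly 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))  # 43.2
# After 21.6 minutes of downtime, half the budget remains.
print(round(budget_remaining(0.999, 21.6), 2))  # 0.5
```

Tools like Datadog and Dynatrace expose this same calculation through their SLO features; the point of doing it explicitly is that "error budget remaining" becomes a shared number product and engineering can use to gate risky releases.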
07/05/2026
Full time
Data Scientist - Internship
Intellium AI Ltd Bristol, Gloucestershire
Are you interested in starting your career with a deep tech company that works with the Aerospace and Defence sector? Are you interested in developing Generative AI technology for knowledge automation? Do you enjoy working on cutting-edge, scalable technology in a team environment? If the above questions excite you, then please continue reading!

At Intellium AI, we hire the best minds in technology to innovate and build solutions that help our customers adopt AI successfully within their businesses. We have built an Enterprise AI platform that gives our customers the power of AI irrespective of their skill background.

Key Job Responsibilities
• Develop supervised and unsupervised machine learning models, such as XGBoost and KNN.
• Develop Generative AI applications using open- and closed-source LLMs.
• Develop Explainable AI modules to present machine learning results to end users.
• Implement optimisation algorithms using the trained machine learning (surrogate) models.
• Develop uncertainty quantification modules to highlight the uncertainty in model predictions.
• Implement cutting-edge AI algorithms from published scientific documents.
• Write technical articles/blogs on industrial use cases.

Qualifications
• Master's in Data Science with a background in Engineering
• Mastery of the Python programming language
• Strong written and verbal communication skills
• Basic working experience/knowledge in a Linux environment
• Basic knowledge of containerised applications, for example Docker
• Basic knowledge of GPU-based highly parallel software development
07/05/2026
Full time
AWS DevOps Engineer
UBDS Group
UBDS Group is looking for an experienced, Security Cleared (SC) AWS DevOps Engineer to design, build, and maintain resilient cloud infrastructure and automation frameworks in support of client engagements. This role involves ownership of CI/CD pipelines, infrastructure-as-code (IaC), and deployment automation within secure environments, particularly across public sector clients. The ideal candidate will bring deep technical expertise in AWS, a strong understanding of DevSecOps practices, and a commitment to delivering reliable, repeatable solutions.

Key Responsibilities
• Build and maintain scalable, secure, and automated CI/CD pipelines using GitLab or GitHub
• Manage infrastructure-as-code solutions using Terraform to support repeatable and compliant environments
• Implement automation across build, test, and deployment workflows to ensure consistent and reliable delivery
• Apply OWASP security principles and DevSecOps practices throughout the delivery lifecycle
• Deploy and maintain containerised workloads using Kubernetes or AWS-native services such as ECS or EKS
• Monitor infrastructure performance and availability, implementing proactive alerting and logging solutions
• Collaborate with cross-functional teams to embed DevOps capabilities and drive continuous improvement
• Support troubleshooting of infrastructure and deployment-related issues in production and non-production environments

Qualifications
• Active Security Check (SC) clearance is mandatory
• Strong hands-on experience with Amazon Web Services (AWS), including core compute, networking, and security services
• Proficiency in GitLab or GitHub for version control and pipeline management
• Demonstrable experience with Terraform and infrastructure-as-code delivery
• Deep understanding of Kubernetes architecture and operations in cloud environments
• Working knowledge of OWASP security principles and how to apply them in CI/CD and infrastructure design
• Competence in scripting languages such as Python, Bash, or PowerShell
• Experience operating within Agile delivery environments and supporting multidisciplinary technical teams

Desirable
• AWS certifications (e.g., AWS Certified DevOps Engineer - Professional or Solutions Architect)
• Experience working with public sector clients or within regulated industries

Why people choose to grow their careers at UBDS Group
Professionals choose UBDS Group for its reputation as a dynamic, forward-thinking organisation deeply committed to both innovation and employee development. At UBDS Group, employees work on cutting-edge projects across a diverse range of industries, exposing them to new challenges and learning opportunities that are pivotal for professional growth. The Group's culture emphasises continuous improvement, offering ample training programmes, mentorship, and the chance to gain certifications that enhance skills and marketability. UBDS Group fosters a collaborative environment where creativity and innovation are encouraged, allowing employees to contribute ideas and solutions that have a tangible impact on the company and its clients. This combination of professional development, a culture of innovation, and the opportunity to make meaningful contributions makes UBDS Group an attractive place for those looking to advance their careers and be at the forefront of technological and operational excellence.

Employee Benefits
• Training: all team members are offered a range of personal development options, whether technical, business acumen, or methodologies. We want you to grow with us and to help us achieve more.
• Private medical cover for you and your spouse/partner, offered via Vitality
• Discretionary bonus based on a blend of personal and company performance
• Holiday: 25 days, plus 1 day for your birthday and 1 day for your work anniversary, in addition to UK bank holidays
• Electric vehicle leasing with salary sacrifice
• Contributed pension scheme
• Death in service cover

About UBDS Group
At UBDS Group our mission is to support entrepreneurs who are setting new standards with technology solutions across cloud services, cybersecurity, data, and AI, ensuring that every investment advances our commitment to innovation, making a difference, and creating impactful solutions for organisations and society.

Equal Opportunities
We are an equal opportunities employer and do not discriminate on the grounds of gender, sexual orientation, marital or civil partner status, pregnancy or maternity, gender reassignment, race, colour, nationality, ethnic or national origin, religion or belief, disability, or age.
07/05/2026
Full time
Senior Full Stack Engineer (Node & React)
Space Executive
The Company
A fast-scaling RegTech startup is looking for a Node & React Full Stack Engineer to join its team. You will be accountable for the quality, speed, and reliability of the company's microservices architecture, ensuring seamless user experiences and robust data security. You will transform innovative ideas into scalable code, automate everything from CI/CD to testing, and solve real customer problems at scale. Backed by a major global bank and already working with several high-profile financial institutions, the company combines start-up agility with enterprise-grade ambition and infrastructure. Currently operating in stealth mode, the company is preparing for public launch with a strong client pipeline and a high-calibre technical team.

What You Bring
• Degree in Computer Science, Software Engineering, or equivalent experience.
• Extensive hands-on experience designing, building, and deploying distributed systems using microservices and event sourcing with Node.js and React.
• Solid experience of service-to-service communication, API design, and contract testing.
• Comfort in agile environments and in resolving ambiguity.
• Strong focus on customer needs, with a passion for building platforms that empower clients.
• Commitment to cybersecurity best practices and data privacy.
• Automation-first mindset: from infrastructure to testing and deployment.
• Growth mindset: you embrace change, continuous learning, and innovation.

What's In It For You?
• End-to-end ownership: engage directly with customers to close feedback loops and shape the product.
• Modern DevOps culture: deploy microservices multiple times per day with confidence.
• Work with AI and expand your skills into Python and other emerging technologies.
• Be part of a collaborative, forward-thinking team building the next generation of microservices platforms.
• Continuous learning: we believe ongoing training is key to success.
07/05/2026
Full time
Senior Software Engineer
Bauer Media Group
About us
Bauer Media is a leading media business reaching millions of people across Europe through audio, digital, and out-of-home advertising. We're behind well-known brands including Kiss, Absolute, Magic, Grazia, and Empire. Within our Outdoor division, we connect audiences through thousands of digital screens and poster sites in high-impact locations.

Bauer Media Outdoor Tech vision
Create tech that makes a difference. Empower teams. Delight customers. Shape the media world of tomorrow.

What you will be working on
You will be building world-class advertising products and ad tech that allow our employees and customers to discover advertising products, plan campaigns, target audiences using powerful proprietary data, manage campaigns and content, bid for inventory in real time, and view performance and impact. A core part of the role is delivering an outstanding experience while ensuring platforms operate reliably, securely, and cost-effectively at significant scale. You will play a key role in embedding AI into our products in a way that is practical, responsible, and valuable to the business.

Role focus
As a Senior Engineer, you are a technical leader who helps shape direction and engineering excellence across products. You drive system design, set standards, and mentor engineers while remaining hands-on with complex problem solving. You will guide the development of large-scale systems that support programmatic buying, self-service experiences, data products, and AI-enabled capabilities.

What you will do
• Lead the design, implementation, and evolution of key systems
• Hands-on design and build of complex features and platform capabilities
• Set and uphold technical standards and engineering best practices
• Guide engineers through design reviews, mentoring, and hands-on delivery
• Balance short-term delivery with long-term technical health
• Collaborate with Product, UX, and Data leads to shape technical roadmaps
• Act as a technical point of contact for cross-functional stakeholders
• Develop a deep understanding of the Outdoor advertising industry, business strategy, and customer needs
• Focus on non-functional requirements including reliability, scalability, security, and cost-effectiveness

How you will behave and act
• Coach and mentor engineers across teams
• Provide constructive feedback that raises overall engineering capability
• Lead through ambiguity and provide clarity and direction
• Resolve technical and delivery challenges using sound judgement
• Communicate complex technical concepts simply and effectively

What you will bring
• At least 8 years of software engineering experience
• Experience working in ad tech or the advertising industry
• Strong full-stack capability
• Proficiency in modern engineering stacks, cloud platforms, and agile ways of working
• Experience with APIs, data flows, CI/CD pipelines, and monitoring
• Deep expertise in system design, cloud architecture, and distributed systems
• Experience delivering systems end to end, from design through to production
• Accountability for products or large-scale components with significant user volumes, transactions, or revenue
• Strong communication skills with the ability to translate business context into technical solutions
• A genuine passion for mentoring and raising engineering standards

How success is measured
• Delivery of robust, scalable systems that generate measurable business value
• Clear technical direction and decision making across squads
• Improved delivery velocity and engineering quality
• Strong mentoring impact and growth of engineers across teams
• Consistent delivery against objectives and key results

The technology environment
Our Outdoor engineering teams work in a modern, cloud-native environment. You do not need experience with everything listed, but you should be comfortable learning new tools and working across a varied stack.
• Frontend and web: React, Next.js, TypeScript
• Backend and services: Node.js, Python, serverless architectures
• Datastores: SQL and NoSQL technologies including Postgres and DynamoDB
• Cloud and infrastructure: AWS including Lambda, CloudWatch, API Gateway, and S3; Infrastructure as Code using Terraform
• Ways of working: agile delivery, embedded non-functional requirements for scale, CI/CD automation, test-driven development, observability, and modern DevOps practices
07/05/2026
Full time
About us Bauer Media is a leading media business reaching millions of people across Europe through audio, digital, and out of home advertising. We're behind well known brands including Kiss, Absolute, Magic, Grazia, and Empire. Within our Outdoor division, we connect audiences through thousands of digital screens and poster sites in high impact locations. Bauer Media Outdoor Tech vision Create tech that makes a difference. Empower teams. Delight customers. Shape the media world of tomorrow. What you will be working on You will be building world class advertising products and ad tech that allow our employees and customers to discover advertising products, plan campaigns, target audiences using powerful proprietary data, manage campaigns and content, bid for inventory in real time, and view performance and impact. A core part of the role is delivering an outstanding experience while ensuring platforms operate reliably, securely, and cost effectively at significant scale. You will play a key role in embedding AI into our products in a way that is practical, responsible, and valuable to the business. Role focus As a Senior Engineer, you are a technical leader who helps shapes the direction and engineering excellence across products. You drive system design, set standards, and mentor engineers while remaining hands on with complex problem solving. You will guide the development of large scale systems that support programmatic buying, self service experiences, data products, and AI enabled capabilities. 
What you will do Lead the design, implementation, and evolution of key systems Hands on design and build of complex features and platform capabilities Set and uphold technical standards and engineering best practices Guide engineers through design reviews, mentoring, and hands on delivery Balance short term delivery with long term technical health Collaborate with Product, UX, and Data leads to shape technical roadmaps Act as a technical point of contact for cross functional stakeholders Develop deep understanding of the Outdoor advertising industry, business strategy, and customer needs Focus on non functional requirements including reliability, scalability, security, and cost effectiveness How you will behave and act Coach and mentor engineers across teams Provide constructive feedback that raises overall engineering capability Lead through ambiguity and provide clarity and direction Resolve technical and delivery challenges using sound judgement Communicate complex technical concepts simply and effectively What you will bring At least 8 years of software engineering experience Experience working in ad tech or the advertising industry Strong full stack capability Proficiency in modern engineering stacks, cloud platforms, and agile ways of working Experience with APIs, data flows, CI/CD pipelines, and monitoring Deep expertise in system design, cloud architecture, and distributed systems Experience delivering systems end to end from design through to production Accountability for products or large scale components with significant user volumes, transactions, or revenue Strong communication skills with the ability to translate business context into technical solutions A genuine passion for mentoring and raising engineering standards How success is measured Delivery of robust, scalable systems that generate measurable business value Clear technical direction and decision making across squads Improved delivery velocity and engineering quality Strong mentoring impact
and growth of engineers across teams Consistent delivery against objectives and key results The technology environment Our Outdoor engineering teams work in a modern, cloud native environment. You do not need experience with everything listed, but you should be comfortable learning new tools and working across a varied stack. Frontend and web React, Next.js, TypeScript Backend and services Node.js, Python, serverless architectures Datastores SQL and NoSQL technologies including Postgres and DynamoDB Cloud and infrastructure AWS including Lambda, CloudWatch, API Gateway, and S3 Infrastructure as Code using Terraform Ways of working Agile delivery, embedded non functional requirements for scale, CI/CD automation, test driven development, observability, and modern DevOps practices
iBSC
GCP DevOps Engineer (GKE, mTLS & Networking) - London OR Manchester OR Bristol (Hybrid) - IR35
iBSC
Title: GCP DevOps Engineer (GKE, mTLS & Networking) Location: Manchester OR London OR Bristol (Hybrid - 2x-3x days a week onsite is needed) Duration: 6 months + Extension (Long term project) THIS PROJECT IS INSIDE IR35 DevOps Engineer - GCP/GKE We are looking for an experienced DevOps Engineer with strong hands-on experience across GCP, GKE, Kubernetes, networking, TLS/mTLS, Terraform, CI/CD, Linux, monitoring and live system troubleshooting. The role will involve working closely with development teams to support application delivery, troubleshoot complex platform and application issues, manage secure configuration, and build reliable CI/CD pipelines. Key Responsibilities Troubleshoot live systems across GCP, GKE, Kubernetes, networking, ingress/egress traffic, and application configuration Support secure application communication using TLS, mTLS, PKI, certificates, and secrets Work with GKE and kubectl to investigate and resolve issues across running applications Safely apply Kubernetes configuration changes, including ConfigMaps, Secrets, deployment manifests, init containers, HPA, and PDB Develop and use Terraform modules to apply infrastructure changes in GCP Build and support CI/CD pipelines using Jenkins and/or Azure DevOps Use GitHub and command-line Git to collaborate with developers, make/revert code changes, and support webhooks Configure and troubleshoot Istio, including CRDs, traffic flow, and TLS/mTLS Monitor and investigate issues using Dynatrace and GCP logs Write and maintain automation scripts using Bash, Python, and Ansible Work closely with development teams across the SDLC to support application builds, deployments, and troubleshooting Required Skills & Experience Strong hands-on experience with GCP, including core components, Firewall concepts, database connectivity, log tracing, and ingress/egress troubleshooting Strong experience with GKE, Kubernetes, and kubectl Experience troubleshooting complex Kubernetes application configuration issues
Strong understanding of networking and traffic flow Strong knowledge of TLS, PKI, certificate management, TLS/mTLS, and secure application communication Strong Bash scripting experience Python scripting experience Strong Linux command-line experience Experience with Terraform, including developing and using modules in GCP Strong command-line Git and GitHub experience Hands-on experience building CI/CD pipelines from scratch using Jenkins and/or Azure DevOps Good understanding or hands-on experience with Harness Experience with Istio, including CRDs, traffic routing, and TLS/mTLS configuration Monitoring, alerting, and logging experience, ideally with Dynatrace and GCP logs Experience writing or understanding Ansible playbooks and scripts Strong troubleshooting, analytical, and problem-solving skills Experience working closely with development teams across the SDLC Desirable Skills Understanding of Internal Developer Platform/IDP concepts Java application experience, ideally Spring Boot or Quarkus Familiarity with Gradle Familiarity with Node.js and ReactJS applications Experience integrating testing and security tools into CI/CD pipelines Exposure to tools such as Cucumber BDD, ZAP, Gatling, LoadRunner, RestAssured, Burp Suite, SonarQube, Aqua Sec, Nexus IQ, Axe, and Playwright
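One responsibility listed above is safely applying Kubernetes configuration changes (ConfigMaps, Secrets, deployment manifests, HPA, PDB). Teams often automate a pre-apply guardrail for this; the sketch below is a hedged illustration in Python (one of the scripting languages the role names). It inspects a Deployment manifest, represented as a plain dict as `yaml.safe_load` would produce, for missing resource limits and readiness probes. The field paths follow the apps/v1 Deployment schema, but the specific rules and the sample manifest are illustrative assumptions, not part of the role.

```python
# Minimal pre-apply sanity check for a Kubernetes Deployment manifest.
# Operates on a plain dict (as yaml.safe_load would produce). Field paths
# follow the apps/v1 Deployment schema; the checks themselves are an
# illustrative sample policy, not an exhaustive one.

def check_deployment(manifest: dict) -> list[str]:
    """Return a list of problems; an empty list means the manifest passes."""
    problems = []
    spec = manifest.get("spec", {})
    pod_spec = spec.get("template", {}).get("spec", {})
    for container in pod_spec.get("containers", []):
        name = container.get("name", "<unnamed>")
        # Containers without limits can starve neighbours on a shared node.
        if "limits" not in container.get("resources", {}):
            problems.append(f"container {name}: no resource limits set")
        # Without a readinessProbe, traffic is routed before the app is ready.
        if container.get("readinessProbe") is None:
            problems.append(f"container {name}: no readinessProbe defined")
    # A PodDisruptionBudget cannot preserve availability with one replica.
    if spec.get("replicas", 1) < 2:
        problems.append("fewer than 2 replicas; a PDB cannot protect availability")
    return problems


deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "spec": {
        "replicas": 1,
        "template": {"spec": {"containers": [{"name": "api", "image": "api:1.0"}]}},
    },
}
issues = check_deployment(deployment)
```

In practice a check like this would sit as a gate in the CI/CD pipeline (Jenkins or Azure DevOps, per the posting) before any kubectl apply.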
07/05/2026
Contractor
AI Engineer/Data Scientist
Space Executive
The Company A fast-scaling RegTech startup is looking for a Senior AI Engineer to help shape and scale its next-generation regulatory change management platform, powered by GenAI and automation. Backed by a major global bank and already working with several high-profile financial institutions, the company combines start-up agility with enterprise-grade ambition and infrastructure. Currently operating in stealth mode, the company is preparing for public launch with a strong client pipeline and a high-calibre technical team. The Role You are accountable for the quality of our AI output. You are responsible for the design, experimentation, and deployment of cutting-edge AI solutions that directly impact our customers. What Success Looks Like AI outputs are consistently accurate, actionable, and valuable to customers. Rapid iteration leads to product-market fit and scalable solutions. The system adapts and improves through direct user and customer feedback. What You Bring Your curiosity lets you ask 'what if' and surprise us again and again. Proven track record delivering impactful ML/AI solutions in production. Deep expertise in Python and modern AI/ML frameworks (e.g., PyTorch, TensorFlow, scikit-learn, NumPy, Pandas). Hands-on experience with GenAI, agentic AI, and automated testing for AI systems. Curiosity and creativity to challenge assumptions and explore new approaches. Strong communication skills and a passion for clear, concise documentation. Adaptability and pragmatism in fast-paced, ambiguous environments.
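The posting asks for hands-on experience with automated testing for AI systems. As one hedged sketch of what that can mean in practice, here is a minimal Python regression harness that scores a model against a fixed labelled evaluation set and fails when accuracy drops below a threshold. The rule-based `classify` function and the sample texts are illustrative stand-ins for a real model and dataset, not anything from the posting.

```python
# Minimal regression-test harness for an AI component: score the model on a
# fixed labelled evaluation set and fail the build if accuracy drops below a
# threshold. The keyword-based `classify` stub stands in for a real model.

EVAL_SET = [
    ("New capital-requirements rule published", "regulatory_change"),
    ("Quarterly earnings call transcript", "other"),
    ("Amendment to reporting obligations under MiFID II", "regulatory_change"),
    ("Office relocation announcement", "other"),
]

def classify(text: str) -> str:
    """Toy stand-in for a real classifier."""
    keywords = ("rule", "regulation", "obligation", "amendment")
    return "regulatory_change" if any(k in text.lower() for k in keywords) else "other"

def accuracy(model, eval_set) -> float:
    """Fraction of evaluation examples the model labels correctly."""
    correct = sum(1 for text, label in eval_set if model(text) == label)
    return correct / len(eval_set)

score = accuracy(classify, EVAL_SET)
assert score >= 0.75, f"accuracy regressed: {score:.2f}"
```

The same shape extends to GenAI outputs by swapping exact-match labels for structured checks (schema validity, citation presence, rubric scores), run on every model or prompt change.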
07/05/2026
Full time
Hays Specialist Recruitment
Demand/Capacity Manager
Hays Specialist Recruitment
Your new company This role is working for Hays, a FTSE 250 recruitment leader with a global footprint, combining decades of expertise with a bold technology strategy focused on modernisation, digitalisation and innovation to power progress through people and market-leading tech. With deep specialism across STEM and digital domains, Hays leverages data-driven insight and a worldwide tech talent network to help organisations secure the skills they need today and for the future. Backed by significant investment in its technology transformation and strategic partnerships, Hays is shaping the future of tech recruitment and supporting businesses as they build tomorrow's workforce. Your new role The Demand & Capacity Manager ensures that Technology Operations has the resources, capacity, and performance headroom needed to deliver stable, predictable, and scalable services globally. The role is responsible for forecasting demand, analysing consumption trends, modelling capacity needs, and identifying risks related to saturation, seasonal patterns, and strategic growth. This includes coordinating with Finance, PMO, Service Performance Management, Engineering teams, and vendors to ensure that demand is understood, capacity is planned, and costs are optimised across infrastructure, platforms, cloud, and global operations. Core Responsibilities: Own the global demand and capacity management framework across infrastructure, cloud, platforms, and operational delivery services. Develop and maintain capacity models incorporating historic trends, business forecasts, and technology growth patterns. Forecast demand for infrastructure resources, cloud consumption, platform usage, licensing, storage, workloads, and workforce/operational capacity. Identify saturation risks, constraints, seasonal spikes, and capacity-related service vulnerabilities. Provide capacity insights to PMO reprioritisation, investment planning, and readiness assessments. 
Work with Service Performance Manager to correlate capacity with stability, recurrence patterns, and performance bottlenecks. Partner with Vendor/Contract Manager to assess vendor capacity commitments, delivery models, and scalability. Collaborate with EA to ensure capacity plans align with technology roadmaps and transformation initiatives. Ensure appropriate capacity for major business events, releases, migrations, and peak periods. Maintain regular reporting covering consumption, forecasts, risks, and recommended actions. Accountable for the accuracy and quality of global demand and capacity forecasts. Ensure capacity risks are identified early, documented, communicated, and mitigated with clear action plans. Maintain a single source of truth for demand, consumption, and capacity insights. Provide leadership with proactive recommendations for investment, optimisation, and scaling actions. Drive alignment between capacity planning, financial forecasting, and platform/infrastructure strategies. Prepare and run capacity governance routines including monthly capacity reviews. Ensure readiness and capacity availability for major business or technology events. Support cloud optimisation and FinOps activities with accurate consumption modelling. Collaborate with operational teams to ensure capacity actions support service stability and avoid degradations. Global Delivery & Collaboration: Work with Regional Service Managers to capture local demand patterns, constraints, and capacity needs. Collaborate with EA to ensure strategic alignment with long-term architectural evolution. Partner with Infrastructure and Platform teams to understand scaling limits, performance boundaries, and capacity signals. Engage with PMO to validate capacity readiness for projects, migrations, and releases. Coordinate globally with MSPs (incl. Cognizant) to validate vendor capacity and delivery throughput. 
Work with Finance and Cost Management teams to validate budget impact, cost-to-serve models, and cloud consumption forecasts. Support Security and Compliance capacity requirements for logging, monitoring, DR, and backup workloads. Key Deliverables: Global Demand & Capacity Forecast (rolling 12-36 months). Capacity Models & Dashboards (infrastructure, cloud, platform, operational workload). Monthly Consumption & Capacity Report including risks, hotspots, and future projections. Quarterly Capacity Review Pack including investment proposals and optimisation insights. Capacity inputs for PMO readiness assessments, budgeting, and prioritisation. Documentation of mitigation actions for capacity-related risks. Cloud consumption models and cost-optimisation recommendations. KPIs & Success Measures: Accuracy of demand and capacity forecasts. Reduction of unplanned capacity-related incidents or outages. Timeliness of capacity reporting and insights. Optimisation impact (cloud savings, resource efficiency gains). Stakeholder satisfaction across Technology, Finance, Regions, and Vendors. Alignment of capacity with business demand and technology roadmaps. What you'll need to succeed Hands-on experience with automation platforms (ServiceNow Flow Designer, Power Automate, Rundeck, Ansible, Terraform, or similar). Scripting skills (PowerShell, Python, Bash, or equivalent). Understanding of monitoring and alerting systems (e.g., Dynatrace, Datadog, Splunk, Azure Monitor). Knowledge of ITSM processes (Incident, Problem, Change, Request) and workflow automation. Experience integrating automation with CI/CD, APIs, and cloud-native services. Strong understanding of identity models, RBAC, and secure automation practices. Ability to analyse issues and translate them into automation solutions. Experience working with MSPs and global delivery models What you'll get in return Competitive base salary + bonus + benefits aligned to the seniority of the role.
What you need to do now If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C's, Privacy Policy and Disclaimers which can be found on our website.
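The capacity-modelling responsibilities above, forecasting demand from historic trends and flagging saturation risks early, can be illustrated with a minimal sketch. The Python below fits a least-squares linear trend to a monthly consumption series and reports how many months remain before projected usage crosses a capacity ceiling. The figures are invented for illustration, and a real capacity model would also incorporate seasonality and business forecasts, as the role description notes.

```python
# Minimal capacity-forecast sketch: fit a linear trend to monthly consumption
# history (ordinary least squares) and project forward to find when usage
# would cross a capacity ceiling. All figures are illustrative.

def linear_fit(ys):
    """Least-squares slope and intercept for y over x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    return slope, mean_y - slope * mean_x

def months_until_saturation(history, ceiling):
    """Months until the projected trend exceeds `ceiling`, or None if flat/falling."""
    slope, intercept = linear_fit(history)
    if slope <= 0:
        return None  # no saturation risk on current trend
    month = len(history)
    while intercept + slope * month <= ceiling:
        month += 1
    return month - len(history)

# Monthly storage consumption in TB, against a 100 TB ceiling.
history = [40, 44, 47, 52, 56, 60]
lead_time = months_until_saturation(history, 100)  # -> 9 months of headroom
```

A forecast like this would feed the rolling 12-36 month deliverable above: the lead time tells planners whether procurement or optimisation must start now or can wait a quarter.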
07/05/2026
Full time
Akkodis
Lead Data Platform Engineer | Remote | £85k + 4 day week
Akkodis
Lead Data Platform Specialist - Up to £85k + c.15% Bonus Fully Remote - Condensed & flexible hours available (4 day working week available - 9 out of 10, etc.) My client, a nation-wide organisation with a reputation for excellence and a supportive, inclusive culture, is seeking a Lead Data Platform Engineer to join their Data Engineering and Machine Learning team. This is a high-impact, senior role, ideal for someone with deep experience in modern cloud data platforms, looking to shape and deliver a scalable, secure, and innovative data platform. *You need to have strong experience working with Databricks* You'll play a pivotal role in designing, building, and optimising their Azure-based Data Lakehouse, with a focus on Databricks, PySpark, Spark SQL, and Azure Data Factory. This isn't just about coding - you'll also provide architectural guidance, mentor engineers, and ensure solutions are scalable, secure, and aligned with business needs. Hands-on experience with CI/CD, automation, and infrastructure-as-code (Terraform, ARM templates) is essential. Experience in machine learning platforms or ML engineering is a bonus. 
Key Responsibilities: Build and maintain the Data Lakehouse platform in a secure Azure environment Develop automation for cluster management, integration runtimes, and networking Lead architectural design and ensure platform scalability, reliability, and governance Write efficient, maintainable code in PySpark, Python, and SQL Implement CI/CD pipelines and cloud infrastructure via Terraform/ARM Collaborate with data engineers, architects, and business stakeholders Mentor and coach engineers, fostering a culture of learning and excellence Essential Skills: Deep experience in Databricks, Azure Data Factory, and Lakehouse architecture Strong solution architecture and data platform engineering skills DevOps and automation expertise, including CI/CD, monitoring, and code quality Infrastructure-as-code (Terraform or ARM templates) for cloud resource provisioning Excellent communication and mentoring skills It's not often a role comes along in a team like this, where you get the chance to flex or condense hours, receive a strong salary with a bonus, potential for growth, independence and autonomy, and a clear pathway of progression. You'll get private medical options, 25 days holiday plus bank holidays, and the chance to buy/sell holiday. This is a rare opportunity to join a forward-thinking team at a leading organisation, working fully remotely with flexibility and excellent benefits. You'll be shaping the future of their data platform while collaborating with a talented and diverse team. To apply, please submit your CV as we are looking to move quickly on this role. Modis International Ltd acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers in the UK. Modis Europe Ltd provides a variety of international solutions that connect clients to the best talent in the world.
For all positions based in Switzerland, Modis Europe Ltd works with its licensed Swiss partner Accurity GmbH to ensure that candidate applications are handled in accordance with Swiss law. Both Modis International Ltd and Modis Europe Ltd are Equal Opportunities Employers. By applying for this role your details will be submitted to Modis International Ltd and/or Modis Europe Ltd. Our Candidate Privacy Information Statement which explains how we will use your information is available on the Modis website.
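The Lakehouse responsibilities above centre on Databricks and PySpark, where a typical bronze-to-silver step is an upsert (MERGE): incoming change records update matching keys and insert new ones. As a hedged, dependency-free illustration of those semantics, the Python below models the table as a dict keyed on an id column; in a real pipeline this would be a Delta Lake MERGE run via PySpark, and the schema here is invented for the example.

```python
# Pure-Python sketch of upsert (MERGE) semantics, as used in a typical
# Lakehouse bronze-to-silver step: change records update matching keys and
# insert new ones. A real pipeline would express this as a Delta Lake MERGE
# in PySpark; this dict-based model only illustrates the semantics.

def merge_upsert(table: dict, updates: list[dict], key: str) -> dict:
    """Apply `updates` keyed on `key`: update matching rows, insert the rest."""
    merged = dict(table)  # shallow copy keeps the operation non-destructive
    for row in updates:
        merged[row[key]] = row
    return merged

# Current silver table, keyed by campaign id (schema is illustrative).
silver = {
    "c1": {"id": "c1", "status": "active", "spend": 100},
    "c2": {"id": "c2", "status": "paused", "spend": 250},
}
# Incoming batch from bronze: one update, one brand-new key.
batch = [
    {"id": "c2", "status": "active", "spend": 300},  # updates existing row c2
    {"id": "c3", "status": "active", "spend": 50},   # inserts new row c3
]
silver = merge_upsert(silver, batch, key="id")
```

Keeping the merge non-destructive mirrors how Delta versions table state, so a bad batch can be rolled back rather than mutated in place.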
07/05/2026
Full time
Alexander Mann Solutions - Public Sector Resourcing
Senior Data Scientist (AI)
Alexander Mann Solutions - Public Sector Resourcing
On behalf of Network Rail, we are looking for a Senior Data Scientist (AI) for a 12-month contract, based hybrid in London or York.

About the role:
This role will play a key part in a new, early-stage AI programme within Network Rail, applying advanced analytics and machine learning to improve operational performance and deliver innovative, real-world AI solutions. You will build machine learning models and AI-driven products, including a new customer-facing AI assistant currently in testing, working with complex and fragmented datasets from across the rail ecosystem. Working closely with senior stakeholders, you will translate data into actionable insights that drive measurable improvements in operations, customer experience and strategic decision making. You will have the opportunity to shape and influence how AI is adopted across the organisation.

As a Senior Data Scientist (AI), your main responsibilities will be to:
- Design and build advanced machine learning models to analyse complex operational, infrastructure and performance datasets.
- Work with large, fragmented datasets from multiple sources, assessing data quality and shaping datasets for modelling and AI use cases.
- Contribute to the development of AI-driven solutions, including customer-facing tools and operational optimisation models.
- Apply predictive analytics and advanced modelling techniques to improve planning, maintenance strategies, asset optimisation and customer outcomes.
- Collaborate with data and engineering teams to evolve data structures and support scalable AI capability.
- Continuously improve data science workflows, tooling and processes, including automation and experimentation.
- Engage with senior stakeholders across the organisation, challenging thinking where needed and translating complex insights into clear, actionable outcomes.
- Support the prioritisation and governance of AI initiatives, ensuring alignment with business objectives and responsible AI practices.
- Communicate technical findings clearly through presentations, dashboards and reports.

Essential:
- Proven experience in data science, machine learning and predictive modelling in real-world environments.
- Strong programming skills in Python (or R), with experience using libraries such as Pandas and Scikit-learn, along with SQL and data visualisation tools.
- Proven ability to work with complex or imperfect datasets and deliver practical solutions.
- Strong stakeholder engagement skills, with the ability to influence and communicate at senior level.
- Experience delivering data-driven solutions from concept through to implementation.
- Excellent problem-solving abilities and a collaborative mindset.

Desirable:
- A degree (or equivalent qualification) in a STEM discipline (Science, Technology, Engineering, Mathematics) or a closely related field such as Data Science, Computer Science, or Statistics.
- Familiarity with the railway sector and its operational challenges.
- Experience working on AI-driven products (e.g. chatbots, NLP, or customer-facing AI solutions).

Please be aware that this role can only be worked within the UK and not overseas. Network Rail is an equal opportunity employer, values diversity, and welcomes applications from everyone.

Disability Confident: As a member of the Disability Confident Scheme, Network Rail guarantees to interview all candidates who have a disability and who meet all the essential criteria for the vacancy. In cases where we have a high volume of candidates with a disability who meet all the essential criteria, we will interview the best candidates from within that group. This scheme encourages candidates with a disability and/or neurodivergence to apply. In exceptional circumstances, we may also need to apply the desirable criteria in our shortlisting process, which may include holding active security clearance. 
Armed Forces Covenant: Network Rail guarantees to interview veterans or spouses/partners of military personnel who meet all the essential criteria for the vacancy. In cases where we have a high volume of ex-military candidates or military spouses/partners who meet all of the essential criteria, we will interview the best candidates from within that group. In exceptional circumstances, we may also need to apply the desirable criteria in our shortlisting process, which may include holding active security clearance.

In applying for this role, you acknowledge the following: "This role falls in scope of the Off-Payroll Working in the Public Sector legislation. Any rates of payment quoted will reflect the gross rate per day for the assignment and will be subject to appropriate taxes and statutory costs. As such, the payment to the intermediary and your income resulting from this contract will be different."
07/05/2026
Contractor
iBSC
GCP DevOps Engineer - London (Hybrid) - Inside IR35
iBSC
GCP DevOps Engineer - London (Hybrid) - Inside IR35

Title: GCP DevOps Engineer
Location: London - hybrid, 2 days per week onsite
Duration: 6-12 months rolling
IR35: Inside IR35

I'm looking for a strong GCP DevOps Engineer for a London-based client.

Summary:
This role suits someone who has strong practical GCP DevOps experience and can help build, automate, deploy and support cloud infrastructure in a production environment.

Core experience required (mandatory):
- Strong hands-on Google Cloud Platform (GCP) experience
- DevOps engineering background
- CI/CD pipeline experience
- Infrastructure as Code, ideally Terraform
- Kubernetes/container experience, ideally GKE
- Linux and scripting experience, such as Bash or Python
- Production support, troubleshooting, automation, and platform reliability experience

Nice to have:
- GCP certification, such as Professional Cloud DevOps Engineer, Cloud Architect, or Associate Cloud Engineer
- Monitoring/logging tools such as Prometheus, Grafana, Datadog, ELK, or Google Cloud Monitoring
- GCP networking, IAM, security, landing zones, or cloud governance
- SRE experience
- Wider GCP services such as Cloud Run, Compute Engine, Cloud Storage, Pub/Sub, Cloud SQL, BigQuery, or Cloud Functions
07/05/2026
Contractor
Akkodis
Remote Network Monitoring Specialist - Streaming Telemetry
Akkodis
Remote Network Monitoring Specialist - Streaming Telemetry

Salary: £70,000 - £75,000
Location: Remote
Contract: 6-month FTC

Role Overview:
Our client is looking for an experienced Network Monitoring Specialist to support a major network infrastructure rollout on a 6-month fixed-term basis. This is a hands-on role focused on designing, implementing and commissioning monitoring capability across newly deployed network and fibre infrastructure. The priority is to ensure the environment is fully visible, measurable and supportable from day one. The role would suit someone with strong experience across network observability, alerting, telemetry, dashboards, service health, performance baselining and operational handover. The client is open to different monitoring backgrounds, particularly where candidates have worked with tools such as VictoriaMetrics, Prometheus, Grafana, Nagios, Zabbix, InfluxDB, Telegraf, SolarWinds, PRTG, Datadog, Elastic, OpenTelemetry, SNMP, NetFlow/IPFIX or syslog pipelines. You will work closely with network engineering and operational teams to deliver reliable monitoring at pace within a project-led environment.

Key Responsibilities:
- Design and deploy monitoring solutions across newly delivered network infrastructure.
- Build monitoring capability that provides clear visibility of network health, performance and service availability.
- Work with monitoring and observability platforms such as VictoriaMetrics, Prometheus, Grafana, Nagios, Zabbix, InfluxDB, SolarWinds, PRTG, Datadog, Elastic or similar.
- Support metrics ingestion, retention, alerting, dashboarding and performance visibility.
- Build or support streaming telemetry pipelines to provide real-time visibility across the network.
- Implement and refine alerting workflows for service health, escalation and operational response.
- Develop dashboards and reporting views to support engineering and operational teams.
- Commission monitoring across network devices, access infrastructure and Layer 1-3 equipment.
- Define baseline performance metrics, thresholds and SLA-led alerting.
- Work closely with network and operational teams to align monitoring with changing infrastructure requirements.
- Support analytics-led monitoring for anomaly detection and predictive fault identification where relevant.
- Improve monitoring architecture, tooling, documentation and handover processes.
- Produce clear runbooks, escalation paths and operational guides.
- Support knowledge transfer into internal technical teams.

What We're Looking For:
- Previous experience in a senior network monitoring, network engineering or observability-focused role.
- Experience working in a telecoms, ISP, managed network or large-scale infrastructure environment.
- Strong understanding of network monitoring principles, including alerting, telemetry, dashboards, service health and performance baselining.
- Hands-on experience with monitoring or observability tools such as VictoriaMetrics, Prometheus, Grafana, Nagios, Zabbix, InfluxDB, Telegraf, SolarWinds, PRTG, Datadog, Elastic, OpenTelemetry or similar.
- Experience with network data sources and protocols such as streaming telemetry, gNMI, gRPC, SNMP, NetFlow/IPFIX or syslog.
- Good understanding of time-series monitoring, metrics ingestion, retention and performance visibility.
- Strong networking fundamentals across TCP/IP, BGP, OSPF, VLANs and optical or fibre environments.
- Familiarity with dashboarding, alert tuning, service health monitoring and operational reporting.
- Exposure to AI/ML-led monitoring, anomaly detection or predictive fault identification would be beneficial.
- Scripting or automation experience, such as Python or Bash, would be advantageous.
- Comfortable working independently and delivering against defined project milestones.
- Strong communication, documentation and stakeholder engagement skills.
- Proactive, detail-focused and comfortable solving problems without heavy direction.

Why Consider This Role?
This is a strong opportunity to join a business delivering a major network infrastructure programme, in a role where monitoring and observability are central to successful delivery. You will be taking ownership of a critical technical area rather than simply maintaining an existing setup. The focus is on making sure newly deployed infrastructure is properly monitored, operationally ready and reliable from day one. For someone with strong network monitoring experience, this offers a focused 6-month project where you can make a visible impact across a live network environment, using a range of modern monitoring, telemetry and observability technologies.
07/05/2026
Akkodis
Remote Network Monitoring Specialist - Streaming Telemetry
Akkodis Glasgow, Lanarkshire
07/05/2026
