Software Engineer - Power BI
Title: Software Engineer - Power BI
Contract Type: Fixed Term Contract (12 Months)
Salary: From £49,502 pa (Regional) and from £57,094 pa (Inside London), depending on experience and location.
Reporting Office: London (Stratford) or Manchester (Trafford)
Persona: Agile (20-40% in the office per week)
Closing Date: 26th April 2026
Interviews: 1st stage is a virtual interview on 5th May 2026, followed by a 2nd stage in-person interview on 13th May 2026.
Benefits include: Excellent pension plan (up to 6% double contribution), 28 days Annual Leave rising to 31 days with length of service + Bank Holidays, Westfield Health Cash Plan, non-contributory life assurance, up to 21 hours of paid volunteering days, lifestyle benefits, Employee Assistance Programme and many more.
Early applications are encouraged as we reserve the right to close the advertisement and interview earlier than stated.
Join our Software Engineering Team at L&Q:
We are building the next generation of data-driven reporting and analytics solutions — and this role puts you at the centre of it. You will develop and manage Power BI reports, dashboards, and data models that turn complex information into clear insights for the organisation.
Working as part of a collaborative agile squad, you will design and maintain enterprise-level BI solutions using Power BI, SQL, DAX, KQL and Power Query (M).
You will help shape our reporting standards, data models, and best practices — ensuring our data is trusted, consistent, and ready for self-service analytics across the business.
You will build Power BI CI/CD pipelines in Azure DevOps, improve security and governance, and implement Azure monitoring capabilities for Power BI.
You will report to the Lead Software Engineer and work closely with Data Engineers, Analysts, Architects, and Product Owners. It is a hands-on, creative role that blends technical skill with problem-solving and collaboration.
If this sounds like you, we would love for you to apply!
Your impact in the role will be to:
Design, develop, and maintain Power BI reports, dashboards, and datasets using best practices in DAX, Power Query (M), KQL and data modelling.
Build Power BI CI/CD pipelines in Azure DevOps for version control.
Build and optimise reusable data models to support enterprise-level reporting and self-service analytics.
Develop SQL queries, stored procedures, and views to support analytical and operational reporting needs.
Implement and manage Power BI Service administration, including workspace permissions, data refreshes, and row-level security.
Collaborate with engineers, architects, and analysts to design efficient data solutions that integrate with the wider enterprise data platform.
Write clean, scalable, and well-documented code while applying engineering best practices through peer reviews and agile ceremonies.
Troubleshoot issues, analyse root causes, and deliver effective technical solutions.
Communicate complex technical concepts clearly to both technical and non-technical stakeholders.
Stay up to date with emerging BI technologies, tools, and practices to continuously improve reporting capabilities.
Share knowledge and support capability growth across the teams.
By creating robust Power BI solutions and collaborating closely with technical and business teams, you will help drive a culture of data-led insight across the organisation.
What you'll bring:
Strong hands-on experience designing and developing Power BI reports, dashboards, and data models.
Advanced knowledge of DAX, Power Query (M), KQL and Power BI Service administration.
Advanced knowledge of Azure DevOps (CI/CD, version control, automated deployment) for Power BI.
Strong SQL expertise, including writing complex queries, stored procedures, and CTEs, with experience in query optimisation, performance tuning, and designing data models for relational databases.
Good understanding of relational databases, data warehousing concepts, and dimensional modelling (Kimball or similar).
Experience collaborating within agile Scrum delivery teams throughout the SDLC and contributing to iterative development cycles. Familiarity with best practices for credential management in a highly secure environment using DevSecOps.
Effective communication and documentation skills, with the ability to present technical concepts to non-technical audiences.
Passion for continuous learning and staying current with Power BI, Azure, and wider data technology trends.
Desirable:
Exposure to enterprise-level reporting governance, security, and performance optimisation practices.
Familiarity with Azure Data Services such as Azure SQL, Data Factory, Log Analytics, Azure Monitor and Synapse Analytics.
Integration with Microsoft tools (SharePoint, Power Apps, Teams)
Exposure to RESTful APIs
Understanding of data integration design, data repositories, and master data management (MDM) tools such as Semarchy.
Exposure to Unit4 and NEC Housing Management Systems (advantageous).
About L&Q:
We’re one of the UK’s leading housing associations and developers. We were founded on a simple belief: high quality housing is vital for people’s health, happiness and security. Everyone deserves a quality home that gives them the chance to live a better life.
250,000 people call our properties ‘home’, and we’re proud to serve diverse communities across London, the South East and North West of England.
At L&Q, people are at the heart of our business and our success depends on employing the best people and getting the best from them. The foundation of everything that we are is built on our corporate values and behavioural framework, which outlines our core expectations and should be demonstrated at all times, and at all levels, when representing L&Q.
L&Q strongly believes a diverse and inclusive workforce is important, and inclusion is part of our core values and everyday working practices. We make hiring decisions based on your experiences, skills and merits, and we are recognised externally for our commitment to inclusion. We are a Stonewall Diversity Champion, a Disability Confident (Committed) employer and have signed the Time to Change Employer Pledge to demonstrate our commitment to ending mental health discrimination in the workplace.
At L&Q, sustainability is at the heart of what we do. We recognise the responsibility we hold as one of the UK’s largest housing associations.
#TJ
16/04/2026
Contractor
We're sourcing a Cloud Security Engineer (Azure / Terraform) for a 6-month UK-based contract (fully remote). This role is ideal for a hands-on Azure / Terraform contractor who can quickly take ownership and deliver secure, production-grade infrastructure through code.
You'll be embedded in a modern Azure environment, leading security engineering initiatives with a strong focus on Terraform-first delivery. Expect to spend the majority of your time writing clean, reusable modules, embedding security controls, and pushing everything through CI/CD.
Key deliverables:
Build and enforce secure Azure infrastructure using Terraform (modular, scalable, production-ready)
Implement perimeter security with Azure Front Door + WAF (OWASP, bot protection, rule tuning)
Define governance using Azure Policy as code (networking, firewall, compliance controls)
Secure AKS workloads with container scanning and runtime protections
Design and roll out Conditional Access and identity protections in Microsoft Entra ID (P2)
Harden Azure DevOps pipelines using managed identities and least-privilege principles
Drive risk visibility and remediation via Microsoft Defender for Cloud
You'll suit this contract if you're a strong coder with Terraform, security-focused by default, and comfortable delivering autonomously without hand-holding.
22/04/2026
Contractor
Junior Data Engineer - Public Sector
Contract: Initial 7 months (extension possible)
Rate: £310 per day, Inside IR35
Location: Remote with travel to Waterloo (2-3 days per month)
Security Clearance: SC-eligible (5 years UK residency required)
I am working with a key consultancy delivering a major UK public sector programme, who are looking for a Junior Data Engineer / Scientist to join a mixed delivery team building and operating secure, reliable data platforms that support critical public services. This role is designed for someone early in their data career who wants to develop strong engineering fundamentals in a real production environment.
The role: a junior, generalist data engineering position. This is an engineering-led role, not a specialist or senior position. The team is ideally looking for a generalist in their first few years within data engineering or data science, who is building breadth across data platforms, pipelines and operations.
You'll focus on:
Designing, building and maintaining data pipelines
Supporting the operation of data lakes and data warehouses
Implementing and improving ETL / ELT processes
Using Python and SQL to transform, validate and move data
Working with analysts and developers to turn data requirements into technical solutions
Monitoring data quality, documenting data models and lineage, and resolving issues
Automating data workflows and operational tasks
Participating in Agile delivery, sprint work and collaboration
Supporting incidents and helping improve platform reliability over time
Working within public sector data governance, security and privacy standards
This role offers exposure to how data platforms are built, operated and supported in a regulated environment - forming the foundations of a long-term data engineering career.
What this role is not:
To avoid misalignment, it's important to be clear about what this role is not focused on:
Not a Data Analyst role
Not a Power BI / dashboard developer role
Not an insight, reporting or MI position
Not a modelling, ML or research-focused role
Not an LLM, AI or advanced data science role
While you may work alongside analysts and data scientists, this role does not centre on:
Building dashboards
Producing insights or reports
Statistical modelling
Predictive or machine learning solutions
The emphasis is on data engineering foundations and platform delivery.
Ideal candidate profile. This role is best suited to someone who:
Is in their first few years of a data engineering or data science career
Wants to build core engineering skills rather than specialise immediately
Has hands-on experience with SQL and Python
Understands basic data modelling and ETL concepts
Is comfortable learning through delivery in a production environment
Is interested in how data platforms work end-to-end, including operations and support
Is keen to grow within public sector data platforms
Who this role is unlikely to suit. This role is unlikely to be appropriate for candidates who:
Are very senior data engineers or architects
Have primarily worked in advanced ML, AI, or research-focused roles
Are specialised Power BI, reporting or MI developers
Are looking for a role centred on analysis, insights or modelling
Are seeking leadership, ownership of platform strategy, or advanced optimisation work
Applications that demonstrate significant seniority or deep specialisation rather than junior-to-mid generalist experience may not be progressed.
Required skills and experience. Your CV should clearly demonstrate:
A degree in a technical discipline (Computer Science, Data Science, Mathematics or similar)
Hands-on experience with SQL
Experience using Python, Java or Bash
Understanding of ETL processes and data modelling fundamentals
Experience with version control (e.g. Git)
Comfort working in Agile / DevOps environments
Awareness of data security and privacy
Eligibility for UK SC clearance
Nice to have (but not essential):
Exposure to AWS, Azure or GCP
Familiarity with tools such as Airflow, dbt, Spark
Awareness of CI/CD pipelines or containerisation
Experience in public sector or regulated environments
Important note for applicants: This role is deliberately positioned as a junior, generalist data engineering role. Please ensure your CV clearly demonstrates hands-on data engineering fundamentals, rather than senior leadership, advanced AI/ML work, or analytics-only experience.
Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers, which can be found at hays.co.uk
22/04/2026
Contractor
DevOps Engineer - SC Cleared - Permanent - Flexible - AWS - Terraform
At Peregrine, we're always seeking Specialist Talent with the ideal mix of skills, experience, and attitude to place with our vast array of clients. From Business Analysts in large government organisations to Software Developers in the private sector, we are always in search of the best talent to place, now.
The role: We are seeking an SC cleared DevOps Engineer to work as a forward deployed engineer, embedded within the Cyber Capability Unit. The role will support the design, build and deployment of AI-powered solutions that strengthen cyber security and fraud prevention capabilities. You will work closely with engineers, product owners and stakeholders to understand operational needs, develop prototypes and deploy secure, reliable solutions within approved platforms and environments. This role directly supports the Cyber Resilience Centre's mission and contributes to the wider security strategy by delivering practical, governed AI solutions that provide measurable operational value.
Responsibilities:
Cloud and Platform Integration:
Design and deploy solutions in AWS cloud environments
Use infrastructure as code to ensure repeatable and compliant deployments
Ensure all solutions meet organisational governance, security and compliance standards
CI/CD and Automation:
Configure, manage and maintain GitLab CI pipelines
Automate testing, build and deployment of infrastructure, applications and services
Promote best practice DevOps ways of working across environments
Testing and Quality:
Implement unit, integration and performance testing for all components
Ensure solutions are reliable, reproducible and stable across releases
Support continuous improvement of testing practices
Monitoring and Incident Response:
Implement observability and monitoring tooling
Track system performance and detect anomalies
Support incident response, troubleshooting and root cause analysis in live environments
Collaboration and Delivery:
Work closely with engineers, analysts and stakeholders
Translate requirements into working technical solutions
Support deployment, handover and ongoing optimisation of delivered capabilities
Skills & Experience:
Active SC clearance
Strong experience deploying and operating solutions in AWS
Infrastructure as code using Terraform
CI/CD pipeline development using GitLab CI
Experience with monitoring, logging and alerting tools
Understanding of secure DevOps practices in regulated environments
Experience working with large data stores or big data platforms
Desirable skills:
Experience supporting AI or data-driven platforms
Knowledge of cyber security or fraud prevention domains
Experience working within government or critical national infrastructure environments
About Peregrine: We build workforces that deliver tech and change programmes at leading UK organisations. By combining data science from Peregrine Intelligence, our industry-accredited Peregrine Academy, and market-leading attraction and diversity initiatives, we bridge capability gaps at all levels in public and private sector organisations. We work closely with our clients to understand their challenges and deliver flexible, long-term solutions that make a real difference. When you join Peregrine, you become part of a team that's focused on growth: yours, our clients', and the sectors we support. You'll also get access to a full range of benefits alongside your salary.
How Specialist Talent Works: As a permanent employee at Peregrine, you'll be part of our Specialist Talent team. That means you'll work on-site or remotely with our clients, supporting them on complex, high-impact projects in Data, Digital and Business Transformation. You'll get the variety and challenge of consultancy work, with the stability and support of a permanent role. You're not a contractor - you're a valued member of our team, with access to all the same benefits, learning opportunities, and community. Find out more: peregrine.global or check out our LinkedIn page: peregrine-resourcing
22/04/2026
Full time
DevOps Engineer SC cleared Permanent Flexible AWS Terraform SC Cleared At Peregrine, we re always seeking Specialist Talent that have the ideal mix of skills, experience, and attitude, to place with our vast array of clients. From Business Analysts in large government organisations to Software Developers in the private sector we are always in search of the best talent to place, now. The role: We are seeking an SC cleared DevOps Engineer to work as a forward deployed engineer, embedded within the Cyber Capability Unit. The role will support the design, build and deployment of AI powered solutions that strengthen cyber security and fraud prevention capabilities. You will work closely with engineers, product owners and stakeholders to understand operational needs, develop prototypes and deploy secure, reliable solutions within approved platforms and environments. This role directly supports the Cyber Resilience Centre s mission and contributes to the wider security strategy by delivering practical, governed AI solutions that provide measurable operational value. 
Responsibilities: Cloud and Platform Integration æ Design and deploy solutions in AWS cloud environments æ Use infrastructure as code to ensure repeatable and compliant deployments æ Ensure all solutions meet organisational governance, security and compliance standards CI/CD and Automation æ Configure, manage and maintain GitLab CI pipelines æ Automate testing, build and deployment of infrastructure, applications and services æ Promote best practice DevOps ways of working across environments Testing and Quality æ Implement unit, integration and performance testing for all components æ Ensure solutions are reliable, reproducible and stable across releases æ Support continuous improvement of testing practices Monitoring and Incident Response æ Implement observability and monitoring tooling æ Track system performance and detect anomalies æ Support incident response, troubleshooting and root cause analysis in live environments Collaboration and Delivery æ Work closely with engineers, analysts and stakeholders æ Translate requirements into working technical solutions æ Support deployment, handover and ongoing optimisation of delivered capabilities Skills & Experience: æ Active SC clearance æ Strong experience deploying and operating solutions in AWS æ Infrastructure as code using Terraform æ CI/CD pipeline development using GitLab CI æ Experience with monitoring, logging and alerting tools æ Understanding of secure DevOps practices in regulated environments æ Experience working with large data stores or big data platforms Desirable skills: æ Experience supporting AI or data driven platforms æ Knowledge of cyber security or fraud prevention domains æ Experience working within government or critical national infrastructure environments About Peregrine We build workforces that deliver tech and change programmes at leading UK organisations. 
By combining data science from Peregrine Intelligence, our industry-accredited Peregrine Academy, and market-leading attraction and diversity initiatives, we bridge capability gaps at all levels in public and private sector organisations. We work closely with our clients to understand their challenges and deliver flexible, long-term solutions that make a real difference. When you join Peregrine, you become part of a team that's focused on growth - yours, our clients', and the sectors we support. You'll also get access to a full range of benefits alongside your salary. How Specialist Talent Works: As a permanent employee at Peregrine, you'll be part of our Specialist Talent team. That means you'll work on-site or remotely with our clients, supporting them on complex, high-impact projects in Data, Digital and Business Transformation. You'll get the variety and challenge of consultancy work, with the stability and support of a permanent role. You're not a contractor - you're a valued member of our team, with access to all the same benefits, learning opportunities, and community. Find out more: peregrine.global or check out our LinkedIn page: peregrine-resourcing
Cloud Platform Engineering Lead - Azure/Platform Ownership/IaC A leading London Market Insurance organisation is seeking an experienced Cloud Platform Engineering Lead to take full ownership of its cloud platform strategy and engineering capability. This is a newly created, high-impact position within a modern, forward-thinking technology environment, offering the autonomy to shape cloud best practice and drive platform excellence across the organisation. You'll work closely with the Enterprise Architect, senior technology stakeholders, and cross-functional engineering teams to design, build, and evolve a secure, scalable, and automated Azure cloud estate. This is a hands-on technical leadership role - not people management - ideal for someone who thrives on ownership, architectural thinking, and deep technical problem solving. You'll be a great fit for this role if: You have extensive experience designing and operating cloud environments, ideally within the insurance sector You bring years of hands-on cloud engineering expertise, primarily across Azure You've built and owned cloud platforms using Infrastructure as Code, including life cycle management of Terraform or similar tooling You have a strong background in platform engineering, cloud architecture, automation, and modern DevOps practices You have a deep technical understanding of the platforms you own This is a rare opportunity to take end-to-end ownership of a critical cloud domain, influence engineering standards, and help shape the future of cloud capability within a growing and ambitious organisation. This is a permanent opportunity paying £90k-£110k + Excellent Bonus, requiring 3 days a week onsite in central London.
22/04/2026
Full time
Lynx Recruitment are partnered with a leading global consultancy to source an experienced Cloud Security Engineer to join a high-performing cloud and cybersecurity team. This is an exciting opportunity to work on enterprise-scale cloud environments, driving security best practices and implementing cutting-edge cloud-native application protection solutions. The Role You will play a key role in designing, implementing, and managing cloud security controls across large-scale AWS environments, with a strong focus on policy-as-code and automation. Key Responsibilities Implement and manage CNAPP policies using Wiz for continuous cloud posture assessment and remediation Develop and maintain policy-as-code frameworks using OPA/Rego Integrate security controls into Infrastructure-as-Code (IaC) workflows using Terraform Collaborate closely with DevOps and Cyber Security teams to remediate non-compliant resources Monitor and enhance cloud governance and policy effectiveness Embed security into the SDLC through CI/CD pipelines (eg GitLab Runners), including vulnerability scanning and compliance checks Key Requirements Strong experience with AWS (essential) Hands-on experience with Wiz (including custom rule development, graph rules, or configuration policies) Expertise in OPA/Rego for policy-as-code Proven experience with Terraform for infrastructure and security automation Scripting experience (Python, Bash, or PowerShell) Experience working within DevSecOps environments and implementing shift-left security Degree in an IT or technology-related subject with a minimum of a 2:1 (or equivalent)
22/04/2026
Full time
NetDevOps Engineer - SC Cleared - £650 to £700 PD - Inside IR35 A leading government organisation is hiring a NetDevOps Engineer with the ability to design, build and evolve a large-scale enterprise network platform, with a strong focus on modern, automated and cloud-integrated infrastructure. Our client is seeking someone to be based in Blackpool on a hybrid basis. As our client continues to modernise, they require a consultant with strong capabilities in network automation using Python, Ansible, Terraform and CI/CD, with a background in Network & Infrastructure Engineering. Key Responsibilities: Design, implement, and enhance enterprise network infrastructure across data centre, hybrid, and cloud environments Engineer and implement network traffic flows to support business-critical services Build and maintain secure hybrid connectivity across Azure, AWS, and OCI Implement and manage Palo Alto Firewall policies across on-prem and cloud environments, aligned to Zero Trust principles Design and operate high-availability network services, including routing, segmentation, and resilience Develop and maintain network automation using tools such as Python, Ansible, and Infrastructure as Code Collaborate with architecture and platform teams to ensure solutions align with engineering standards and strategic direction Contribute immediately to delivery work, demonstrating the ability to operate with minimal ramp-up Document designs and changes clearly and consistently, supporting maintainability and knowledge sharing Nice to Have: Aruba Central/ClearPass SD-WAN technologies SaaS and cloud-delivered WLAN/WiFi solutions Prior experience modernising legacy network environments One-stage interview via MS Teams, to start ASAP.
22/04/2026
Contractor
Lead Azure DevOps Engineer Leicester (3 days per week onsite) | Relocation Package Available (UK ONLY) £65,000 - £75,000 + Bonus | Excellent Benefits | No Sponsorship Available VIQU have partnered with a rapidly growing, technology-driven retail organisation that is investing heavily in its digital and eCommerce platforms. With a strong focus on Azure cloud and modern engineering practices, they are looking for a Lead Azure DevOps Engineer to take ownership of their DevOps function and drive cloud-first transformation. This is a key leadership role where you'll shape DevOps strategy, tooling, and best practice across a high-performing engineering team supporting large-scale, customer-facing platforms. Key Responsibilities: Lead the design and evolution of Azure-based cloud infrastructure Own and develop Infrastructure as Code using Terraform Build, maintain, and optimise CI/CD pipelines with Azure DevOps Collaborate with development teams working across .NET, Node.js, and React Drive containerisation strategy using Docker and Kubernetes Implement robust monitoring, logging, and alerting solutions Embed DevSecOps principles across the development life cycle Key Requirements: Strong experience within Azure cloud environments Proven expertise with Terraform and Infrastructure as Code Hands-on experience with Azure DevOps CI/CD pipelines Experience with Docker and Kubernetes in production Exposure to modern development stacks (.NET, Node.js, React) Strong understanding of monitoring and observability tooling Previous experience in a Lead or Senior DevOps role This is a fantastic opportunity to take real ownership in a business undergoing significant digital transformation, with the chance to influence architecture, tooling, and engineering standards at scale. Apply now to speak with VIQU IT in confidence. Or contact Aaron Chiverton on (see below). Know someone great? Refer them and receive up to £1,000 if successful (terms apply).
22/04/2026
Full time
Lead Application Security Engineer Bristol or London - 3 days a week on site £100,000 + great benefits An impressive financial services business is looking to hire a Lead Application Security Engineer to support its team with risk and remediation activities. This business is going through a big technology transformation programme that is estimated to take 3-5 years. The successful Lead Application Security Engineer will be part of this journey and have great technical exposure and the ability to rapidly progress. Working on one of the transformation projects, the successful Lead Application Security Engineer will work closely with the wider security and technology teams to define the strategy and roadmap of technology changes moving forward. This is very much a player-manager role, with the Lead Application Security Engineer being hands-on day to day but also providing support and guidance to the rest of the AppSec team. Lead Application Security Engineer - Duties and Responsibilities The successful Lead Application Security Engineer will have responsibilities covering: Team Leadership Support the existing team, providing mentoring and fostering a collaborative team environment Take a pragmatic, risk-based approach to supporting the wider technology teams with the SDLC Foster strong relationships with engineering, architecture, platform and platform management to provide practical, risk-appropriate guidance Set the priorities for the AppSec team to make sure that the delivery of the AppSec services is impactful Application Security Technical Authority Act as the SME for application security in the business and ensure that security controls are adopted early into the CI/CD pipelines Own and run the DAST, SAST and other AppSec tooling to ensure effective coverage across all in-scope applications Create, roll out and maintain secure development practices and standards, including threat modelling and secure coding practices for all applications and APIs Collaborate with the Vulnerability Engineering Lead to support identification, triage, and remediation programmes in alignment with risk appetite, appropriate prioritisation and agreed SLAs Lead Application Security Engineer - Your Background The ideal Lead Application Security Engineer will have: Experience in a similar role, in both responsibility and scale Proven experience in Software Security Development or Application Security Proven experience in leading/coaching a team Hands-on experience with implementing and operating AppSec tooling eg SAST and DAST, secrets management, and SCA Extensive experience of integrating security into the CI/CD pipeline eg using AWS DevOps or GitHub Strong history of secure coding practices, threat modelling and vulnerability management in production Strong understanding of modern software development practices If this sounds like the role for you, hit the apply button NOW! We invite individuals from underrepresented groups to apply for any of our roles and are committed to supporting accessibility needs. Please click the apply button now or contact Abigail Moss for more information
22/04/2026
Full time
SC Cleared Observability Consultant: Dynatrace, Splunk, Cloud, ITSM, Clearance - (RL8136) Our Global Enterprise client is looking for an SC Cleared Enterprise Observability Consultant with an in-depth understanding of Observability platforms and technologies, ranging from vendor-specific products eg Dynatrace, Splunk, Grafana, Cribl etc. to open-source Observability projects eg OpenTelemetry, Prometheus, Grafana OSS etc. You will be responsible for providing Observability platform delivery expertise to deliver advisory, design & implementation services that meet our customers' business requirements within their overall observability strategy. The role will also involve staying at the forefront of new technologies and new vendors, working within the Enterprise Observability Practice. Start Date: 5th May 2026 Duration: 115 days (initially) Pay Rate: £347 p/d (PLEASE NOTE: Employer NI is paid for by the client) Total Daily Earnings: £425 p/d (includes rolled up holiday) IR35 Status: Inside Location: Hybrid (some travelling involved) Clearance: SC Clearance is highly desirable Responsibilities: Observability Strategy & Advisory Lead discovery workshops to assess observability maturity and define tailored roadmaps aligned to business and IT objectives Assess current monitoring and observability maturity for Enterprise Organisations & recommend tooling strategies, often leveraging platforms like Dynatrace for full-stack visibility Translate business and technical requirements into actionable observability use cases to support change management and enablement initiatives Advise on tools, platforms, and best practices (eg OpenTelemetry, SIEM vs Observability, Telemetry Management, SRE principles) Architecture & Solution Design Design end-to-end observability architectures, including logs, metrics, traces, profiles etc., distributed tracing frameworks/APM tooling, infrastructure & cloud monitoring, synthetic and real user monitoring Create telemetry data pipelines and 
instrumentation strategies Ensure scalable, secure, and cost-efficient observability patterns Tooling Implementation Deploy and configure observability platforms such as Dynatrace, Splunk, Grafana Cloud, Cribl, Elastic Implement OpenTelemetry collectors, agents, and SDK instrumentation strategies Build dashboards, alerts, and automation workflows Integrate Observability platforms with ITSM, AIOps, Event Management platforms Troubleshooting & Performance Engineering Analyse application, infrastructure, and network performance issues Lead root cause analysis and performance optimisation initiatives Enable proactive detection through anomaly detection and alert tuning Technical Skills: 10+ years in consulting, enterprise design, and implementation roles Expertise in observability frameworks, telemetry pipelines, and service mesh integrations Deep understanding of observability pillars: metrics, logs, traces, and user experience Expert-level familiarity with products such as Dynatrace, Splunk, Grafana Cloud, Cribl (experience with at least two product sets) Strong understanding of Observability platform architecture, including Telemetry Storage, OpenTelemetry support, and cloud integrations Experience with Dynatrace/Splunk/Grafana APIs, tagging strategies, and problem detection workflows Proficiency in scripting (Python, Bash) and automation tools (Terraform, Ansible) Strong stakeholder engagement and communication skills Desirable: Professional-level certifications in Observability products/OpenTelemetry Associate Certification/Prometheus Associate Certification Familiarity with DevOps and Platform engineering ways of working with associated tools (CI/CD, git, automation etc.) Working-level understanding of Cloud/Cloud Native Observability technologies (AWS CloudWatch, Azure Monitor, eBPF, Prometheus etc.) Good understanding of networking principles related to Observability protocols (Syslog, SNMP, OTLP etc.) 
Experience integrating Observability platforms with ITSM and alerting platforms Cloud/CNCF certifications To apply for this SC Cleared Observability Consultant contract job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications, however this may not always be possible during periods of high volume. Thank you for your patience. Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
22/04/2026
Contractor
SC Cleared Observability Consultant: Dynatrace, Splunk, Cloud, ITSM, Clearance - (RL8136)

Our Global Enterprise client is looking for an SC Cleared Enterprise Observability Consultant with an in-depth understanding of Observability platforms and technologies, ranging from vendor-specific products (e.g. Dynatrace, Splunk, Grafana, Cribl) to open-source Observability projects (e.g. OpenTelemetry, Prometheus, Grafana OSS). You will be responsible for providing Observability platform delivery expertise, delivering advisory, design and implementation services that meet our customers' business requirements within their overall observability strategy. The role will also involve staying at the forefront of new technologies and new vendors, working within the Enterprise Observability Practice.

Start Date: 5th May 2026
Duration: 115 days (initially)
Pay Rate: £347 p/d (PLEASE NOTE: Employer NI is paid for by the client)
Total Daily Earnings: £425 p/d (includes rolled-up holiday)
IR35 Status: Inside
Location: Hybrid (some travelling involved)
Clearance: SC Clearance is highly desirable

Responsibilities:

Observability Strategy & Advisory
- Lead discovery workshops to assess observability maturity and define tailored roadmaps aligned to business and IT objectives
- Assess current monitoring and observability maturity for enterprise organisations and recommend tooling strategies, often leveraging platforms like Dynatrace for full-stack visibility
- Translate business and technical requirements into actionable observability use cases to support change management and enablement initiatives
- Advise on tools, platforms, and best practices (e.g. OpenTelemetry, SIEM vs Observability, telemetry management, SRE principles)

Architecture & Solution Design
- Design end-to-end observability architectures, including logs, metrics, traces and profiles; distributed tracing frameworks/APM tooling; infrastructure and cloud monitoring; synthetic and real user monitoring
- Create telemetry data pipelines and instrumentation strategies
- Ensure scalable, secure, and cost-efficient observability patterns

Tooling Implementation
- Deploy and configure observability platforms such as Dynatrace, Splunk, Grafana Cloud, Cribl, Elastic
- Implement OpenTelemetry collectors, agents, and SDK instrumentation strategies
- Build dashboards, alerts, and automation workflows
- Integrate Observability platforms with ITSM, AIOps and Event Management platforms

Troubleshooting & Performance Engineering
- Analyse application, infrastructure, and network performance issues
- Lead root cause analysis and performance optimisation initiatives
- Enable proactive detection through anomaly detection and alert tuning

Technical Skills:
- 10+ years in consulting, enterprise design, and implementation roles
- Expertise in observability frameworks, telemetry pipelines, and service mesh integrations
- Deep understanding of observability pillars: metrics, logs, traces, and user experience
- Expert-level familiarity with products such as Dynatrace, Splunk, Grafana Cloud, Cribl (experience with at least two product sets)
- Strong understanding of Observability platform architecture, including telemetry storage, OpenTelemetry support, and cloud integrations
- Experience with Dynatrace/Splunk/Grafana APIs, tagging strategies, and problem detection workflows
- Proficiency in scripting (Python, Bash) and automation tools (Terraform, Ansible)
- Strong stakeholder engagement and communication skills

Desirable:
- Professional-level certifications in Observability products, OpenTelemetry Associate Certification or Prometheus Associate Certification
- Familiarity with DevOps and Platform Engineering ways of working and associated tools (CI/CD, git, automation etc.)
- Working-level understanding of cloud/cloud-native Observability technologies (AWS CloudWatch, Azure Monitor, eBPF, Prometheus etc.)
- Good understanding of networking principles related to Observability protocols (Syslog, SNMP, OTLP etc.)
- Experience integrating Observability platforms with ITSM and alerting platforms
- Cloud/CNCF certifications

To apply for this SC Cleared Observability Consultant contract job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience. Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
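For candidates unfamiliar with the OpenTelemetry collector deployment work the listing above describes, a minimal collector pipeline configuration might look like the following sketch. The ports are the standard OTLP defaults; the choice of the debug exporter is an illustrative assumption for local verification, not anything specified by the client.

```yaml
# Minimal OpenTelemetry Collector config: receive OTLP, batch, print to stdout.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # standard OTLP/gRPC port
      http:
        endpoint: 0.0.0.0:4318   # standard OTLP/HTTP port

processors:
  batch: {}                      # batch telemetry to reduce export overhead

exporters:
  debug: {}                      # stdout exporter, useful when validating a pipeline

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```

In a production deployment the debug exporter would typically be swapped for an OTLP or vendor exporter (Dynatrace, Splunk, etc.) and additional processors added for tagging and filtering.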
DevOps Engineer - SC Cleared - Permanent - Flexible - AWS - Terraform

At Peregrine, we're always seeking Specialist Talent with the ideal mix of skills, experience, and attitude to place with our vast array of clients. From Business Analysts in large government organisations to Software Developers in the private sector, we are always in search of the best talent to place, now.

The role: We are seeking an SC cleared DevOps Engineer to work as a forward deployed engineer, embedded within the Cyber Capability Unit. The role will support the design, build and deployment of AI-powered solutions that strengthen cyber security and fraud prevention capabilities. You will work closely with engineers, product owners and stakeholders to understand operational needs, develop prototypes and deploy secure, reliable solutions within approved platforms and environments. This role directly supports the Cyber Resilience Centre's mission and contributes to the wider security strategy by delivering practical, governed AI solutions that provide measurable operational value.

Responsibilities:

Cloud and Platform Integration
- Design and deploy solutions in AWS cloud environments
- Use infrastructure as code to ensure repeatable and compliant deployments
- Ensure all solutions meet organisational governance, security and compliance standards

CI/CD and Automation
- Configure, manage and maintain GitLab CI pipelines
- Automate testing, build and deployment of infrastructure, applications and services
- Promote best-practice DevOps ways of working across environments

Testing and Quality
- Implement unit, integration and performance testing for all components
- Ensure solutions are reliable, reproducible and stable across releases
- Support continuous improvement of testing practices

Monitoring and Incident Response
- Implement observability and monitoring tooling
- Track system performance and detect anomalies
- Support incident response, troubleshooting and root cause analysis in live environments

Collaboration and Delivery
- Work closely with engineers, analysts and stakeholders
- Translate requirements into working technical solutions
- Support deployment, handover and ongoing optimisation of delivered capabilities

Skills & Experience:
- Active SC clearance
- Strong experience deploying and operating solutions in AWS
- Infrastructure as code using Terraform
- CI/CD pipeline development using GitLab CI
- Experience with monitoring, logging and alerting tools
- Understanding of secure DevOps practices in regulated environments
- Experience working with large data stores or big data platforms

Desirable skills:
- Experience supporting AI or data-driven platforms
- Knowledge of cyber security or fraud prevention domains
- Experience working within government or critical national infrastructure environments

About Peregrine: We build workforces that deliver tech and change programmes at leading UK organisations. By combining data science from Peregrine Intelligence, our industry-accredited Peregrine Academy, and market-leading attraction and diversity initiatives, we bridge capability gaps at all levels in public and private sector organisations. We work closely with our clients to understand their challenges and deliver flexible, long-term solutions that make a real difference. When you join Peregrine, you become part of a team that's focused on growth: yours, our clients', and the sectors we support. You'll also get access to a full range of benefits alongside your salary.

How Specialist Talent Works: As a permanent employee at Peregrine, you'll be part of our Specialist Talent team. That means you'll work on-site or remotely with our clients, supporting them on complex, high-impact projects in Data, Digital and Business Transformation. You'll get the variety and challenge of consultancy work, with the stability and support of a permanent role. You're not a contractor - you're a valued member of our team, with access to all the same benefits, learning opportunities, and community. Find out more: peregrine.global or check out our LinkedIn page: peregrine-resourcing
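For the GitLab CI pipeline work described in the listing above, a minimal pipeline definition might look like the following sketch. The stage names, the pinned Terraform image, and the manual-approval gate are illustrative assumptions rather than details from the role.

```yaml
# Minimal .gitlab-ci.yml sketch: validate, plan, and (manually) apply Terraform.
stages:
  - validate
  - plan
  - apply

default:
  image: hashicorp/terraform:1.7   # pinned image version is an assumption

validate:
  stage: validate
  script:
    - terraform init -backend=false
    - terraform validate

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - plan.tfplan                # pass the saved plan to the apply stage

apply:
  stage: apply
  when: manual                     # require human approval before changing infrastructure
  script:
    - terraform init
    - terraform apply plan.tfplan
```

The saved-plan-plus-manual-gate pattern is a common way to meet the "repeatable and compliant deployments" requirement in regulated environments, since the change that is applied is exactly the change that was reviewed.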
21/04/2026
Full time
Senior Java Engineer - Modern Stack - TDD & CI/CD - Complex Systems

Remote First (office visit once a month) - you must live within 1hr 30 of South Manchester
£60,000 - £70,000 + Bonus + Excellent Benefits
My client is not able to provide sponsorship.

We're working with a long-established tech company who are continuing to modernise a complex platform within a highly regulated domain. They've built a strong engineering culture around Agile and XP practices, and they're now looking for a Senior Java Engineer to join one of their Scrum teams. This is a role for someone who is genuinely hands-on, enjoys solving tricky problems, and cares about building software the right way: clean code, testing, collaboration, and continuous improvement.

The Opportunity: You'll join a cross-functional team working on large-scale systems that have real-world impact. Engineering standards are taken seriously here, but it's not dogmatic: it's practical, delivery-focused, and built around doing things sustainably. The Senior role is almost entirely hands-on, but they're looking for someone with the experience and maturity to:
- mentor other engineers
- lead by example
- contribute to good engineering practices
- help the team deliver reliably

Tech Stack & Practices: The core platform is Java-based, supported by a modern mix of tooling:
- Java, Spring Boot
- TDD / automated testing
- CI/CD and modern delivery pipelines
- AWS (including serverless approaches in places)
- Docker, Kubernetes
- Infrastructure as Code (Terraform, etc.)
You don't need to tick every box: strong Java plus good engineering habits are the priority. Exposure to AWS and DevOps tooling is a nice bonus.

What They're Looking For:
- Strong experience building backend systems with Java & Spring Boot
- Comfortable working with TDD and modern engineering practices
- Experience delivering production software in Agile teams
- Someone who enjoys mentoring and helping others grow
- Collaborative mindset: pairing, code reviews, shared ownership
- Bonus points for AWS, Docker/Kubernetes, Terraform, etc.

What's In It For You?
- Remote-first working with minimal office travel
- Strong salary, bonus and excellent benefits
- A genuinely good engineering culture (not just "Agile" on paper)
- Meaningful work, complex systems, and long-term platform thinking
- Plenty of room to learn and grow

Interested? Apply now or get in touch for more info - even if you don't have a CV ready, we're happy to chat. Cathcart Technology is acting as an Employment Agency in relation to this vacancy.
21/04/2026
Full time
We are working with a global healthcare and insurance organisation who are making a real difference to people's lives. We require an experienced Senior Data Platform Engineer to join the AI and Data Platform teams.

£100,000 + Bonus + Excellent Benefits
Fully remote with occasional travel to one of their offices.

You will contribute to the design and operation of a scalable, secure enterprise data platform supporting advanced analytics and business intelligence in a healthcare and insurance setting. You'll work with high autonomy, mentor junior engineers, and drive technical excellence while ensuring compliance and performance. This is a key role in shaping a robust, automated data platform that powers better patient care and smarter insurance services.

Please note: this is a Platform Engineering role rather than a Data Engineering position. We welcome applications from data engineers who also bring strong platform engineering experience - for example, working with IaC, Terraform, or similar tooling.

Role:
- Contribute to the design and delivery of robust, automated, and scalable Azure and Snowflake data platform components.
- Develop and maintain infrastructure-as-code using Terraform, ensuring consistency and reusability across environments.
- Build and optimise CI/CD pipelines using Azure DevOps and GitHub Actions to support rapid, reliable deployments.
- Implement observability practices including logging, metrics, and alerting using observability tools.
- Collaborate with the Lead Engineer and Architects to align implementation with platform standards and patterns.
- Provide technical guidance and mentorship to mid-level engineers, promoting best practices in automation and monitoring.

Key Skills / Qualifications needed for this role:
- Extensive experience in platform engineering, with a strong emphasis on Azure-based data solutions.
- Expert-level knowledge of Azure and/or Snowflake services, including Data Factory, Data Lake, Azure ML, and Power BI/Fabric.
- Proven experience with infrastructure-as-code using Terraform and building CI/CD pipelines via Azure DevOps and GitHub Actions.
- Strong grasp of observability practices, including logging, metrics, alerting, and performance optimisation.
- Deep understanding of cloud security, with experience applying secure-by-design principles in Azure and/or Snowflake (e.g., network isolation, IAM, data protection).
- Proficiency in scripting and automation using PowerShell, Bash, or Python.
- Collaborative mindset, with a proven track record of working effectively across engineering, data science, and business teams.
- Clear communicator, capable of documenting technical designs, contributing to platform standards, and presenting solutions to stakeholders.
- Leadership experience, including mentoring junior engineers and fostering a culture of continuous improvement and knowledge sharing - highly desirable.

This is majority remote-based, although you will need to attend the office in either London or Manchester when needed. This company looks after their employees and you can expect a large bonus plus some excellent benefits. We are interviewing currently, so apply now for immediate consideration for the Senior Data Platform Engineer position, or contact Stuart Barnes at ITSS Recruitment for further information.
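As a flavour of the Terraform-on-Azure work the listing above calls for, a minimal sketch of an infrastructure-as-code fragment is shown below. The resource names, region, and storage settings are hypothetical examples, not the client's actual configuration.

```hcl
# Minimal Terraform sketch: a resource group plus a Data Lake Gen2-capable
# storage account. All names and the region are illustrative assumptions.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "data_platform" {
  name     = "rg-dataplatform-dev"
  location = "UK South"
}

resource "azurerm_storage_account" "lake" {
  name                     = "stdataplatformdev"   # must be globally unique
  resource_group_name      = azurerm_resource_group.data_platform.name
  location                 = azurerm_resource_group.data_platform.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  is_hns_enabled           = true                  # hierarchical namespace for Data Lake Gen2
}
```

Modules like this are typically parameterised per environment and driven from the Azure DevOps or GitHub Actions pipelines the role mentions, which is what gives the "consistency and reusability across environments" the advert asks for.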
21/04/2026
Full time
Job Title: Lead SailPoint Engineer
Location: Reading, Havant
Duration: 6 months

Job Summary: We are seeking an Identity Engineer with hands-on experience of SailPoint ISC, or extensive experience in delivering Identity Governance technologies, Active Directory, Entra ID and automation such as Beanshell/Java/PowerShell. Experience in a DevOps environment and toolset is also an advantage. Experience in the implementation and configuration of Identity Governance and Administration technologies, specifically SailPoint, is preferred. The ideal candidate will contribute to the design, implementation, and maintenance of the Identity Governance and Administration solution, enhancing security posture across the environment. The immediate focus will be on performing immediate discovery and deployment tasks to build out the SailPoint IGA solution within the client group, assisting the Lead Engineer and Architect against the initial scope, with the opportunity to contribute to the enhancement of the solution and expanding the scope of IGA across the group.

Key Responsibilities:
- Design and Implementation: Develop, document and implement Identity Governance solutions using SailPoint and other automation to provide a comprehensive IGA solution. Identify best practice and experience of how to get a product deployed into an enterprise environment.
- Management and Maintenance: Advise and update the day-to-day operations of the IGA environment and help develop the operational model, ensuring optimal performance, security, and compliance.
- Troubleshooting: Diagnose and resolve identity and access-related issues, providing technical support and guidance to internal teams.
- Security and Compliance: Ensure that identity solutions meet security and compliance standards, implementing and enforcing security policies and procedures.
- Documentation: Create and maintain comprehensive documentation of configurations, processes, and best practices.
- Operational process creation and handover: Identify operational processes and work with operational handover teams to deploy changes into a production environment in a fully supported manner.

Identity experience:
- Proven experience with SailPoint Identity Security Cloud.
- Strong understanding of identity life cycle management and security principles.
- Hands-on experience with configuration, connectors, identity merging, developing workflows and integrating SailPoint with source and target connected platforms.
- Experience with identity governance and administration tools such as SailPoint.
- Proficiency in PowerShell scripting and automation using APIs and infrastructure as code (Terraform/GitHub).
- Excellent troubleshooting and analytical skills.

Guidant, Carbon60, Lorien & SRG - The Impellam Group Portfolio are acting as an Employment Business in relation to this vacancy.
21/04/2026
Contractor
DevOps SME (AWS)
6 Months Contract
£60 to £80 per hour (Inside IR35)
Remote Working
Active SC Clearance is needed

A top-tier pioneering firm is looking for an experienced DevOps SME to support a critical initiative for a UK Public Sector end-client. This is a pivotal role within a high-performing Agile environment, focusing on the delivery of a secure, scalable, and automated DevOps toolchain. As an SME, you will not only be responsible for technical execution but will also provide the architectural foresight needed to maintain service stability and drive process optimisation. You will bridge the gap between complex engineering and business impact, ensuring that all cloud-native solutions are robust, secure, and performant.

Key Responsibilities:
- Create and manage automated scripts (using tools like Terraform or Ansible) so that complex computer systems can be set up quickly, reliably, and identically every time.
- Watch for signs that the system is slowing down or running out of space, and fix these issues before they cause a crash or failure.
- Ensure that security checks are a fundamental part of the build process from the very beginning, rather than an afterthought, to keep data and systems safe.
- Build and look after the automated "conveyor belt" that tests and moves new software from the developer's desk to the live environment quickly and without errors.
- Act as a mentor to the wider team, guiding DevOps strategy and influencing the adoption of modern standards across the delivery environment.

Essential Skills:
- Hands-on experience with AWS
- Solid experience with Terraform and Ansible IaC
- CI/CD: advanced experience with Jenkins, GitLab CI, or GitHub Actions
- Familiarity with monitoring and logging stacks, including Prometheus, Grafana, and ELK
- Strong proficiency in Docker and Kubernetes for application packaging and orchestration
- Proven track record of operating as a technical lead or SME within UK Public Sector projects
- Candidates must hold active SC Clearance

6 Months Contract | Inside IR35 | Remote Working | £60 to £80 per hour

If you are an experienced DevOps engineer with active SC clearance searching for a new challenging role, then this could be the perfect opportunity for you. If the above seems of interest, then please apply directly to the ad or send your CV. Randstad Technologies is acting as an Employment Business in relation to this vacancy.
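To make the "automated scripts" responsibility in the listing above concrete, the following is a minimal Ansible playbook sketch: it sets up a web server identically every time it runs, because each task describes a desired state rather than a sequence of commands. The host group and the choice of nginx are illustrative assumptions.

```yaml
# Minimal Ansible playbook sketch: idempotent nginx setup on Debian/Ubuntu hosts.
- name: Configure web servers
  hosts: webservers          # hypothetical inventory group
  become: true               # escalate privileges for package/service management
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Because the tasks are declarative, re-running the playbook against an already-configured host changes nothing, which is what makes this style of automation "reliable and identical every time" in the sense the advert describes.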
21/04/2026
Contractor
DevOps Engineer

Imagine a role where your expertise sets technical direction, influences engineering culture, and shapes how modern cloud platforms are built and operated. This is an opportunity to join a forward-thinking organisation where your experience, judgement, and leadership will have a lasting impact.

As a DevOps Engineer, you will play a key role in defining and advancing the DevOps strategy across the organisation. Acting as a technical authority, you will partner closely with engineering, product, and architecture teams, both onshore and offshore, to implement best-in-class DevOps practices and evolve a modern, serverless AWS platform. You will have significant influence over tooling, architecture, and ways of working, helping to mature the DevOps function and embedding reliability, security, and automation at scale.

Key Responsibilities
- Provide technical leadership and subject matter expertise across DevOps and cloud engineering
- Influence architectural and technology decisions, particularly within AWS serverless ecosystems
- Design, build and evolve robust CI/CD pipelines to support rapid, reliable software delivery
- Partner with development teams to ensure solutions are scalable, resilient, and production-ready
- Champion reliability engineering practices, including monitoring, alerting, and incident response
- Drive high availability and operational excellence through proactive troubleshooting and optimisation
- Define and enforce Infrastructure as Code (IaC) and immutable infrastructure standards
- Lead and contribute to design reviews, architectural discussions and code reviews
- Establish strong security practices, ensuring systems and data are protected by design
- Mentor and support engineers, fostering collaboration, quality, and continuous improvement
- Stay ahead of emerging DevOps and cloud technologies, introducing improvements where valuable

What We're Looking For
- Significant experience in DevOps or Platform Engineering roles within cloud-native environments
- Strong software engineering background, preferably with hands-on development experience
- Ability to balance strategic thinking with hands-on delivery
- A pragmatic, collaborative approach with excellent communication and stakeholder-management skills
- A platform-engineering mindset, with a deep understanding of trade-offs and designing for failure
- Strong, hands-on AWS experience, including Lambda, DynamoDB, and AWS SAM
- Solid networking and security knowledge, including VPCs, security groups and VPNs

Technologies You'll Work With
- AWS Cloud Services & AWS Developer Tools
- JavaScript / TypeScript & Node.js
- SQL
- Git
- Docker & ECS
- Serverless Framework
- Developer security platforms

This is a senior-level opportunity to shape platforms, influence engineering standards, and play a central role in delivering high-quality, cloud-native solutions at scale. If you're ready to lead, influence, and build the future of DevOps, we'd love to hear from you.
21/04/2026
Full time
About Us:
Solirius Reply delivers technical consultancy and application delivery to our clients in order to solve real-world problems and allow our clients to respond to an ever-changing technical landscape. We partner closely with our clients, embedding our consultants into their businesses in order to provide a bespoke service, allowing us to truly understand our clients' needs. It is this close collaboration with our clients that has enabled us to grow rapidly in recent years and will drive our ambitious future growth plans. We currently have over 400 consultants working with a variety of key clients from both the public and private sectors, such as the Ministry of Justice, Department for Education, FCDOS, UEFA, the International Olympic Committee and Mercedes-Benz, with plans to increase our client base further in the near future. We operate as a flat organisation and believe in trusting and supporting our team to operate independently. We pride ourselves on being specialists at what we do, making the most of our consultants' expertise in their fields, to provide a best-in-class service to our clients. All our consultants have the opportunity to work on a range of different projects, providing a broad range of knowledge on which to develop their careers and progress in the direction they choose.

About You:
You are a motivated and adaptable professional with a strong analytical mindset and a passion for working with data to solve real-world problems. You enjoy working in collaborative, agile teams and take pride in delivering high-quality, data-driven solutions that make a tangible impact. With strong communication skills and a consultative approach, you're comfortable engaging with clients, understanding their needs, and translating them into robust data architectures and platforms. You understand and align with Solirius Reply Values.

The Role:
We are looking for experienced Data Architects to work on our projects with our public sector clients, helping to deliver Solirius' services to the highest standard. The role will involve working with multiple business and technical stakeholders, helping them to design data solutions and architectures which will then be implemented by the delivery teams. As a Data Architect, you will be expected to operate with a high degree of autonomy, using your experience and judgement to resolve complex data and architectural challenges in alignment with client needs. You will proactively manage escalations and take ownership of delivering effective, scalable, and secure data solutions.

Successful candidates will also be expected to take an active role in practice development: developing new data services to support Solirius' business development, creating and updating data architecture artefacts, liaising with other Solirius practices to maintain the practice profile, and contributing to the development of more junior practice members. In addition to technical leadership, you will play a key role in identifying and shaping new business opportunities, working closely with stakeholders across client organisations to understand strategic objectives and deliver value-driven, data-led outcomes. You must be a strong and confident communicator, capable of influencing both technical and non-technical audiences. You will lead and mentor technical teams, build consensus across diverse stakeholder groups, and foster collaborative, cross-functional working environments. Your role will require innovative thinking, applying best practices and emerging data technologies to solve complex business problems in creative and pragmatic ways.

Key Responsibilities:
- Design end-to-end data architectures that meet business, technical, and security requirements.
- Translate business and analytical requirements into scalable, secure, and cost-effective data platforms.
- Ensure alignment with enterprise data architecture, data strategy, and governance standards.
- Lead data architecture reviews and technical design workshops with stakeholders and delivery teams.
- Support Agile delivery teams, defining MVP data architecture and providing ongoing technical direction.
- Collaborate with stakeholders across business, product, and IT to gain buy-in and drive data-driven decisions.
- Define data models, data flows, metadata standards, and integration patterns.
- Contribute to data engineering, automation, and infrastructure-as-code practices for delivery at scale.
- Ensure data security, privacy, compliance, and risk management are embedded in the solution.
- Produce clear data architecture documentation using standards such as conceptual, logical and physical models, C4, ArchiMate, and cloud-native diagrams, including high-level designs and relevant data artefacts.

Key Experience:
- Extensive experience in stakeholder engagement and senior-level communication, including C-suite.
- Proven experience in client-facing and/or consultancy environments.
- Demonstrated ability to translate complex business and analytical requirements into scalable and resilient data architectures.
- Strong track record in designing and delivering end-to-end data platforms in enterprise settings.
- Expertise in conceptual, logical, and physical data modelling.
- Deep understanding of data platforms, data integration, and analytics architectures.
- Ability to design solutions that meet both functional and non-functional requirements, including performance, scalability, security, and compliance.
- Broad experience with data governance, data quality, master data management, and regulatory requirements.

Key Skills:
- Advanced knowledge of cloud data platforms: Azure, AWS, and Google Cloud Platform (GCP).
- Strong experience with data warehousing, data lakes, and lakehouse architectures.
- Hands-on experience with SQL and at least one programming language such as Python, Scala, or Java.
- ETL/ELT design and data pipeline orchestration.
- Streaming and event-driven data architectures.
- Agile and DevOps delivery methodologies applied to data platforms (DataOps).
- Expertise in data architecture and modelling (conceptual, logical, and physical).
- Familiarity with modern integration technologies and patterns (e.g. API-driven, event streaming, service mesh).

What We Offer:
- Competitive Salary
- 25 Days Annual Leave + Bank Holidays
- Flexibility to work from home
- 10 days allocated for development training per year
- Generous Discretionary Bonus
- Statutory & Contributory Pension
- Private Healthcare Cover
- Discounted Gym Membership
- Enhanced Parental Leave
- Paid Fertility Leave
- Cycle to Work and Electric Vehicle schemes
- Access to Employee Assistance Programme (EAP)
- Annual Away Days
- Monthly Company Socials

Equality & Diversity:
Solirius Reply is an equal opportunity employer. We are committed to creating a work environment that supports, celebrates, encourages and respects all individuals, and in which all processes are based on merit, competence and business needs. We do not discriminate on the basis of race, religion, gender, sexuality, age, disability, ethnicity, marital status or any other protected characteristics. Should you require further assistance or require any reasonable adjustments to be put in place to better support your application process, please do not hesitate to raise this with us.
21/04/2026
Full time
Senior/Lead Data Platform & Cloud Engineer
Hybrid/London: twice a week in the office

This role is ideal for a cloud or platform engineer with solid DevOps and infrastructure skills, who can work on data migration, platform setup, and modern AWS-based development. The main focus is on DevOps, infrastructure, and platform engineering to support secure, scalable, and automated data migration and ingestion.

Required Experience & Skills
- 5+ years in data/platform engineering, including leadership experience
- Strong AWS experience: S3, Lambda, IAM, VPC/networking, DMS
- Proven experience with Infrastructure-as-Code (Terraform) and CI/CD pipelines
- Strong Python and SQL skills
- Understanding of data platforms (e.g. Snowflake/CDP) and ingestion pipelines
- Exposure to AI/ML enablement frameworks (e.g. AWS SageMaker) and supporting infrastructure for model training and deployment
- Experience with dbt/ELT patterns (beneficial, not a core focus)
- Strong stakeholder communication and technical leadership skills
21/04/2026
Full time
Job Title: Senior AWS Data Engineer (LDW Data Warehouse Discovery) - Gold
Max Supplier Rate: £430
Clearance Required: active SC Clearance held with a governing body
Duration: 6 months
Location: Hybrid role; requires attendance for occasional workshops (typically a couple of days per month) at one of our sites, Telford or Hove

Job Description:
The role falls within the Data Contract Delivery Area. The group provides a wide range of data and analytics solutions in support of our client's business priorities: maximising revenues, bearing down on fraud, and cloud migration. This role involves migrating data from legacy on-premises systems (primarily Oracle and Informatica) to a new AWS cloud-native architecture. You will be part of an Agile software delivery team working closely with other engineers and supported by project managers, business analysts, and architects, with additional client and key stakeholder interaction as required. We are looking for strong Senior AWS Data Engineers who can design and deliver cloud transformation projects.

Your work will be to:
- Support the technical lead with design and client interactions, and support junior engineers with their development, as part of a cloud transformation team.
- Design, Develop and Test Data Pipelines: create robust pipelines to ingest, process, and transform data, ensuring it is ready for analytics and reporting.
- Implement ETL/ELT Processes: develop and test Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) workflows to seamlessly move data from source systems to Data Warehouses/Data Lakes/Lakehouses using open-source and AWS tools.
- Adopt DevOps Practices: utilise DevOps methodologies and tools for continuous integration and deployment (CI/CD).

Must-have skills:
- Proficiency with core AWS tools (AWS Glue, Lambda, S3, Redshift)
- Programming skills (Python)
- SQL and data storage technologies: some knowledge of data warehouse and database technologies (AWS Redshift, AWS RDS)
- AWS data lakes: some experience with AWS data lakes on AWS S3 to store and process both structured and unstructured data sets

Nice-to-have skills:
- Knowledge of open table formats (Iceberg/Delta)
- AWS tools: experience with Amazon CloudWatch, SNS, Athena, DynamoDB, EMR, Kinesis
- Data modelling
- Job scheduling/orchestration
- Data virtualisation tools (Denodo)
- ALM tooling (Jira, Confluence)
- CI/CD toolsets (GitLab, Terraform)
- Reporting tools (Business Objects, Power BI, Pentaho BA)
- Data analytics toolset (SAS Viya)
- Observability tools (Grafana, Dynatrace)

Experience:
You should have experience as a senior data engineer delivering within large-scale data analytics solutions and the ability to operate at all stages of the software engineering life cycle, as well as some experience in the following:
- Awareness of DevOps culture and modern engineering practices
- Experience of Agile Scrum-based delivery
- A proactive nature, personal drive, enthusiasm, and willingness to learn
- Excellent communication skills, including stakeholder management
- Developing solutions within the given architecture and adhering to specified NFRs
- Supporting other engineers within your team
- Continually looking for ways to improve

Security Clearance: candidates must hold active SC Clearance with a UK government body.
21/04/2026
Contractor