Software Engineer - Power BI
Title: Software Engineer - Power BI
Contract Type: Fixed Term Contract (12 Months)
Salary: Starting from £49,502 pa (Regional) and from £57,094 pa (Inside London), depending on experience and location.
Reporting Office: London, Stratford or Manchester, Trafford
Persona: Agile (20-40% in the office per week)
Closing Date: 26th April 2026
Interviews will take place on: 1st stage virtual interview on 5th May 2026, followed by a 2nd stage in-person interview on 13th May 2026.
Benefits include: Excellent pension plan (up to 6% double contribution), 28 days Annual Leave rising to 31 days with length of service + Bank Holidays, Westfield Health Cash Plan, non-contributory life assurance, up to 21 hours of paid volunteering leave, lifestyle benefits, Employee Assistance Programme and many more.
Early applications are encouraged as we reserve the right to close the advertisement and interview earlier than stated.
Join our Software Engineering Team at L&Q:
We are building the next generation of data-driven reporting and analytics solutions — and this role puts you at the centre of it. You will develop and manage Power BI reports, dashboards, and data models that turn complex information into clear insights for the organisation.
Working as part of a collaborative agile squad, you will design and maintain enterprise-level BI solutions using Power BI, SQL, DAX, KQL and Power Query (M).
You will help shape our reporting standards, data models, and best practices — ensuring our data is trusted, consistent, and ready for self-service analytics across the business.
You will also build Power BI CI/CD pipelines in Azure DevOps, improve security and governance, and implement Azure monitoring capabilities for Power BI.
You will report to the Lead Software Engineer and work closely with Data Engineers, Analysts, Architects, and Product Owners. It is a hands-on, creative role that blends technical skill with problem-solving and collaboration.
If this sounds like you, we would love for you to apply!
Your impact in the role will be to:
Design, develop, and maintain Power BI reports, dashboards, and datasets using best practices in DAX, Power Query (M), KQL and data modelling.
Build Power BI CI/CD pipelines in Azure DevOps for version control.
Build and optimise reusable data models to support enterprise-level reporting and self-service analytics.
Develop SQL queries, stored procedures, and views to support analytical and operational reporting needs.
Implement and manage Power BI Service administration, including workspace permissions, data refreshes, and row-level security.
Collaborate with engineers, architects, and analysts to design efficient data solutions that integrate with the wider enterprise data platform.
Write clean, scalable, and well-documented code while applying engineering best practices through peer reviews and agile ceremonies.
Troubleshoot issues, analyse root causes, and deliver effective technical solutions.
Communicate complex technical concepts clearly to both technical and non-technical stakeholders.
Stay up to date with emerging BI technologies, tools, and practices to continuously improve reporting capabilities.
Share knowledge and support capability growth across the teams.
By creating robust Power BI solutions and collaborating closely with technical and business teams, you will help drive a culture of data-led insight across the organisation.
What you'll bring:
Strong hands-on experience designing and developing Power BI reports, dashboards, and data models.
Advanced knowledge of DAX, Power Query (M), KQL and Power BI Service administration.
Advanced knowledge of Azure DevOps (CI/CD, version control, automated deployment) for Power BI.
Strong SQL expertise, including writing complex queries, stored procedures, and CTEs, with experience in query optimisation, performance tuning, and designing data models for relational databases.
Good understanding of relational databases, data warehousing concepts, and dimensional modelling (Kimball or similar).
Experience collaborating within agile scrum delivery teams throughout the SDLC and contributing to iterative development cycles. Familiarity with best practices for credential management in a highly secure environment using DevSecOps.
Effective communication and documentation skills, with the ability to present technical concepts to non-technical audiences.
Passion for continuous learning and staying current with Power BI, Azure, and wider data technology trends.
Desirable:
Exposure to enterprise-level reporting governance, security, and performance optimisation practices.
Familiarity with Azure Data Services such as Azure SQL, Data Factory, Log Analytics, Azure Monitor and Synapse Analytics.
Integration with Microsoft tools (SharePoint, Power Apps, Teams).
Exposure to RESTful APIs.
Understanding of data integration design, data repositories, and master data management (MDM) tools such as Semarchy.
Exposure to Unit4 and NEC Housing Management Systems (advantageous).
About L&Q:
We’re one of the UK’s leading housing associations and developers. We were founded on a simple belief: high quality housing is vital for people’s health, happiness and security. Everyone deserves a quality home that gives them the chance to live a better life.
250,000 people call our properties ‘home’, and we’re proud to serve diverse communities across London, the South East and North West of England.
At L&Q, people are at the heart of our business and our success depends on employing the best people and getting the best from them. The foundation of everything that we are is built on our corporate values and behavioural framework, which outlines our core expectations and should be demonstrated at all times, and at all levels, when representing L&Q.
L&Q strongly believes a diverse and inclusive workforce is important, and inclusion is part of our core values and everyday working practices. We make hiring decisions based on your experiences, skills and merits, and we are recognised externally for our commitment to inclusion. We are a Stonewall Diversity Champion, a Disability Confident (Committed) employer and have signed the Time to Change Employer Pledge to demonstrate our commitment to ending mental health discrimination in the workplace.
At L&Q, sustainability is at the heart of what we do. We recognise the responsibility we hold as one of the UK’s largest housing associations.
16/04/2026
Contractor
Overview:
We're looking for a Cloud Digital Product Manager to own and deliver cloud-based digital products end-to-end. You'll define product vision and roadmap, manage the backlog, and work closely with engineering and stakeholders to deliver secure, scalable cloud solutions.
Responsibilities:
Own product vision, strategy, and roadmap.
Prioritise and manage the product backlog.
Translate business and user needs into epics and user stories.
Work closely with cloud engineers, architects, and DevOps teams.
Lead discovery, delivery, and continuous improvement.
Manage stakeholders and track product outcomes.
Required Experience:
Proven experience as a Digital or Technical Product Manager.
Strong Agile delivery background.
Solid understanding of AWS, Azure, and/or GCP.
Experience with cloud-native, API-driven and microservices architectures.
Excellent communication and stakeholder management skills.
Reasonable Adjustments:
Respect and equality are core values to us. We are proud of the diverse and inclusive community we have built, and we welcome applications from people of all backgrounds and perspectives. Our success is driven by our people, united by the spirit of partnership to deliver the best resourcing solutions for our clients. If you need any help or adjustments during the recruitment process for any reason, please let us know when you apply or talk to the recruiters directly so we can support you.
22/04/2026
Contractor
Method Resourcing Solutions Ltd
Cardiff, South Glamorgan
Engineering Lead | Content Management Systems (CMS) | AI-Enabled Engineering | Hybrid (1 day per week in Cardiff) | £70,000-£75,000 + benefits
Method Resourcing have partnered exclusively with a global organisation looking to hire a Software Development Manager into a growing team delivering innovative, enterprise-scale technology. This role has been created to bring structure, technical leadership, and modern engineering practices into a team working across AI-enabled development and composable CMS architecture. The focus is on building high-quality, scalable solutions while shaping a collaborative and transparent engineering culture. This is a leadership-first role, but not a distant one. They're looking for someone who can lead engineers, support product managers, and stay close enough to the code to challenge decisions, unblock delivery, and guide teams effectively.
The role:
As Software Development Manager, you'll take ownership of an evolving engineering function. You'll work closely with existing contractors to support knowledge transfer, while defining standards, ways of working, and culture for the future permanent team. You'll also play a key role in AI-driven development - reviewing AI-generated code, assessing quality and risks, and understanding how AI can be applied responsibly within enterprise systems. The technical environment centres around a composable, API-driven CMS, fully decoupled from the front end and delivered via APIs with caching to keep applications platform-agnostic. You'll manage a small team of four engineers initially, with responsibility for recruiting and onboarding additional permanent hires as the team grows.
What my client is looking for:
Strong understanding of composable CMS and API-driven content delivery (Sanity, Sitecore, or similar) - essential.
Experience leading and managing software engineers in a commercial environment.
Strong knowledge of modern engineering practices including cloud platforms (AWS, Azure or GCP), CI/CD, DevOps, observability, security and reliability.
Full-stack technical fluency with experience in modern web development (JavaScript, React, Angular, Node.js or similar).
Hands-on engineering background with the ability to support technical challenges when required.
Experience working with or around AI-enabled development, including code quality, standards, and risk awareness.
Confident communicator able to translate technical detail for product, stakeholders, and senior leadership.
Comfortable building teams and shaping engineering culture from the ground up.
Candidates don't need to tick every box - there's openness to upskilling where the fundamentals and mindset are right.
Benefits include:
10% employer pension contribution
25 days annual leave plus public holidays
Life assurance at 4× annual salary
Flexi-time
Paid volunteering leave
Strong continuous professional development support - 100% funding up to $2,800 per year and 75% funding up to $5,000 per year
Flexible benefits allowance of 1.5% of salary
Health and wellbeing schemes
Cashback and retail discounts
Employee Assistance Programme
Free on-site car parking
Working pattern: Hybrid role with around 1 day per week in the central Cardiff office.
If this sounds of interest, please apply or contact (see below) for more information.
RSG Plc is acting as an Employment Agency in relation to this vacancy.
22/04/2026
Full time
We're sourcing a Cloud Security Engineer (Azure / Terraform) for a 6-month UK-based contract (fully remote). This role is ideal for a hands-on Azure / Terraform contractor who can quickly take ownership and deliver secure, production-grade infrastructure through code. You'll be embedded in a modern Azure environment, leading security engineering initiatives with a strong focus on Terraform-first delivery. Expect to spend the majority of your time writing clean, reusable modules, embedding security controls, and pushing everything through CI/CD.
Key deliverables:
Build and enforce secure Azure infrastructure using Terraform (modular, scalable, production-ready).
Implement perimeter security with Azure Front Door + WAF (OWASP, bot protection, rule tuning).
Define governance using Azure Policy as code (networking, firewall, compliance controls).
Secure AKS workloads with container scanning and runtime protections.
Design and roll out Conditional Access and identity protections in Microsoft Entra ID (P2).
Harden Azure DevOps pipelines using managed identities and least-privilege principles.
Drive risk visibility and remediation via Microsoft Defender for Cloud.
You'll suit this contract if you're a strong coder with Terraform, security-focused by default, and comfortable delivering autonomously without hand-holding.
22/04/2026
Contractor
Junior Data Engineer - Public Sector
Contract: Initial 7 months (extension possible)
Rate: £310 per day, Inside IR35
Location: Remote with travel to Waterloo (2-3 days per month)
Security Clearance: SC-eligible (5 years UK residency required)
I am working with a key consultancy delivering a major UK public sector programme, and they are looking for a Junior Data Engineer / Scientist to join a mixed delivery team building and operating secure, reliable data platforms that support critical public services. This role is designed for someone early in their data career who wants to develop strong engineering fundamentals in a real production environment.
The role - a junior, generalist data engineering position:
This is an engineering-led role, not a specialist or senior position. The team is ideally looking for a generalist in their first few years within data engineering or data science, who is building breadth across data platforms, pipelines and operations. You'll focus on:
Designing, building and maintaining data pipelines
Supporting the operation of data lakes and data warehouses
Implementing and improving ETL / ELT processes
Using Python and SQL to transform, validate and move data
Working with analysts and developers to turn data requirements into technical solutions
Monitoring data quality, documenting data models and lineage, and resolving issues
Automating data workflows and operational tasks
Participating in Agile delivery, sprint work and collaboration
Supporting incidents and helping improve platform reliability over time
Working within public sector data governance, security and privacy standards
This role offers exposure to how data platforms are built, operated and supported in a regulated environment - forming the foundations of a long-term data engineering career.
What this role is not:
To avoid misalignment, it's important to be clear about what this role is not focused on:
Not a Data Analyst role
Not a Power BI / dashboard developer role
Not an insight, reporting or MI position
Not a modelling, ML or research-focused role
Not an LLM, AI or advanced data science role
While you may work alongside analysts and data scientists, this role does not centre on:
Building dashboards
Producing insights or reports
Statistical modelling
Predictive or machine learning solutions
The emphasis is on data engineering foundations and platform delivery.
Ideal candidate profile:
This role is best suited to someone who:
Is in their first few years of a data engineering or data science career
Wants to build core engineering skills rather than specialise immediately
Has hands-on experience with SQL and Python
Understands basic data modelling and ETL concepts
Is comfortable learning through delivery in a production environment
Is interested in how data platforms work end-to-end, including operations and support
Is keen to grow within public sector data platforms
Who this role is unlikely to suit:
This role is unlikely to be appropriate for candidates who:
Are very senior data engineers or architects
Have primarily worked in advanced ML, AI, or research-focused roles
Are specialised Power BI, reporting or MI developers
Are looking for a role centred on analysis, insights or modelling
Are seeking leadership, ownership of platform strategy, or advanced optimisation work
Applications that demonstrate significant seniority or deep specialisation rather than junior-to-mid generalist experience may not be progressed.
Required skills and experience:
Your CV should clearly demonstrate:
A degree in a technical discipline (Computer Science, Data Science, Mathematics or similar)
Hands-on experience with SQL
Experience using Python, Java or Bash
Understanding of ETL processes and data modelling fundamentals
Experience with version control (e.g. Git)
Comfort working in Agile / DevOps environments
Awareness of data security and privacy
Eligibility for UK SC clearance
Nice to have (but not essential):
Exposure to AWS, Azure or GCP
Familiarity with tools such as Airflow, dbt, Spark
Awareness of CI/CD pipelines or containerisation
Experience in public sector or regulated environments
Important note for applicants:
This role is deliberately positioned as a junior, generalist data engineering role. Please ensure your CV clearly demonstrates hands-on data engineering fundamentals, rather than senior leadership, advanced AI/ML work, or analytics-only experience.
Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C's, Privacy Policy and Disclaimers which can be found at hays.co.uk.
22/04/2026
Contractor
Cloud Platform Engineering Lead - Azure/Platform Ownership/IaC
A leading London Market Insurance organisation is seeking an experienced Cloud Platform Engineering Lead to take full ownership of its cloud platform strategy and engineering capability. This is a newly created, high-impact position within a modern, forward-thinking technology environment, offering the autonomy to shape cloud best practice and drive platform excellence across the organisation. You'll work closely with the Enterprise Architect, senior technology stakeholders, and cross-functional engineering teams to design, build, and evolve a secure, scalable, and automated Azure cloud estate. This is a hands-on technical leadership role - not people management - ideal for someone who thrives on ownership, architectural thinking, and deep technical problem-solving.
You'll be a great fit for this role if:
You have extensive experience designing and operating cloud environments, ideally within the insurance sector
You bring years of hands-on cloud engineering expertise, primarily across Azure
You've built and owned cloud platforms using Infrastructure as Code, including lifecycle management of Terraform or similar tooling
You have a strong background in platform engineering, cloud architecture, automation, and modern DevOps practices
You have a deep technical understanding across these areas
This is a rare opportunity to take end-to-end ownership of a critical cloud domain, influence engineering standards, and help shape the future of cloud capability within a growing and ambitious organisation. This is a permanent opportunity paying £90k-£110k + excellent bonus, requiring 3 days a week onsite in central London.
22/04/2026
Full time
NetDevOps Engineer - SC Cleared - £650 to £700 PD - Inside IR35
An enterprise-leading government corporation is hiring a NetDevOps Engineer with the ability to design, build and evolve a large-scale enterprise network platform, with a strong focus on modern, automated, cloud-integrated infrastructure. Our client is seeking someone to be based in Blackpool on a hybrid basis. As our client continues to modernise, they require a consultant with strong capabilities in network automation using Python, Ansible, Terraform and CI/CD, with a background in Network & Infrastructure Engineering.
Key Responsibilities:
Design, implement, and enhance enterprise network infrastructure across data centre, hybrid, and cloud environments
Engineer and implement network traffic flows to support business-critical services
Build and maintain secure hybrid connectivity across Azure, AWS, and OCI
Implement and manage Palo Alto Firewall policies across on-prem and cloud environments, aligned to Zero Trust principles
Design and operate high-availability network services, including routing, segmentation, and resilience
Develop and maintain network automation using tools such as Python, Ansible, and Infrastructure as Code
Collaborate with architecture and platform teams to ensure solutions align with engineering standards and strategic direction
Contribute immediately to delivery work, demonstrating the ability to operate with minimal ramp-up
Document designs and changes clearly and consistently, supporting maintainability and knowledge sharing
Nice to Have:
Aruba Central/ClearPass
SD-WAN technologies
SaaS and cloud-delivered WLAN/WiFi solutions
Prior experience modernising legacy network environments
One-stage interview via MS Teams, to start ASAP.
22/04/2026
Contractor
SC Cleared Observability Consultant: Dynatrace, Splunk, Cloud, ITSM, Clearance - (RL8136)
Our Global Enterprise client is looking for an SC Cleared Enterprise Observability Consultant with an in-depth understanding of Observability platforms and technologies, ranging from vendor-specific products (e.g. Dynatrace, Splunk, Grafana, Cribl) to open-source Observability projects (e.g. OpenTelemetry, Prometheus, Grafana OSS). You will be responsible for providing Observability platform delivery expertise, delivering advisory, design and implementation services that meet our customers' business requirements within their overall observability strategy. The role will also involve staying at the forefront of new technologies and new vendors, working within the Enterprise Observability Practice.
Start Date: 5th May 2026
Duration: 115 days (initially)
Pay Rate: £347 p/d (PLEASE NOTE: Employer NI is paid for by the client)
Total Daily Earnings: £425 p/d (includes rolled-up holiday)
IR35 Status: Inside
Location: Hybrid (some travelling involved)
Clearance: SC Clearance is highly desirable
Responsibilities:
Observability Strategy & Advisory - Lead discovery workshops to assess observability maturity and define tailored roadmaps aligned to business and IT objectives. Assess current monitoring and observability maturity for enterprise organisations and recommend tooling strategies, often leveraging platforms like Dynatrace for full-stack visibility. Translate business and technical requirements into actionable observability use cases to support change management and enablement initiatives. Advise on tools, platforms, and best practices (e.g. OpenTelemetry, SIEM vs Observability, telemetry management, SRE principles).
Architecture & Solution Design - Design end-to-end observability architectures, including logs, metrics, traces and profiles; distributed tracing frameworks/APM tooling; infrastructure and cloud monitoring; synthetic and real user monitoring. Create telemetry data pipelines and instrumentation strategies. Ensure scalable, secure, and cost-efficient observability patterns.
Tooling Implementation - Deploy and configure observability platforms such as Dynatrace, Splunk, Grafana Cloud, Cribl and Elastic. Implement OpenTelemetry collectors, agents, and SDK instrumentation strategies. Build dashboards, alerts, and automation workflows. Integrate Observability platforms with ITSM, AIOps and Event Management platforms.
Troubleshooting & Performance Engineering - Analyse application, infrastructure, and network performance issues. Lead root cause analysis and performance optimisation initiatives. Enable proactive detection through anomaly detection and alert tuning.
Technical Skills:
10+ years in consulting, enterprise design, and implementation roles
Expertise in observability frameworks, telemetry pipelines, and service mesh integrations
Deep understanding of observability pillars: metrics, logs, traces, and user experience
Expert-level familiarity with products such as Dynatrace, Splunk, Grafana Cloud and Cribl (experience with at least two product sets)
Strong understanding of Observability platform architecture, including telemetry storage, OpenTelemetry support, and cloud integrations
Experience with Dynatrace/Splunk/Grafana APIs, tagging strategies, and problem detection workflows
Proficiency in scripting (Python, Bash) and automation tools (Terraform, Ansible)
Strong stakeholder engagement and communication skills
Desirable:
Professional-level certifications in Observability products, OpenTelemetry Associate Certification or Prometheus Associate Certification
Familiarity with DevOps and Platform Engineering ways of working and associated tools (CI/CD, git, automation etc.)
Working-level understanding of cloud and cloud-native Observability technologies (AWS CloudWatch, Azure Monitor, eBPF, Prometheus etc.)
Good understanding of networking principles related to Observability protocols (Syslog, SNMP, OTLP etc.)
Experience integrating Observability platforms with ITSM and alerting platforms
Cloud/CNCF certifications
To apply for this SC Cleared Observability Consultant contract job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience. Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
22/04/2026
Contractor
We are working with a global healthcare and insurance organisation who are making a real difference to people's lives. We require an experienced Senior Data Platform Engineer to join the AI and Data Platform teams.
£100,000 + Bonus + Excellent Benefits
Fully remote with occasional travel to one of their offices.
You will contribute to the design and operation of a scalable, secure enterprise data platform supporting advanced analytics and business intelligence in a healthcare and insurance setting. You'll work with high autonomy, mentor junior engineers, and drive technical excellence while ensuring compliance and performance. This is a key role in shaping a robust, automated data platform that powers better patient care and smarter insurance services.
Please note: this is a Platform Engineering role rather than a Data Engineering position. We welcome applications from data engineers who also bring strong platform engineering experience - for example, working with IaC, Terraform, or similar tooling.
Role:
Contribute to the design and delivery of robust, automated, and scalable Azure and Snowflake data platform components.
Develop and maintain infrastructure-as-code using Terraform, ensuring consistency and reusability across environments.
Build and optimise CI/CD pipelines using Azure DevOps and GitHub Actions to support rapid, reliable deployments.
Implement observability practices including logging, metrics, and alerting using observability tools.
Collaborate with the Lead Engineer and Architects to align implementation with platform standards and patterns.
Provide technical guidance and mentorship to mid-level engineers, promoting best practices in automation and monitoring.
Key Skills / Qualifications needed for this role:
Extensive experience in platform engineering, with a strong emphasis on Azure-based data solutions.
Expert-level knowledge of Azure and/or Snowflake services, including Data Factory, Data Lake, Azure ML, and Power BI/Fabric.
Proven experience with infrastructure-as-code using Terraform and building CI/CD pipelines via Azure DevOps and GitHub Actions.
Strong grasp of observability practices, including logging, metrics, alerting, and performance optimisation.
Deep understanding of cloud security, with experience applying secure-by-design principles in Azure and/or Snowflake (e.g. network isolation, IAM, data protection).
Proficiency in scripting and automation using PowerShell, Bash, or Python.
Collaborative mindset, with a proven track record of working effectively across engineering, data science, and business teams.
Clear communicator, capable of documenting technical designs, contributing to platform standards, and presenting solutions to stakeholders.
Leadership experience, including mentoring junior engineers and fostering a culture of continuous improvement and knowledge sharing - highly desirable.
This role is majority remote-based, although you will need to attend the office in either London or Manchester when needed. This company looks after their employees and you can expect a large bonus plus some excellent benefits. We are interviewing currently, so apply now for immediate consideration for the Senior Data Platform Engineer position or contact Stuart Barnes at ITSS Recruitment for further information.
21/04/2026
Full time
About Us: Solirius Reply delivers technical consultancy and application delivery to our clients in order to solve real world problems and allow our clients to respond to an ever-changing technical landscape. We partner closely with our clients, embedding our consultants into their businesses in order to provide a bespoke service, allowing us to truly understand our clients' needs. It is this close collaboration with our clients that has enabled us to grow rapidly in recent years and will drive our ambitious future growth plans. We currently have over 400 consultants working with a variety of key clients from both the public and private sectors such as the Ministry of Justice, Department for Education, FCDOS, UEFA, International Olympic Committee and Mercedes Benz; with plans to increase our client base further in the near future. We operate as a flat organisation and believe in trusting and supporting our team to operate independently. We pride ourselves on being specialists at what we do, making the most of our consultants' expertise in their fields, to provide a best-in-class service to our clients. All our consultants have the opportunity to work on a range of different projects, providing a broad range of knowledge on which to develop their careers and progress in the direction they choose. About You: You are a motivated and adaptable professional with a strong analytical mindset and a passion for working with data to solve real-world problems. You enjoy working in collaborative, agile teams and take pride in delivering high-quality, data-driven solutions that make a tangible impact. With strong communication skills and a consultative approach, you're comfortable engaging with clients, understanding their needs, and translating them into robust data architectures and platforms. You understand and align with Solirius Reply Values. 
The Role: We are looking for experienced Data Architects to work on our projects with our public sector clients, helping to deliver Solirius' services to the highest standard. The role will involve working with multiple business and technical stakeholders, helping them to design data solutions and architectures which will then be implemented by the delivery teams. As a Data Architect, you will be expected to operate with a high degree of autonomy, using your experience and judgement to resolve complex data and architectural challenges in alignment with client needs. You will proactively manage escalations and take ownership of delivering effective, scalable, and secure data solutions.
Successful candidates will also be expected to take an active role in practice development: developing new data services to help with Solirius' business development, creating and updating data architecture artefacts, liaising with other Solirius practices to maintain the practice profile, and contributing to the development of more junior practice members. In addition to technical leadership, you will play a key role in identifying and shaping new business opportunities, working closely with stakeholders across client organisations to understand strategic objectives and deliver value-driven, data-led outcomes.
You must be a strong and confident communicator, capable of influencing both technical and non-technical audiences. You will lead and mentor technical teams, build consensus across diverse stakeholder groups, and foster collaborative, cross-functional working environments. Your role will require innovative thinking, applying best practices and emerging data technologies to solve complex business problems in creative and pragmatic ways.
Key Responsibilities:
Design end-to-end data architectures that meet business, technical, and security requirements.
Translate business and analytical requirements into scalable, secure, and cost-effective data platforms.
Ensure alignment with enterprise data architecture, data strategy, and governance standards.
Lead data architecture reviews and technical design workshops with stakeholders and delivery teams.
Support Agile delivery teams, defining MVP data architecture and providing ongoing technical direction.
Collaborate with stakeholders across business, product, and IT to gain buy-in and drive data-driven decisions.
Define data models, data flows, metadata standards, and integration patterns.
Contribute to data engineering, automation, and infrastructure-as-code practices for delivery at scale.
Ensure data security, privacy, compliance, and risk management are embedded in the solution.
Produce clear data architecture documentation using standards such as conceptual, logical and physical models, C4, ArchiMate, and cloud-native diagrams, including high-level designs and relevant data artefacts.
Key Experience:
Extensive experience in stakeholder engagement and senior-level communication, including C-suite.
Proven experience in client-facing and/or consultancy environments.
Demonstrated ability to translate complex business and analytical requirements into scalable and resilient data architectures.
Strong track record in designing and delivering end-to-end data platforms in enterprise settings.
Expertise in conceptual, logical, and physical data modelling.
Deep understanding of data platforms, data integration, and analytics architectures.
Ability to design solutions that meet both functional and non-functional requirements, including performance, scalability, security, and compliance.
Broad experience with data governance, data quality, master data management, and regulatory requirements.
Key Skills:
Advanced knowledge of cloud data platforms: Azure, AWS, and Google Cloud Platform (GCP).
Strong experience with data warehousing, data lakes, and lakehouse architectures.
Hands-on experience with SQL and at least one programming language such as Python, Scala, or Java.
ETL/ELT design and data pipeline orchestration.
Streaming and event-driven data architectures.
Agile and DevOps delivery methodologies applied to data platforms (DataOps).
Expertise in data architecture and modelling (conceptual, logical, and physical).
Familiarity with modern integration technologies and patterns (e.g. API-driven, event streaming, service mesh).
What We Offer:
Competitive Salary
25 Days Annual Leave + Bank Holidays
Flexibility to work from home
10 days allocated for development training per year
Generous Discretionary Bonus
Statutory & Contributory Pension
Private Healthcare Cover
Discounted Gym Membership
Enhanced Parental Leave
Paid Fertility Leave
Cycle to Work and Electric Vehicle schemes
Access to Employee Assistance Programme (EAP)
Annual Away Days
Monthly Company Socials
Equality & Diversity: Solirius Reply is an equal opportunity employer. We are committed to creating a work environment that supports, celebrates, encourages and respects all individuals, and in which all processes are based on merit, competence and business needs. We do not discriminate on the basis of race, religion, gender, sexuality, age, disability, ethnicity, marital status or any other protected characteristic. Should you require further assistance or require any reasonable adjustments to be put in place to better support your application process, please do not hesitate to raise this with us.
21/04/2026
Full time
About Us:
Solirius Reply delivers technical consultancy and application delivery to our clients in order to solve real-world problems and allow our clients to respond to an ever-changing technical landscape. We partner closely with our clients, embedding our consultants into their businesses to provide a bespoke service, allowing us to truly understand our clients' needs. It is this close collaboration with our clients that has enabled us to grow rapidly in recent years and will drive our ambitious future growth plans. We currently have over 400 consultants working with a variety of key clients from both the public and private sectors, such as the Ministry of Justice, Department for Education, FCDOS, UEFA, the International Olympic Committee and Mercedes-Benz, with plans to increase our client base further in the near future. We operate as a flat organisation and believe in trusting and supporting our team to operate independently. We pride ourselves on being specialists at what we do, making the most of our consultants' expertise in their fields to provide a best-in-class service to our clients. All our consultants have the opportunity to work on a range of different projects, providing a broad range of knowledge on which to develop their careers and progress in the direction they choose.
About You:
You are a motivated and adaptable professional with a strong analytical mindset and a passion for working with data to solve real-world problems. You enjoy working in collaborative, agile teams and take pride in delivering high-quality, data-driven solutions that make a tangible impact. With strong communication skills and a consultative approach, you're comfortable engaging with clients, understanding their needs, and translating them into robust data architectures and platforms. You understand and align with Solirius Reply values.
The Role:
We are looking for experienced Data Architects to work on our projects with our public sector clients, helping to deliver Solirius' services to the highest standard. The role will involve working with multiple business and technical stakeholders, helping them to design data solutions and architectures which will then be implemented by the delivery teams. As a Data Architect, you will be expected to operate with a high degree of autonomy, using your experience and judgement to resolve complex data and architectural challenges in alignment with client needs. You will proactively manage escalations and take ownership of delivering effective, scalable, and secure data solutions.
Successful candidates will also be expected to take an active role in practice development: developing new data services to support Solirius' business development, creating and updating data architecture artefacts, liaising with other Solirius practices to maintain the practice profile, and contributing to the development of more junior practice members. In addition to technical leadership, you will play a key role in identifying and shaping new business opportunities, working closely with stakeholders across client organisations to understand strategic objectives and deliver value-driven, data-led outcomes. You must be a strong and confident communicator, capable of influencing both technical and non-technical audiences. You will lead and mentor technical teams, build consensus across diverse stakeholder groups, and foster collaborative, cross-functional working environments. Your role will require innovative thinking, applying best practices and emerging data technologies to solve complex business problems in creative and pragmatic ways.
Key Responsibilities:
Design end-to-end data architectures that meet business, technical, and security requirements.
Translate business and analytical requirements into scalable, secure, and cost-effective data platforms.
Ensure alignment with enterprise data architecture, data strategy, and governance standards.
Lead data architecture reviews and technical design workshops with stakeholders and delivery teams.
Support Agile delivery teams, defining MVP data architecture and providing ongoing technical direction.
Collaborate with stakeholders across business, product, and IT to gain buy-in and drive data-driven decisions.
Define data models, data flows, metadata standards, and integration patterns.
Contribute to data engineering, automation, and infrastructure-as-code practices for delivery at scale.
Ensure data security, privacy, compliance, and risk management are embedded in the solution.
Produce clear data architecture documentation using standards such as conceptual, logical and physical models, C4, ArchiMate, and cloud-native diagrams, including high-level designs and relevant data artefacts.
Key Experience:
Extensive experience in stakeholder engagement and senior-level communication, including C-suite.
Proven experience in client-facing and/or consultancy environments.
Demonstrated ability to translate complex business and analytical requirements into scalable and resilient data architectures.
Strong track record in designing and delivering end-to-end data platforms in enterprise settings.
Expertise in conceptual, logical, and physical data modelling.
Deep understanding of data platforms, data integration, and analytics architectures.
Ability to design solutions that meet both functional and non-functional requirements, including performance, scalability, security, and compliance.
Broad experience with data governance, data quality, master data management, and regulatory requirements.
Key Skills:
Advanced knowledge of cloud data platforms: Azure, AWS, and Google Cloud Platform (GCP).
Strong experience with data warehousing, data lakes, and lakehouse architectures.
Hands-on experience with SQL and at least one programming language such as Python, Scala, or Java.
ETL/ELT design and data pipeline orchestration.
Streaming and event-driven data architectures.
Agile and DevOps delivery methodologies applied to data platforms (DataOps).
Expertise in data architecture and modelling (conceptual, logical, and physical).
Familiarity with modern integration technologies and patterns (e.g., API-driven, event streaming, service mesh).
What We Offer:
Competitive Salary
25 Days Annual Leave + Bank Holidays
Flexibility to work from home
10 days allocated for development training per year
Generous Discretionary Bonus
Statutory & Contributory Pension
Private Healthcare Cover
Discounted Gym Membership
Enhanced Parental Leave
Paid Fertility Leave
Cycle to Work and Electric Vehicle schemes
Access to Employee Assistance Programme (EAP)
Annual Away Days
Monthly Company Socials
Equality & Diversity:
Solirius Reply is an equal opportunity employer. We are committed to creating a work environment that supports, celebrates, encourages and respects all individuals, and in which all processes are based on merit, competence and business needs. We do not discriminate on the basis of race, religion, gender, sexuality, age, disability, ethnicity, marital status or any other protected characteristic. Should you require further assistance or any reasonable adjustments to be put in place to better support your application process, please do not hesitate to raise this with us.
Senior Cloud Automation Engineer
Manchester (Hybrid - 1 day on site every 2 weeks)
£55,486 - £64,631 (exceptional max £75,875)
A Senior Automation Engineer is required for our client to help modernise and scale their technology platforms through high-quality, secure automation. In this role, you'll lead the design and delivery of Infrastructure as Code (IaC) and CI/CD solutions across a hybrid, Azure-first environment, working closely with senior stakeholders, architects, and engineering teams. You'll combine hands-on technical expertise with leadership, setting automation standards, shaping strategy, and mentoring others.
Responsibilities:
Design and deliver enterprise-scale automation using Terraform, Bicep, PowerShell and Python
Build and improve CI/CD pipelines with GitHub Actions and Azure DevOps
Act as the Azure automation SME, advising on security, resilience, and cost efficiency
Influence standards and governance via the Technical Design Authority
Support and develop colleagues in an inclusive, collaborative team
Experience required:
Strong experience delivering automation in Azure and hybrid environments
Deep IaC and scripting expertise
Knowledge of on-premise infrastructure technologies, including server (operating system and hardware), VMware, Exchange, network switches, firewalls, storage technologies, security and SAN
Experience working within regulated or complex enterprise estates
Excellent communication and stakeholder engagement skills
Leadership or mentoring experience
Experience with M365, Power Platform or AI-driven automation is beneficial but not essential.
21/04/2026
Full time
Job Title: Dynamics 365 Developer & Support Engineer (Lead)
Location: Hybrid - London (2 to 3 days on site)
Salary: £87,000 per annum + benefits
Contract Type: Permanent
Overview
We're recruiting a Dynamics 365 Developer & Support Engineer (Lead) to take ownership of the design, development, and support of Microsoft Dynamics 365 and Power Platform solutions. This is a senior, hands-on role blending development leadership with day-to-day technical support. You'll shape solution design, drive integrations, and ensure robust, secure, and high-performing systems across a complex Microsoft ecosystem.
Key Responsibilities
Development
Design, develop, and enhance Dynamics 365 CE applications aligned with best practice.
Build and configure workflows, plugins, automations, and integrations across Power Platform (Power Apps, Power Automate, Dataverse, Power BI).
Integrate Dynamics 365 with Azure Logic Apps, API Management, and other enterprise services.
Implement and manage CI/CD pipelines and Git version control.
Collaborate closely with Product Managers and business users in an Agile Scrum environment.
Ensure quality assurance and compliance with the OWASP Top 10 and security standards.
Support
Provide 2nd and 3rd line support across Dynamics CRM applications.
Manage incidents, service requests, and changes following ITIL processes.
Monitor CRM performance and proactively resolve operational issues.
Skills & Experience Required
Essential
Proven technical expertise in Dynamics 365 CE configuration and customisation.
Hands-on with Power Platform, including Power Apps, Power Automate, Power BI, and Dataverse.
Strong integration skills with Azure Logic Apps, REST/SOAP APIs, and KingswaySoft.
Experience with CI/CD, DevOps, and Git version control.
Proficient in SQL, SSIS, and Azure Data Factory (ADF).
Working knowledge of Agile/Scrum and OWASP principles.
Excellent stakeholder management, communication, and problem-solving skills.
Desirable
Exposure to Copilot and AI-driven tools.
ITIL certification or experience working in ITIL environments.
Performance tuning and data migration expertise.
*Rates depend on experience and client requirements
21/04/2026
Full time
Applications Service Lead | Insurance | London (Hybrid) | £75,000
We're hiring an Applications Service Lead to take ownership of core underwriting and claims systems within a London Market environment. This role sits between the business, vendors and engineering teams, focused on keeping applications stable, resolving incidents effectively and improving service performance.
Key focus areas:
BAU support across insurance platforms (PAS, claims, document systems)
Major incident coordination and issue resolution
Working with vendors and internal teams to investigate and fix problems
Managing releases, defects and change via Azure DevOps
Ensuring applications are performing as expected in a live environment
We're looking for someone who:
Has experience supporting or owning insurance applications
Is comfortable working with vendors and offshore teams
Can understand issues at application level (data, integrations, system behaviour)
Has worked with tools like ServiceNow and Azure DevOps
London Market experience is preferred. Exposure to platforms such as Eclipse, Sequel or Guidewire is beneficial but not essential. If you want to own key systems and work close to both technology and the business, get in touch.
21/04/2026
Full time
Lead Data Platform Engineer - Databricks - IaC - Terraform - Azure Data Factory - Data Lakehouse
The Data Platform Engineer designs, develops, automates, and maintains secure, scalable, and compliant data platforms that enable the firm to efficiently manage, analyse, and utilise data. The role ensures that data solutions are robust and reliable while meeting regulatory obligations and safeguarding client confidentiality.
Key Responsibilities
Design and architect scalable, secure, and compliant data platforms and solutions, producing technical documentation and securing approvals through governance bodies such as Architecture Review Boards.
Build and deliver robust data solutions using Databricks, PySpark, Spark SQL, Azure Data Factory, and Azure services.
Develop APIs and write efficient Python, PySpark, and SQL code to support data integration, processing, and automation.
Implement and manage CI/CD pipelines and automated deployments using Azure DevOps to enable reliable releases across environments.
Develop and maintain infrastructure-as-code (e.g., Terraform, ARM) to provision and manage cloud resources, including ADF pipelines, Databricks assets, and Unity Catalog components.
Monitor, troubleshoot, and optimise data platform performance, reliability, and costs, identifying bottlenecks and recommending improvements.
Create dashboards and observability tools to report on platform performance, usage, incidents, and operational KPIs.
Knowledge, Skills & Experience
Degree in Computer Science, Data Engineering, or a related field.
Proven experience designing and building cloud-based data platforms, ideally within Azure.
Strong hands-on expertise with Databricks, PySpark, Spark SQL, and Azure Data Factory.
Solid understanding of Data Lakehouse architecture and modern data platform design.
Proficiency in Python for data engineering, automation, and data processing.
Experience developing and integrating REST APIs for data services.
Strong DevOps experience, including CI/CD, automated testing, and release management for data platforms.
Experience with Infrastructure as Code tools such as Terraform or ARM templates.
Knowledge of data modelling, ETL/ELT pipelines, and data warehousing concepts.
Familiarity with monitoring, logging, and alerting tools (e.g., Azure Monitor).
Desirable
Experience with additional Azure services (e.g., Fabric, Azure Functions, Logic Apps).
Knowledge of cloud cost optimisation for data platforms.
Understanding of data governance and regulatory compliance (e.g., GDPR).
Experience working in regulated or professional services environments.
21/04/2026
Full time
DevSecOps/Container Security Engineer (Containerisation/Kubernetes)
Sheffield (3 days per week onsite)
Contract
We're working with a leading financial services client to hire a Container Security Engineer with a strong focus on containerisation and Kubernetes security. This role will play a key part in advancing secure container adoption across a large-scale enterprise environment.
Key Responsibilities
Support and enhance the container security programme, defining standards and best practices
Provide hands-on expertise and guidance on Kubernetes and container security
Integrate security tools into container life cycles to identify risks early in development
Conduct security assessments across container platforms, pipelines, and workloads
Implement observability and monitoring to detect vulnerabilities and security risks
Collaborate with engineering, security, and risk teams to strengthen DevSecOps practices
Support incident response and SOC activities related to container environments
Ensure compliance with industry security standards (e.g. NIST, CIS, PCI-DSS)
Key Requirements
Strong experience with Kubernetes and container platforms (essential)
Hands-on experience across cloud environments (AWS, GCP, Azure)
Proven background in DevOps/DevSecOps, including CI/CD pipeline integration
Experience with automation tools (Terraform, CloudFormation, Helm)
Knowledge of container security tooling (e.g. scanners, CNAPP)
Programming experience (e.g. Python or Java)
Solid understanding of security principles within containerised environments
Strong communication and stakeholder engagement skills
This is an excellent opportunity to work at scale, driving secure container and Kubernetes practices within a complex, enterprise setting.
21/04/2026
Contractor
Job Title: Senior Full Stack Developer (React & C#.NET)
Contract Length: 12 Months
Location: London - 3 days a week on site
Daily Rate: Circa £650/day (Inside IR35)
About the Role
Join our dynamic team as a Senior Full Stack Developer, where your expertise in React JS and C# .NET Core will shine! We are looking for a tech-savvy professional with a passion for building scalable applications within the financial sector. This is an exciting opportunity to work with cloud-native architectures and cutting-edge technologies. If you thrive in a collaborative environment and are eager to make an impact, we want to hear from you!
Technical Skills
Front-end: Strong proficiency in React.js, JavaScript/TypeScript, Redux/RTK, and modern UI patterns is essential.
Back-end: Expertise in C# .NET Core, REST APIs, and microservices-based development is essential.
Python (PySpark): Experience with data pipelines and analytics is a plus.
Power BI: Familiarity with developing dashboards and data models.
Architecture: Knowledge of microservices, API design, and event-driven architectures.
DevOps/CI/CD: Experience with Azure DevOps, GitLab, GitHub Actions, or similar tools.
Cloud Platforms: Proficient in Azure or AWS (Azure preferred).
Databases: Strong skills in SQL Server, PostgreSQL, or other relational databases.
Containerisation: Experience with Docker and Kubernetes.
Key Responsibilities
Design & Develop: Create and maintain full stack applications using React JS and C#.NET Core.
Build Services: Develop high-performance backend services and RESTful APIs with a focus on scalability and resilience.
Create UI Components: Craft responsive and modular UI components using React JS and modern JavaScript/TypeScript patterns.
Collaboration: Work closely with business stakeholders and domain experts in Risk, Regulatory Reporting, and Finance.
Power BI Expertise: Utilise your knowledge of Power BI to develop dashboards and analytical reports.
CI/CD Implementation: Collaborate with DevSecOps to optimise CI/CD pipelines, ensuring automated testing and deployment.
Architecture Contribution: Contribute to event-driven and distributed system designs using technologies like Kafka or Event Hub.
Mentorship: Guide junior developers and foster a collaborative atmosphere within a cross-functional agile team.
Soft Skills
A strong engineering mindset with a relentless curiosity.
Excellent analytical and problem-solving abilities.
Exceptional communication skills to interact with both technical and non-technical stakeholders.
An agile mindset with experience in Scrum/Agile environments.
Ability to work independently and lead technical solutions from start to finish.
Why Join Us?
Work on innovative projects that make a difference in the financial sector. Collaborate with a talented team in a vibrant environment. Enjoy a competitive daily rate and flexible working arrangements. If you are ready to take your career to the next level and contribute to exciting projects, apply now! We can't wait to meet you!
Pontoon is an employment consultancy. We put expertise, energy, and enthusiasm into improving everyone's chance of being part of the workplace. We respect and appreciate people of all ethnicities, generations, religious beliefs, sexual orientations, gender identities, and more. We do this by showcasing their talents, skills, and unique experience in an inclusive environment that helps them thrive. If you require reasonable adjustments at any stage, please let us know and we will be happy to support you.
20/04/2026
Contractor
The role will be for 18 months on a fixed term basis. Based from our office in Breedon on the Hill, Derbyshire circa 3 days a week, therefore applicants must live within a commutable distance of this location.

BREEDON GROUP PLC is a leading construction materials group operating from over 400 sites across the UK, Ireland and US. We have an opportunity to join our Data & Analytics team, supporting the delivery of seamless data integrations across our platforms and systems. We are seeking an experienced Azure Integration Engineer who will support the design, development and maintenance of data connections that help power our business systems.

The Role
As we continue to evolve into a data-driven organisation, we recognise the importance of effective and reliable data integrations. We are looking for a motivated and detail-oriented individual with experience in building and maintaining reliable integrations across enterprise systems. You will design, develop, and support integration services that ensure seamless communication between business applications. Working closely with developers and analysts, you'll take ownership of integration workflows, ensure testing and monitoring activities, and troubleshoot production issues independently.

Key Responsibilities
Develop and maintain scalable data integration workflows using Azure Logic Apps, Azure Functions and other components as required.
Build and manage secure APIs leveraging Azure API Management to enable seamless integration between internal systems.
Implement reliable, event-driven and message-based integrations using Azure Service Bus and Azure Event Grid to support decoupled architectures.
Monitor integration jobs and respond to system alerts as part of daily operations.
Take ownership of testing activities to validate data accuracy and system performance.
Work with senior team members and business stakeholders to gather requirements and translate business needs into robust integration solutions.
Document integration processes and contribute to the maintenance of technical documentation.
Investigate and resolve integration issues in a structured and timely manner.
Participate in improvement efforts to enhance existing data workflows and reduce manual tasks.

Skills, Knowledge & Expertise

Preferred Qualifications
Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent experience gained in a professional data or integration environment.

Experience and Knowledge
Minimum of 2 years of experience in an Azure integration-focused role.
Hands-on experience with Azure Integration Tools (Azure Functions, Azure Logic Apps, API Management, Service Bus, Event Grid).
Solid understanding of APIs (REST/SOAP) and integration patterns.
Proficiency in C#, SQL and Python, and a solid understanding of relational databases and data transformation concepts.
Familiarity with data formats such as JSON and XML.
Exposure to CI/CD pipelines and DevOps practices.
Some experience or awareness of ETL or middleware tools (e.g., Azure Data Factory, SSIS, Boomi, Kubernetes).
Experience provisioning and managing infrastructure as code using Terraform is beneficial.

Skills
Attention to detail with a focus on data accuracy and system reliability.
Eagerness to learn and develop within the integration and data space.
Logical approach to solving problems and debugging technical issues.
Effective communication skills, with the ability to collaborate across teams.
Strong organisational and time management skills.
Able to work independently with minimal guidance, prioritise tasks, and lead integration activities.

Personal Attributes
Curious, motivated, and committed to continuous learning.
Adaptable and flexible in a changing environment.
Team-oriented, with a collaborative and supportive mindset.
Positive attitude and strong work ethic.
Dependable and proactive in completing tasks and following through on responsibilities.

Job Benefits
25 days holiday plus bank holidays
Contributory Pension Scheme
Free on-site Parking
Holiday Buy Scheme
Volunteer Scheme
Share Save Scheme
Life Assurance
Enhanced Maternity, Adoption & Paternity Scheme
Health & Wellbeing Initiatives
Discount Scheme
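The responsibilities above centre on event-driven, message-based integrations via Azure Service Bus and Event Grid to support decoupled architectures. As a minimal sketch of that decoupling pattern (standard library only; `queue.Queue` stands in for a Service Bus queue, and the event shape and function names are hypothetical):

```python
import queue

# queue.Queue stands in for an Azure Service Bus queue: the producer
# (source system) and the consumer (target system) never call each
# other directly, which is the decoupling described above.
bus = queue.Queue()

def publish_order(order_id: int, status: str) -> None:
    """Producer side: put an event onto the bus and return immediately."""
    bus.put({"order_id": order_id, "status": status})

def drain_events() -> list:
    """Consumer side: process every event currently on the bus."""
    handled = []
    while not bus.empty():
        event = bus.get()
        handled.append(f"order {event['order_id']} -> {event['status']}")
    return handled

publish_order(101, "created")
publish_order(102, "shipped")
print(drain_events())
# ['order 101 -> created', 'order 102 -> shipped']
```

With a real Service Bus queue the producer and consumer would also run in separate processes, so either side can be deployed, scaled, or taken offline without the other noticing.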
20/04/2026
Full time
Exponential-e
Founded in 2002, Exponential-e swiftly established itself as a UK Cloud, Connectivity and Communications pioneer. Throughout our history, a focus upon leveraging leading-edge technology to deliver profitable and innovative services to our clients and prospects has resulted in industry and peer recognition for our ground-breaking approach: a truly world-class ICT services company. We're a company of innovators who think big and achieve bigger! Our people are crucial to the continuing success of our company. From our CEO to our new Graduates, each of our people demonstrates our PRIDE principles, which are at the core of everything we do.

Job Description

Overall purpose of the job:
Exponential-e's Cloud products and services are continuing to grow but at the same time are evolving. With Public/Hyperscale Cloud becoming more commonplace and many of Exponential-e's customers adopting a hybrid approach to Cloud, a Senior Cloud Engineer is critical to this growth. As the Senior Cloud Engineer, responsibilities will lie across the Enterprise Cloud products and services delivered. The role is to deliver the following:
Hyperconverged & Native Virtualisation
Backup Architecture
Enterprise and Service Provider Storage
Replication Technologies

As a Senior Cloud Engineer within the Cloud Services department, the role will focus on Cloud and Digital technologies, in particular the virtualisation space. The role will involve staying at the forefront of new technologies and features to maintain industry-leading integrations and future-proof all customer designs. You will work with Operational, Delivery and DevOps practices within Exponential-e to ensure that all technologies fall within the Exponential-e ecosystem to aid customer satisfaction. With a strong background in Enterprise support environments, you will help customers manage change, minimise risk, optimise operations, and support business growth through proactive and personable interactions. In doing so, you will build long-term customer relations, understanding their business goals and operational capability in order to best align and manage the customer solution and drive Continuous Deployment and Integration.

Key responsibilities for this job:
Work with the Cloud Operations Manager to deliver an in-depth overview of our technical architectures and product capabilities that provide our unique value in the market.
Work with the Cloud Operations and Innovation teams to identify systems, processes, and procedures to improve the delivery of cloud services.
Ownership of specific technology domains within the Enterprise cloud space.
Defining best practice for Cloud adoption through the creation of an Enterprise Cloud Adoption Framework, improving speed of delivery through a standardised and robust set of cloud patterns with minimal manual intervention.
Provide a broad range of skills to deliver infrastructure solutions that meet our customers' business requirements.
Provide operational assistance to Cloud Operational and Delivery teams where necessary.
Ensure that new products & services are evaluated and all transition tasks are completed.
Assist in the identification of potential sales opportunities, discovering the customer's business requirements and challenges.
Collaborate with all departments within Engineering to ensure that systems are built in a cohesive and secure way.

Knowledge & Experience Required:
Experience with NetApp Storage All Flash Arrays.
Experience with Dell EMC Storage arrays such as VNX, Unity and PowerVault.
Extensive knowledge of VMware vSphere and virtualisation technologies.
Understanding of configuring and deploying shared and private cloud infrastructure.
Exposure to enterprise cloud environments with multiple hosts and clusters.
Experience working within a Managed Services or Cloud Provider environment.
Understanding and working knowledge of the ITIL Framework.
Familiarity with disaster recovery technologies such as Zerto.
Knowledge of Public Clouds such as Microsoft Azure.
Fundamental understanding of networking technologies.
Ability to work collaboratively and coordinate with other departments and individuals at all levels.

Our People
Our people are what make Exponential-e Group the company it is today. This year's employee survey highlighted that 81% of employees who took the survey would recommend a friend to work for our organisation. Learning and development are fundamental parts of daily life at Exponential-e. From their first day at the company, everyone is provided ample opportunities to develop their skills and broaden their horizons, with our own L&D team running a range of bespoke courses based on the latest innovations and challenges across the digital landscape. Exponential-e Group is committed to providing equal opportunities in employment and treating all employees with respect and dignity. The company respects and values the diversity of its staff, striving to maintain an environment where there is opportunity for everyone to feel valued, their talents to be utilised and for both personal and organisational aspirations to be met. Every employee plays a vital role in helping to create an inclusive working environment by understanding and harnessing difference in a positive way.
20/04/2026
Full time
Description
We are seeking an experienced Technical Architect to support the design and evolution of large-scale, cloud-based data platforms, working across our portfolio of clients. The Technical Architect will play a key role in shaping solution design patterns, ensuring alignment with established standards, and supporting strategic transitions and migrations across AWS & Azure.

Key Responsibilities
Define and evolve technical architecture patterns for data ingestion, processing, and access.
Design scalable, resilient, and cost-efficient data solutions within a Hub and Spoke model.
Support the design of new data ingestion pipelines (batch and real-time).
Ensure alignment with organisational architectural standards and governance frameworks.
Contribute to target architecture roadmaps.
Provide architectural guidance across: data ingestion (Kafka, APIs, SFTP); data processing (PySpark, EMR, Glue); storage (S3 and data lake patterns).
Collaborate with DevOps, Data Engineers, and Testers to ensure cohesive delivery.
Promote engineering best practices, including CI/CD, infrastructure as code, and observability.
Ensure robust handling of schema evolution and upstream data changes.
Support onboarding of new data sources and services into the platform.
Ensure solutions meet requirements for data quality and consistency, performance and scalability, and security and compliance.
Work within defined data modelling ownership boundaries where applicable.
Support cloud strategy evolution.
Avoid platform lock-in and ensure portable, future-proof designs.
Contribute to technical decision-making for future platform direction.
Work in blended, cross-functional teams.
Provide technical leadership and mentoring to delivery teams.
Ensure effective knowledge transfer and capability uplift.

Required Skills & Experience
Strong experience designing modern cloud-based data platforms.
Hands-on architectural experience with AWS (essential: S3, EMR, Glue), Kafka/event streaming architectures, and Python & PySpark-based data processing.
Experience designing data ingestion pipelines (batch and real-time).
Proficiency in Infrastructure as Code (Terraform).
Experience with GitHub-based workflows and CI/CD pipelines.
Experience with data lake and lakehouse architectures.
Strong understanding of data ingestion patterns, data transformation and curation layers, and data access and productisation.
Ability to design for large-scale datasets.
Experience supporting cloud migrations.
Knowledge and experience with Azure, Microsoft Fabric and Databricks would be beneficial.
Familiarity with event-driven and streaming-first architectures at scale.
Strong stakeholder engagement and cross-team collaboration skills.
Ability to operate effectively within existing governance and standards.
Pragmatic decision-making, balancing delivery pace and technical quality.
Clear communication, with the ability to translate complex architecture into actionable guidance.
Experience working in large, complex enterprise environments.
This role will require the ability to obtain and hold UK SC Clearance.
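One responsibility above is robust handling of schema evolution and upstream data changes. A common pattern is to normalise records at the ingestion boundary: missing fields take defaults and unknown fields are dropped, so older and newer producers both remain readable. A minimal Python sketch (the schema and field names are hypothetical; real platforms would typically enforce this with a schema registry or PySpark schemas):

```python
# Expected schema with defaults -- a hypothetical "v2" of the record.
EXPECTED = {"id": None, "amount": 0.0, "currency": "GBP"}

def normalise(raw: dict) -> dict:
    """Coerce an upstream record to the expected schema:
    missing fields take defaults, unknown fields are dropped."""
    return {key: raw.get(key, default) for key, default in EXPECTED.items()}

old_record = {"id": 1, "amount": 9.5}                   # producer predates 'currency'
new_record = {"id": 2, "amount": 3.0, "currency": "EUR",
              "region": "EMEA"}                         # newer producer, extra field

print(normalise(old_record))  # {'id': 1, 'amount': 9.5, 'currency': 'GBP'}
print(normalise(new_record))  # {'id': 2, 'amount': 3.0, 'currency': 'EUR'}
```

The same idea scales up: Avro/Protobuf schema registries formalise exactly this default-and-ignore contract so Kafka consumers survive upstream changes.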
20/04/2026
Full time
Role: Senior Python/GenAI Developer
Location: Dublin or Belfast (Hybrid - 3 Days In-Office)
Role Type: Permanent/Full-Time (FTE)

Our client is looking for a Senior GenAI Application Developer/Engineer to join their global technology hub in Dublin. This is a high-impact, permanent role designed for a Python expert who can move beyond basic AI experimentation and into the engineering of production-grade, autonomous systems.

What our client is looking for:
The Python Specialist: A developer with 6-10 years of professional experience. You must have "under-the-hood" knowledge of Python, specifically for building high-throughput microservices and complex data pipelines using FastAPI, Pandas, and NumPy.
The RAG & Agentic Expert: This is the critical requirement. Our client needs someone with deep hands-on experience building Retrieval-Augmented Generation (RAG) pipelines and agentic frameworks. You should know how to use LangChain or LlamaIndex to create AI that can execute multi-step tasks.
The Data Architect: Proficiency in vector databases is essential. You should be comfortable designing data persistence layers using pgvector, Pinecone, Milvus, or MongoDB Atlas to handle large amounts of unstructured data.
The MLOps Engineer: You don't just write code; you ship it. Our client requires experience deploying GenAI models into production using Kubernetes (or OpenShift) and establishing robust CI/CD pipelines via Jenkins, GitLab, or Azure DevOps.
The AI Safety Advocate: A working knowledge of guardrails is key. You should understand how to assess the performance and safety of GenAI features to ensure they meet the rigorous standards of a global bank.

If you are interested then please apply or share your updated CV with your availability, and I will give you a call back to discuss the role further.

Randstad Technologies Ltd is a leading specialist recruitment business for the IT & Engineering industries. Please note that due to a high level of applications, we can only respond to applicants whose skills & qualifications are suitable for this position. No terminology in this advert is intended to discriminate against any of the protected characteristics that fall under the Equality Act 2010. For the purposes of the Conduct Regulations 2003, when advertising permanent vacancies we are acting as an Employment Agency, and when advertising temporary/contract vacancies we are acting as an Employment Business.
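The role above centres on RAG pipelines backed by vector stores. The core retrieval step can be sketched in a few lines of standard-library Python: "embed" the query and documents, score by cosine similarity, and prepend the best match to the prompt. The bag-of-words embedding, document texts, and function names here are toy stand-ins for illustration only; real pipelines use a learned embedding model and a vector database such as pgvector or Pinecone, usually via LangChain or LlamaIndex.

```python
import math
from collections import Counter

# Toy in-memory document store, standing in for a vector database.
DOCS = [
    "the bank settles trades at end of day",
    "risk limits are reviewed every quarter",
    "python services are deployed on kubernetes",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real RAG uses a trained model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str) -> str:
    """Retrieval step of RAG: return the most similar stored document."""
    q = embed(query)
    return max(DOCS, key=lambda doc: cosine(q, embed(doc)))

context = retrieve("when are risk limits reviewed")
prompt = f"Answer using this context: {context}\nQ: when are risk limits reviewed"
print(context)
# risk limits are reviewed every quarter
```

The generation step would then send `prompt` to an LLM; the "agentic" part of the role extends this loop so the model can choose tools and chain multiple retrieve-and-act steps.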
20/04/2026
Full time