IT Job Board
Sorry, that job is no longer available. Here are some results that may be similar to the job you were looking for.

82 jobs found

Current search: associate technical lead python
IT Infrastructure Manager
University of Glasgow, Glasgow, UK
College of Medical, Veterinary and Life Sciences, School of Psychology & Neuroscience
IT Infrastructure Manager
Vacancy Ref: 158172
Salary: Grade 8, £49,320 - £56,921 per annum
This post is full time and open ended (permanent). Relocation assistance will be provided where appropriate.

The University of Glasgow is seeking to appoint a talented and highly motivated IT Infrastructure Manager. The post holder will work closely with the Computing Support Manager to ensure that infrastructure investments align with departmental needs and strategic priorities, optimising resource allocation and financial planning. In addition, the successful candidate will oversee the management and development of support staff, fostering a culture of excellence through the implementation of best practices, strategic talent development initiatives, and the execution of annual performance reviews.

For more information about the infrastructure and the scope of the job, or for informal enquiries, please contact Raymond Elma, Raymond.Elma@glasgow.ac.uk

Job Purpose
Reporting to the Computing Support Manager, you will take a leading role in managing and maintaining the school's IT infrastructure, which is essential for the research, teaching and professional services of Psychology and Neuroscience. Your primary responsibility will be to collaborate with the Computing Support Manager to manage and maintain the specialised core IT infrastructure, as well as to provide user support and system development.
Main Duties and Responsibilities
  • Lead in evaluating and enhancing the effectiveness of the School's IT infrastructure, maximising service quality, efficiency and continuity.
  • Lead the management of infrastructure, data centres and server hardware across the product lifecycle.
  • Provide and manage core Linux and Microsoft Windows systems to ensure vital DNS, directory, desktop and storage services remain available, secure and patched.
  • Lead the management of web services and Content Management Systems running Apache, PHP, Tomcat, MySQL/MariaDB and Python.
  • Investigate new and emerging technologies through innovative design of complex systems and use of specialist IT equipment in Psychology and Neuroscience teaching and research, to deliver strategic and operational benefits.
  • Manage the allocated portion of the IT budget, ensuring an effective split between end-user and infrastructure management, and regularly present findings and budget impacts at board level to align with organisational strategy and support informed decision-making. Collaborate with the Computing Support Manager to contribute to the Computing Support Department's budget from an infrastructure perspective, including costing for upgrades, maintenance and other related expenses, ensuring that infrastructure investments align with departmental needs and strategic priorities.
  • Represent Psychology and Neuroscience at Campus and College IT forums, liaising with staff in Computing Service and providing specialist advice in areas such as security, data storage and governance to enhance the efficiency and effectiveness of IT provision in the University.
  • Oversee the management and development of support staff, fostering a culture of excellence through the implementation of best practices, strategic talent development initiatives, and the execution of annual performance reviews.
  • Lead end-to-end project management with a high degree of autonomy, ensuring successful project delivery from inception to completion. Oversee the creation of comprehensive documentation and provide training to colleagues as needed to support project objectives and knowledge transfer.
  • Partner with University Central IT to design and implement advanced IT security policies, ensuring alignment with institutional standards and enhancing the overall cybersecurity framework.

Knowledge, Qualifications, Skills and Experience

Knowledge/Qualifications
Essential:
A1 Scottish Credit and Qualification Framework Level 9 (Ordinary Degree, Scottish Vocational Qualification Level 4), or equivalent experience of personal development in a similar or related role.
A2 Ability to undertake the duties associated with this level of post.
A3 Comprehensive, expert current knowledge of IT standards, systems and provision to support delivery of research and teaching.
Desirable:
B1 Microsoft Certified: Windows Server Hybrid Administrator Associate certification, with a strong emphasis on proficiency in managing local Active Directory environments.
B2 Proficiency in macOS management, with JAMF certification.
B3 Experience of working in a Higher Education environment.

Skills
Essential:
C1 Skills in LAMP platforms (Linux, Apache, MySQL, PHP).
C2 Extensive experience in Linux/Unix administration, including user management (NIS domain), monitoring, optimising system performance, system updates, backups (ZFS) and network storage (NFS).
C3 Skill in managing and maintaining networking services (DNS, DHCP), including diagnosing and troubleshooting network problems.
C4 Expertise in Microsoft on-premises Active Directory and Windows Server 2019 and above.
C5 Ability to take a problem/project from conception to completion, interpreting and integrating technical and user needs appropriately.
C6 Ability to develop innovative solutions and to influence others to adopt them.
C7 Excellent interpersonal and communication (oral and written) skills.
C8 Demonstrable people/time/budget/project management skills of an appropriate level.
C9 Ability to work effectively with a high level of independence but also within a team.
C10 Strong analytical and innovative problem-solving skills.
C11 Ability to multitask successfully in a busy role with competing demands.
C12 Ability to work flexibly and adapt to changing environments.
C13 Ability to collaborate with teams within the organisation (e.g. the Information Services Security Team and Network Infrastructure Team).
Desirable:
D1 Understanding of cybersecurity principles to protect data and computational resources.
D2 Compliance with data privacy regulations and institutional IT policies.
D3 Ability to implement and maintain secure access protocols.
D4 Support for software installations, updates, and troubleshooting.
D5 Ability to provide technical support to faculty and students.
D6 Conducting training sessions on best practices for using the computing grid.
D7 Expertise in managing and maintaining high-performance computing (HPC) systems, Rocks Clusters or similar.
D8 Skills in enterprise server software and storage technologies such as Isilon, iDRAC, Microsoft failover clusters and VMware vCenter.
D9 Proficiency in virtualisation and containerisation technologies (e.g. Docker, Singularity).

Experience
Essential:
E1 Experience in leading a highly specialised infrastructure team.
E2 Substantial experience in server management and systems administration in a heterogeneous environment with a mix of Linux, Unix and MS Windows server technologies providing general services such as backup, mail, DNS, DHCP, printing and user accounts.
E3 Installation and administration of enterprise-level server hardware and software, including server management, virtualisation, and storage management.
E4 Significant experience of a higher-level programming or scripting language such as shell script, Python or PowerShell.
E5 Experience managing projects in a complex multidisciplinary organisation.
E6 Experience of taking responsibility for actions that can have considerable impact on the user community.
E7 Experience of negotiating with colleagues.
Desirable:
F1 Supporting research in an academic environment.
F2 Supporting MySQL/MariaDB relational database servers.
F3 Security work including network penetration testing, diagnosis, and patching.
F4 Experience of GDPR (General Data Protection Regulation), Caldicott and the processing of personal and medical data.
F5 Knowledge of libraries needed for GPU clusters and distributed computing frameworks.
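The essential criteria above ask for scripting experience (shell, Python or PowerShell) applied to administration tasks such as ZFS backups. As a purely illustrative sketch, not taken from the advert, a typical snapshot-retention decision can be expressed in a few lines of Python; the function name and 14-day policy here are hypothetical:

```python
from datetime import date, timedelta

def snapshots_to_prune(snapshot_dates, today, keep_days=14):
    """Return snapshot dates older than the retention window, oldest first."""
    cutoff = today - timedelta(days=keep_days)
    return sorted(d for d in snapshot_dates if d < cutoff)

# Three weeks of daily snapshots, pruned against a 14-day retention window
today = date(2024, 11, 25)
snaps = [today - timedelta(days=i) for i in range(21)]
old = snapshots_to_prune(snaps, today)
```

In practice the selected dates would be mapped back to snapshot names and handed to the storage tooling for deletion.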
25/11/2024
Full time
Adecco
Local Gov't Housing Data Analyst (Temp: West London)
An exciting opportunity has emerged for a Data Analyst to join the homelessness department at one of Adecco's leading Local Government clients, in a temporary role for the next six months with potential extension beyond this. This is a full-time role (36 hours per week, Monday to Friday) working in a hybrid pattern from our client's West London office 1-2 days each week, and previous experience of working within a local government housing department would be highly desirable. The role reports directly to the Assistant Director Housing Demand/Programme Director. The work is analysing data in the service to provide management insight and is core to financial control within housing demand: it will assist in providing accurate budgetary forecasting and analysis of the cohort in temporary accommodation and of households presenting as homeless, and will enable the effective prioritisation of project work to manage spend within the directorate as well as improve outcomes for residents. There are data quality issues within our client's systems, so this role will need to actively assess the accuracy of the data, cross-compare sources and potentially carry out other investigatory work to form a view about reliability, as well as identify ways to cleanse the data and resolve some of the issues identified. Other key elements of this role include:
  • Designing, developing, testing and debugging SQL Server Integration Services (SSIS) packages against Power BI reports
  • Providing technical support to interpret business and service needs, enabling new and improved reports
  • Being an expert for the housing business when discussing the use of Big Data and explaining the stories the data evidences against report outputs
  • Driving optimal, innovative, scalable and high-performing solutions for Business Intelligence and Visualisation, as part of a broader Data and Analytics portfolio
  • Working with business and IT partners to understand and improve the data, and to deliver informative visual solutions that integrate with the backend database
  • Influencing and educating business users to ensure data is accurate and evidences alignment to business deliverables and targets
  • Guiding and leading solution delivery for Business Intelligence and Visualisation of data
  • Working with functional and technical associates to gather and refine business requirements, provide technical support/consulting, plan and prioritise work, and coordinate the estimation and quotation of work to be done by various teams
  • Building out SQL and Progress databases for Power BI reports
  • Transforming raw data into meaningful insights, with the ability to produce interactive and user-friendly dashboards and reports
  • Performing a wide range of tasks such as reporting, building dashboards, building data models, analysing datasets, and administration of Power BI tools
Candidates must have extensive knowledge and expertise in business intelligence, databases, and the technical aspects of BI tools, including:
  • Experience in data preparation, data gateway, and data warehousing projects
  • Experience working with the Microsoft Business Intelligence stack (Power BI, SSAS, SSRS and SSIS)
  • Experience with a self-service tool such as Power BI or Tableau
  • Understanding of SQL, and an ability to produce reports with direct backend data feeds to support updates
Key relationships (both internal and external) in this role will be with:
  • Strategy and Change colleagues, as well as those in other parts of the organisation
  • External organisations and partners such as the NHS Borough-Based Partnership, the Office for National Statistics, the Greater London Authority, and the London Office of Technology and Innovation
  • External providers/consultancies
  • Local Government networks and employer bodies
  • Councillors
The ideal candidate will be an expert in understanding and applying a range of modern tools and techniques to analyse data, with excellent skills in querying and reporting on datasets through modern tools such as R and Python, including creating dashboards and visualisations. Substantial experience of working in data and analysis in a local authority or housing organisation would be highly desirable. Interviews will take place virtually before Christmas 2025, and applicants will ideally be immediately available or on a short notice period (1-2 weeks maximum). Only applicants who feel they meet the above criteria need apply.
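The advert centres on direct backend data feeds and transforming raw data into management insight. As a hypothetical illustration only (the schema, table name and figures below are invented, not drawn from the client's systems), a direct SQL feed of the kind described might aggregate temporary-accommodation spend like this:

```python
import sqlite3

# Invented example schema: one row per household placement
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE placements (household_id INTEGER, accom_type TEXT, nightly_cost REAL)"
)
conn.executemany("INSERT INTO placements VALUES (?, ?, ?)", [
    (1, "B&B", 95.0), (2, "B&B", 110.0), (3, "Hostel", 60.0), (4, "PSL", 45.0),
])

# The kind of aggregate a Power BI report could consume over a direct feed:
# households and nightly spend per accommodation type, costliest first
rows = conn.execute(
    "SELECT accom_type, COUNT(*) AS households, "
    "ROUND(SUM(nightly_cost), 2) AS nightly_spend "
    "FROM placements GROUP BY accom_type ORDER BY nightly_spend DESC"
).fetchall()
```

A dashboard layer would then visualise `rows` and track the spend figures against the directorate's budget forecast.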
13/12/2025
Seasonal
Jonathan Lee Recruitment Ltd
Senior Satcom Systems Engineer - Defence
Senior Satcom Systems Engineer - Permanent - Attractive salary - Aerospace and Defence - WFH/Hybrid/Remote - Bedfordshire

Due to expansion, a Senior Satcom Systems Engineer is required within a leading Telecoms, Satellite, Defence and Space Systems Technology Company. The successful Senior Satcom Systems Engineer will provide technical and commercial expertise to customers, and will be responsible for undertaking a variety of communications systems engineering tasks on advanced satellite communications networks and supported communications services. The role offers interesting, varied and challenging work; in return, the requirement is for an excellent breadth of systems engineering knowledge and experience. The successful candidate will ideally be able to demonstrate experience of many, although not necessarily all, of the following skills, experience and responsibilities.

Responsibilities for the Senior Satcom Systems Engineer:
  • Experience of working as an integral part of a highly focused team
  • Good customer-facing and communication skills
  • Ability to produce and present clear, concise and unambiguous presentations to customers
  • Strong analytical skills, with the ability to identify key issues and solve day-to-day challenges
  • Proven ability to develop innovative solutions to defined problems

Skills, Experience and Qualifications Required:
  • Years of relevant experience with satcom and/or telecommunication systems
  • Graduate with a good Honours or Master's degree in a relevant subject (e.g. Space Systems Engineering, Electronics, Communications); equivalent qualifications and/or experience can be considered as an alternative
  • Experience in satcom systems engineering with knowledge of system design, ground segment design, satellite operations, and payload engineering
  • Experience in IP networking, protocols, and security of data
  • Thorough understanding of communications systems and associated engineering concepts
  • Strong practical knowledge of systems engineering practices, from requirements engineering through design/development and on to VV&T
  • Experience in developing requirements, systems design documentation, test plans and procedures, and operational procedures
  • Knowledge of software development environments, languages, and methodologies
  • Proven track record of identifying and solving problems

Desirable Skills:
  • Chartered Engineer or equivalent
  • Practical experience in the commissioning and testing of satellite systems
  • Familiarity with OSS/BSS technology and communications networks
  • Experience in assessment/development of security architectures in communications systems
  • Experience of software development environments, languages (Python, Java) and methodologies (Agile, DevOps)
  • Knowledge of optical, 5G and/or quantum technologies

Security Clearance and British nationality are required due to the nature of the systems and products involved.

If you feel you meet the requirements for the role of Senior Satcom Systems Engineer, then apply directly or contact Peter Heap at Jonathan Lee Recruitment on either (phone number removed) or email a suitable CV to (url removed). Your CV will be forwarded to Jonathan Lee Recruitment, a leading engineering and manufacturing recruitment consultancy established in 1978. The services advertised by Jonathan Lee Recruitment are those of an Employment Agency. In order for your CV to be processed effectively, please ensure your name, email address, phone number and location (post code OR town OR county, as a minimum) are included.
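The role above assumes familiarity with satellite link fundamentals such as ground segment and payload engineering. As an illustrative aside, not part of the advert, the standard free-space path loss formula, FSPL(dB) = 20 log10(d_km) + 20 log10(f_GHz) + 92.45, is easily checked in a few lines of Python (the GEO distance and Ku-band frequency below are just example inputs):

```python
import math

def free_space_path_loss_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB for distance in km and frequency in GHz."""
    # FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Example: a geostationary satellite (~35,786 km) at 12 GHz (Ku-band downlink)
geo_ku = free_space_path_loss_db(35786, 12.0)
```

Figures of this kind feed directly into the link budgets a satcom systems engineer maintains for ground segment and payload design.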
12/12/2025
Full time
Senior Satcoms Systems Engineer - Permanent - Attractive salary - Aerospace and Defence - WFH/Hybrid/Remote - Bedfordshire

Due to expansion, a Senior Satcom Systems Engineer is required within a leading Telecoms, Satellite, Defence and Space Systems Technology Company. The successful Senior Satcom Systems Engineer will provide technical and commercial expertise to customers and will be responsible for undertaking a variety of communications systems engineering tasks on advanced satellite communications networks and supported communications services. The role offers interesting, varied, and challenging work; in return, an excellent breadth of systems engineering knowledge and experience is required. Ideally, the successful candidate will be able to demonstrate experience of many, although not necessarily all, of the following skills, experience, and responsibilities.

Responsibilities for the Senior Satcoms Systems Engineer:
- Experience of working as an integral part of a highly focused team
- Good customer-facing and communication skills
- Ability to produce and present clear, concise, and unambiguous presentations to customers
- Strong analytical skills, with the ability to identify key issues and solve day-to-day challenges
- Proven ability to develop innovative solutions to defined problems

Skills, Experience and Qualifications Required:
- Years of relevant experience with satcoms and/or telecommunication systems
- A good Honours or Master's degree in a relevant subject (e.g. Space Systems Engineering, Electronics, Communications); equivalent qualifications and/or experience can be considered as an alternative
- Experience in satcoms systems engineering, with knowledge of system design, ground segment design, satellite operations, and payload engineering
- Experience in IP networking, protocols, and security of data
- Thorough understanding of communications systems and associated engineering concepts
- Strong practical knowledge of systems engineering practices, from requirements engineering through design/development and on to VV&T
- Experience in developing requirements, systems design documentation, test plans and procedures, and operational procedures
- Knowledge of software development environments, languages, and methodologies
- Proven track record of identifying and solving problems

Desirable Skills:
- Chartered Engineer or equivalent
- Practical experience in the commissioning and testing of satellite systems
- Familiarity with OSS/BSS technology and communications networks
- Experience in assessment/development of security architectures in communications systems
- Experience of software development environments, languages (Python, Java), and methodologies (Agile, DevOps)
- Knowledge of optical, 5G and/or quantum technologies

Security clearance is required, and applicants must be British nationals due to the nature of the systems and products involved.

If you feel you meet the requirements for the role of Senior Satcom Systems Engineer, apply directly or contact Peter Heap at Jonathan Lee Recruitment on (phone number removed), or email a suitable CV to (url removed). Your CV will be forwarded to Jonathan Lee Recruitment, a leading engineering and manufacturing recruitment consultancy established in 1978. The services advertised by Jonathan Lee Recruitment are those of an Employment Agency. For your CV to be processed effectively, please ensure your name, email address, phone number and location (postcode OR town OR county, as a minimum) are included.
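The satcom systems-engineering skills listed above routinely involve link-budget analysis. As a rough, purely illustrative sketch of the kind of calculation involved (all figures below are hypothetical and not taken from this advert), a downlink carrier-to-noise-density estimate can be computed from EIRP, receiver G/T and free-space path loss:

```python
import math

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

def cn0_dbhz(eirp_dbw: float, gt_dbk: float, freq_hz: float, dist_m: float,
             misc_losses_db: float = 0.0) -> float:
    """Downlink C/N0 (dB-Hz) = EIRP + G/T - FSPL - losses - k (Boltzmann in dB)."""
    boltzmann_db = -228.6  # 10*log10(1.38e-23 J/K)
    return eirp_dbw + gt_dbk - fspl_db(freq_hz, dist_m) - misc_losses_db - boltzmann_db

# Example: Ku-band GEO downlink with illustrative numbers
# (12 GHz, 38,000 km slant range, 50 dBW EIRP, 20 dB/K terminal G/T).
loss = fspl_db(12e9, 38_000_000.0)
cn0 = cn0_dbhz(eirp_dbw=50.0, gt_dbk=20.0, freq_hz=12e9, dist_m=38_000_000.0,
               misc_losses_db=4.0)
print(f"FSPL: {loss:.1f} dB, C/N0: {cn0:.1f} dB-Hz")
```

In a real design, the C/N0 result would then be compared against the threshold required by the chosen modulation and coding scheme, with margin for rain fade and pointing losses.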
Senior Design Architect
Adroit People Ltd
Greetings! We are hiring for an EOS Design and Architect role.

Location: Sheffield, UK (hybrid; 3 days onsite required)
Contract: 6 months
Must have: Ansible Tower and Ansible Playbook experience

Job Description: EOS Design and Architect role. We are seeking an experienced OpenShift Architecture and Migration Design Specialist to lead the design, planning, and execution of OpenShift architectures and migration strategies. The ideal candidate will have expertise in designing robust, scalable, and secure OpenShift environments, as well as creating and implementing migration plans for transitioning workloads and applications to OpenShift. Experience with VMware and Pure Storage is essential to ensure seamless integration with existing infrastructure.

Key Responsibilities:
1. Architecture Design:
   - Design the target architecture for OpenShift, including cluster topology, networking, and storage solutions.
   - Define and implement best practices for OpenShift cluster setup, including multi-zone and multi-region deployments.
   - Ensure the architecture supports high availability, fault tolerance, and disaster recovery.
2. Migration Design and Optimization:
   - Assess existing infrastructure, applications, and workloads to determine migration readiness.
   - Develop detailed migration plans, including strategies for containerization, workload transfer, and data migration.
   - Implement migration processes, ensuring minimal downtime and disruption to business operations.
   - Identify and mitigate risks associated with the migration process.
3. VMware and Pure Storage Integration Design:
   - Design and implement OpenShift solutions that integrate seamlessly with VMware virtualized environments.
   - Leverage VMware tools (e.g. vSphere, vCenter, NSX) to optimize OpenShift deployments.
   - Configure and manage Pure Storage solutions (e.g. FlashArray, FlashBlade) to provide high-performance, scalable storage for OpenShift workloads.
   - Ensure compatibility and performance optimization between OpenShift, VMware, and Pure Storage.
4. CI/CD Pipelines and DevOps Workflows:
   - Design and implement CI/CD pipelines tailored for the OpenShift environment.
   - Integrate DevOps workflows with OpenShift-native tools and third-party solutions.
   - Automate deployment, scaling, and monitoring processes to streamline application delivery.
5. Scalability and Security:
   - Ensure the architecture and migration plans are scalable to meet future growth and workload demands.
   - Implement security best practices, including role-based access control (RBAC), network policies, and encryption.
   - Conduct regular security assessments and audits to maintain compliance with organizational standards.
6. Collaboration and Documentation:
   - Work closely with development, DevOps, and operations teams to align architecture and migration plans with business needs.
   - Provide detailed documentation of the architecture, migration strategies, workflows, and configurations.
   - Offer technical guidance and training to teams on OpenShift architecture, migration, and best practices.

Required Skills and Qualifications:
- Strong experience in designing and implementing OpenShift architectures and migration strategies.
- In-depth knowledge of Kubernetes, containerization, and orchestration.
- Expertise in VMware tools and technologies (e.g. vSphere, vCenter, NSX).
- Hands-on experience with Pure Storage solutions (e.g. FlashArray, FlashBlade).
- Expertise in networking concepts (e.g. ingress, load balancing, DNS) and storage solutions (e.g. persistent volumes, dynamic provisioning).
- Hands-on experience with CI/CD tools (e.g. Jenkins, GitHub, ArgoCD) and DevOps workflows.
- Strong understanding of high availability, scalability, and security principles in cloud-native environments.
- Proven experience in workload and application migration to OpenShift or similar platforms.
- Proficiency in scripting and automation (e.g. Bash, Python, Ansible, Terraform).
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- OpenShift certifications (e.g. Red Hat Certified Specialist in OpenShift Administration).
- Experience with multi-cluster and hybrid cloud OpenShift deployments.
- Familiarity with monitoring and logging tools (e.g. OTel, Grafana, the Splunk stack).
- Knowledge of OpenShift Operators and Helm charts.
- Experience with large-scale migration projects.
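The RBAC responsibility above maps onto Kubernetes/OpenShift Role and RoleBinding objects. As a minimal sketch (the namespace, role and group names are invented for illustration), such manifests can be generated programmatically and piped to `oc apply -f -`:

```python
import json

def make_role(namespace: str, name: str, verbs: list[str]) -> dict:
    """A namespaced Role granting the given verbs on pods and deployments."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": name, "namespace": namespace},
        "rules": [
            {"apiGroups": [""], "resources": ["pods"], "verbs": verbs},
            {"apiGroups": ["apps"], "resources": ["deployments"], "verbs": verbs},
        ],
    }

def bind_role(namespace: str, role: str, group: str) -> dict:
    """A RoleBinding attaching the Role to an (illustrative) OpenShift group."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": f"{role}-binding", "namespace": namespace},
        "subjects": [{"kind": "Group", "name": group,
                      "apiGroup": "rbac.authorization.k8s.io"}],
        "roleRef": {"kind": "Role", "name": role,
                    "apiGroup": "rbac.authorization.k8s.io"},
    }

# Read-only access for a hypothetical 'app-viewers' group in namespace 'demo'.
manifests = [make_role("demo", "app-viewer", ["get", "list", "watch"]),
             bind_role("demo", "app-viewer", "app-viewers")]
print(json.dumps(manifests, indent=2))
```

Kubernetes accepts JSON as well as YAML, so the output can be applied directly; in practice this kind of templating is more often done through Ansible or Helm, as the advert suggests.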
11/12/2025
Full time
Tenth Revolution Group
Azure Data Engineer - £500 - Hybrid
Tenth Revolution Group Newcastle Upon Tyne, Tyne And Wear
Azure Data Engineer - £500 per day - Hybrid

We are seeking an Azure Data Engineer with strong experience in Databricks to design, build, and optimize scalable data pipelines and analytics solutions on the Azure cloud platform. The ideal candidate will have hands-on expertise across Azure data services, data modeling, ETL/ELT development, and collaborative engineering practices.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks (Python, PySpark, SQL).
- Build and optimize ETL/ELT workflows that ingest data from various on-prem and cloud-based sources.
- Work with Azure services including Azure Data Lake Storage, Azure Data Factory, Azure Synapse Analytics, Azure SQL, and Event Hub.
- Implement data quality validation, monitoring, metadata management, and governance processes.
- Collaborate closely with data architects, analysts, and business stakeholders to understand data requirements.
- Optimize Databricks clusters, jobs, and runtimes for performance and cost efficiency.
- Develop CI/CD workflows for data pipelines using tools such as Azure DevOps or GitHub Actions.
- Ensure security best practices for data access, data masking, and role-based access control.
- Produce technical documentation and contribute to data engineering standards and best practices.

Required Skills and Experience:
- Proven experience as a Data Engineer working with Azure cloud services.
- Strong proficiency in Databricks, including PySpark, Spark SQL, notebooks, Delta Lake, and job orchestration.
- Strong SQL and data modeling skills (e.g. dimensional modeling, data vault).
- Experience with Azure Data Factory or other orchestration tools.
- Understanding of data lakehouse architecture and distributed computing principles.
- Experience with CI/CD pipelines and version control (Git).
- Knowledge of REST APIs, JSON, and event-driven data processing.
- Solid understanding of data governance, data lineage, and security controls.
- Ability to solve complex technical problems and communicate solutions clearly.

Preferred Qualifications:
- Industry certifications (e.g. Databricks Data Engineer Associate/Professional, Azure Data Engineer Associate).
- Experience with Azure Synapse SQL or serverless SQL pools.
- Familiarity with streaming technologies (e.g. Spark Structured Streaming, Kafka, Event Hub).
- Experience with infrastructure-as-code (Terraform or Bicep).
- Background in BI or analytics engineering (Power BI, dbt) is a plus.

To apply for this role, please submit your CV or contact Dillon Blackburn on (phone number removed) or at (url removed). Tenth Revolution Group are the go-to recruiter for Data & AI roles in the UK, offering more opportunities across the country than any other recruitment agency. We're the proud sponsor and supporter of SQLBits, the Power Platform World Tour, and the London Fabric User Group. We are the global leaders in Data & AI recruitment.
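The data-quality-validation responsibility above typically means row-level rules that pass clean records downstream and quarantine the rest. A minimal pure-Python sketch of that pattern (column names and rules are invented for illustration; in Databricks this logic would normally be expressed as PySpark filters or Delta Live Tables expectations):

```python
from datetime import date

# Illustrative quality rules; in a real pipeline these would mirror the
# expectations declared on the ingested table.
RULES = {
    "customer_id": lambda v: isinstance(v, str) and v != "",
    "amount":      lambda v: isinstance(v, (int, float)) and v >= 0,
    "order_date":  lambda v: isinstance(v, date),
}

def validate(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split rows into (passed, quarantined) according to RULES."""
    passed, quarantined = [], []
    for row in rows:
        ok = all(check(row.get(col)) for col, check in RULES.items())
        (passed if ok else quarantined).append(row)
    return passed, quarantined

rows = [
    {"customer_id": "C1", "amount": 12.5, "order_date": date(2025, 1, 3)},
    {"customer_id": "",   "amount": -4.0, "order_date": date(2025, 1, 4)},
]
good, bad = validate(rows)
print(f"{len(good)} passed, {len(bad)} quarantined")
```

Routing failures to a quarantine table rather than dropping them silently is what makes the monitoring and governance items in the advert possible: the rejected rows can be counted, alerted on, and reprocessed.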
10/12/2025
Contractor
Head Resourcing
Data Engineer
Head Resourcing
Mid-Level Data Engineer (Azure / Databricks) - NO VISA REQUIREMENTS

Location: Glasgow (3+ days onsite)
Reports to: Head of IT

My client is undergoing a major transformation of their entire data landscape, migrating from legacy systems and manual reporting to a modern Azure + Databricks Lakehouse. They are building a secure, automated, enterprise-grade platform powered by Lakeflow Declarative Pipelines, Unity Catalog and Azure Data Factory. They are looking for a Mid-Level Data Engineer to help deliver high-quality pipelines and curated datasets used across Finance, Operations, Sales, Customer Care and Logistics.

What You'll Do

Lakehouse Engineering (Azure + Databricks):
- Build and maintain scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark and Spark SQL.
- Work within a Medallion architecture (Bronze → Silver → Gold) to deliver reliable, high-quality datasets.
- Ingest data from multiple sources, including ChargeBee, legacy operational files, SharePoint, SFTP, SQL, REST and GraphQL APIs, using Azure Data Factory and metadata-driven patterns.
- Apply data quality and validation rules using Lakeflow Declarative Pipelines expectations.

Curated Layers & Data Modelling:
- Develop clean and conforming Silver & Gold layers aligned to enterprise subject areas.
- Contribute to dimensional modelling (star schemas), harmonisation logic, SCDs and business marts powering Power BI datasets.
- Apply governance, lineage and permissioning through Unity Catalog.

Orchestration & Observability:
- Use Lakeflow Workflows and ADF to orchestrate and optimise ingestion, transformation and scheduled jobs.
- Help implement monitoring, alerting, SLAs/SLIs and runbooks to support production reliability.
- Assist in performance tuning and cost optimisation.

DevOps & Platform Engineering:
- Contribute to CI/CD pipelines in Azure DevOps to automate deployment of notebooks, Lakeflow Declarative Pipelines, SQL models and ADF assets.
- Support secure deployment patterns using private endpoints, managed identities and Key Vault.
- Participate in code reviews and help improve engineering practices.

Collaboration & Delivery:
- Work with BI and Analytics teams to deliver curated datasets that power dashboards across the business.
- Contribute to architectural discussions and the ongoing data platform roadmap.

Tech You'll Use:
- Databricks: Lakeflow Declarative Pipelines, Lakeflow Workflows, Unity Catalog, Delta Lake
- Azure: ADLS Gen2, Data Factory, Event Hubs (optional), Key Vault, private endpoints
- Languages: PySpark, Spark SQL, Python, Git
- DevOps: Azure DevOps Repos & Pipelines, CI/CD
- Analytics: Power BI, Fabric

What We're Looking For

Experience:
- Commercial, proven data engineering experience.
- Hands-on experience delivering solutions on Azure + Databricks.
- Strong PySpark and Spark SQL skills within distributed compute environments.
- Experience working in a Lakehouse/Medallion architecture with Delta Lake.
- Understanding of dimensional modelling (Kimball), including SCD Type 1/2.
- Exposure to operational concepts such as monitoring, retries, idempotency and backfills.

Mindset:
- Keen to grow within a modern Azure Data Platform environment.
- Comfortable with Git, CI/CD and modern engineering workflows.
- Able to communicate technical concepts clearly to non-technical stakeholders.
- Quality-driven, collaborative and proactive.

Nice to Have:
- Databricks Certified Data Engineer Associate.
- Experience with streaming ingestion (Auto Loader, event streams, watermarking).
- Subscription/entitlement modelling (e.g. ChargeBee).
- Unity Catalog advanced security (RLS, PII governance).
- Terraform or Bicep for IaC.
- Fabric Semantic Models or Direct Lake optimisation experience.

Why Join?
- Opportunity to shape and build a modern enterprise Lakehouse platform.
- Hands-on work with Azure, Databricks and leading-edge engineering practices.
- Real progression opportunities within a growing data function.
- Direct impact across multiple business domains.
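SCD Type 2, mentioned under dimensional modelling above, preserves history by closing out the current dimension row and inserting a new version whenever a tracked attribute changes. A minimal pure-Python sketch of the idea (the customer/city columns are invented for illustration; on the platform described above this would typically be a Delta Lake MERGE):

```python
from datetime import date

def scd2_upsert(dim: list[dict], key: str, attrs: list[str],
                incoming: dict, as_of: date) -> None:
    """Close the current row and insert a new version if any tracked attr changed."""
    current = next((r for r in dim
                    if r[key] == incoming[key] and r["end_date"] is None), None)
    if current and all(current[a] == incoming[a] for a in attrs):
        return  # no change: nothing to do
    if current:
        current["end_date"] = as_of  # close out the old version
    dim.append({key: incoming[key], **{a: incoming[a] for a in attrs},
                "start_date": as_of, "end_date": None})

# Hypothetical customer dimension tracking address history.
dim = []
scd2_upsert(dim, "customer_id", ["city"],
            {"customer_id": "C1", "city": "Glasgow"}, date(2025, 1, 1))
scd2_upsert(dim, "customer_id", ["city"],
            {"customer_id": "C1", "city": "Edinburgh"}, date(2025, 6, 1))
print(len(dim), dim[0]["end_date"], dim[1]["end_date"])
```

After the second upsert the dimension holds two rows for C1: the Glasgow row closed on the change date, and an open Edinburgh row, so fact tables can join to whichever version was current at transaction time.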
10/12/2025
Full time
vertex-it-solutions
IAM Technical Operations Engineer
vertex-it-solutions
Identity and Access Management (IAM) Technical Operations Engineer

1-year FTC with possible extensions. 4 days in the office.

One of our clients within the financial sector is looking to us to provide a dedicated resource to contribute to their mission of innovating their business and creating a superior customer experience. They need a talented Identity and Access Management (IAM) Operations Engineer with CyberArk and Delinea experience for the delivery of the core IAM products and services required to support the enterprise infrastructure and business-line applications of our client. In this role you will work as part of a global team that manages and supports the IAM services, including Privileged Access Management, Single Sign-On/Multi-Factor Authentication, and Directory Services. You will collaborate and coordinate with other IT leaders, technologists and support staff to provide a secure, resilient, and quality experience to the global user community.

Contract Term: twelve-month FTC based in London
Salary: 60k - 65k

Responsibilities and Duties:
- Serve as a multifaceted Operations Engineer for the global IAM department
- Provide implementation and ongoing support of net-new IAM platforms and services, or enhancements to existing ones
- Manage daily IAM fulfilment requests and provide consulting services to project initiatives on IAM best practices, processes, and support
- Participate in the global support of the enterprise IAM services, ensuring the required resiliency and service-level agreements are met
- Drive IAM compliance by conducting certifications, audits, and ongoing review of operational reporting
- Identify, manage and escalate, as appropriate, project risks, issues, and roadblocks to timely delivery
- Contribute to the development and maintenance of the IAM strategy and associated roadmaps

Qualifications/Experience Required:
- 5+ years Information Security experience, with hands-on experience in enterprise IAM platforms, including CyberArk and Delinea
- Access Management: Single Sign-On, Multi-Factor Authentication, Federation (SAML, OIDC, OAuth)
- Privileged Access Management: managing privileged accounts, session management, vaulting
- Directory Services: user/group management, Sites & Services, access control lists
- Security concepts: least privilege, Zero Trust, phishing-resistant authentication
- ITSM: incident management, change management, problem management
- Scripting and automation leveraging tools such as PowerShell or Python
- Ability to manage priorities and report progress as required
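The certification and audit duties above usually boil down to comparing what a user actually holds against a role's least-privilege baseline and flagging the excess. A minimal sketch of that check (the roles, entitlements and users are invented; a real review would pull grants from the IAM platform's APIs rather than hard-coded literals):

```python
# Hypothetical role baselines: the entitlements each role is *supposed* to have.
ROLE_BASELINE = {
    "trader":   {"trade-app", "market-data"},
    "engineer": {"vpn", "git", "ci"},
}

def excess_entitlements(role: str, granted: set[str]) -> set[str]:
    """Entitlements held beyond the role baseline (least-privilege violation)."""
    return granted - ROLE_BASELINE.get(role, set())

def certify(users: dict[str, tuple[str, set[str]]]) -> dict[str, set[str]]:
    """Return {user: excess} for every user whose access exceeds the baseline."""
    findings = {u: excess_entitlements(role, grants)
                for u, (role, grants) in users.items()}
    return {u: ex for u, ex in findings.items() if ex}

# Illustrative review: alice holds an admin grant her role doesn't justify.
users = {
    "alice": ("trader", {"trade-app", "market-data", "prod-db-admin"}),
    "bob":   ("engineer", {"vpn", "git"}),
}
print(certify(users))
```

Holding fewer entitlements than the baseline (bob) is not a finding; only excess access is escalated for revocation or a documented exception, which is what the operational-reporting duty covers.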
08/12/2025
Full time
Makutu
Data Engineer
Makutu City, Derby
About Us

Makutu designs, builds and supports Microsoft Azure cloud data platforms. We are a Microsoft Solutions Partner (Azure Data & AI) and are busy building a talented team with the relevant skills to deliver industry-leading data platforms for our customers.

The Role

The Data Engineer role is key to building and growing the in-house technical team at Makutu. The role will provide the successful applicant with the opportunity for significant career development while working with a range of large businesses to whom data is critical to their success. Working as part of the team and with the customer, you'll require excellent written and verbal English language and communication skills. Big growth plans are in place to build a broader and deeper technical capability with a focus on the Microsoft Azure technology stack. The position of Data Engineer is a key role in the wider capability of our team. Occasional visits to our Head Office and customers' sites will be required.

Key responsibilities:
- Identify, design, and implement working practices across data pipelines, data architectures, testing and deployment
- Understand complex business requirements and provide solutions to business problems
- Understand modern data architecture approaches and associated cloud-focused solutions
- Define data engineering best practice and share it across the organisation
- Collaborate with the wider team on data strategy

Skills and experience:
- A relevant Bachelor's degree in Computing, Mathematics, Data Science or similar (ideal but not essential)
- A Master's degree in Data Science (ideal but not essential)
- Experience building data pipelines with modern practices, including the use of cloud-native technologies, DevOps practices, CI/CD pipelines and agile delivery
- Experience with data modelling, data warehousing and data lake solutions
- Able to communicate effectively with senior stakeholders

Successful candidates will likely possess Azure certifications such as DP-600 and/or DP-700. Applicants will also have experience working with some of the following technologies: Power BI, Power Apps, Blob Storage, Synapse, Azure Data Factory (ADF), IoT Hub, SQL Server, Azure Data Lake Storage, Azure Databricks, Purview, Power Platform, Python.
02/12/2025
Full time
Hays Technology
Lead Data and AI Engineer
Hays Technology City, Leeds
Your new company
This is a pivotal opportunity to join the Data and Innovation division of a large, complex organisation leading the delivery of SAM (Supervisory Analytics and Metrics), a transformative programme enhancing supervisory decision-making through advanced data and analytics. You will architect and implement cloud-native data solutions aligned with the organisation's enterprise cloud strategy and SAM's Target Operating Model. This is a high-impact role where you'll shape the future of supervisory technology in a collaborative, forward-thinking environment.

Your new role
- Define and implement the data engineering strategy aligned with business and technology goals.
- Lead development of the data ingestion, quality, and metadata pipelines powering SAM's supervisory tools.
- Deliver scalable, secure, and production-ready data platforms using Azure and Databricks.
- Collaborate across Technology and DAT to integrate SAM solutions into the organisation's Enterprise Data Platform (EDP).
- Champion CI/CD, DevOps, data governance and federated development within the PRA's Hub & Spoke model.
- Mentor and coach data engineers on Azure tooling, pipeline management, coding practices, and design principles.
- Work with data governance teams to maintain a comprehensive data catalogue and ensure compliance with security and privacy regulations.
- Contribute to Communities of Practice and support the cloud-first strategy.

What you'll need to succeed
- Extensive experience in cloud-based data engineering (preferably Databricks), with a strong background in modernisation and large-scale migration.
- Expertise in Azure services (API Manager, App Service), Databricks, Spark, Python, SQL, and AI/ML frameworks.
- Proven track record of leading technical teams and delivering complex data solutions in production.
- Strong understanding of data governance, security, and compliance in regulated environments.
Essential Criteria
- Proven experience designing and deploying cloud-native data architectures at scale.
- Proficiency in Python, SQL and PySpark.
- Demonstrated ability to build secure, scalable, cost-efficient data solutions on Azure.
- Experience with data security and regulatory compliance tools (e.g. Microsoft Purview, Unity Catalog).
- Ability to translate strategic goals into technical delivery plans and roadmaps.

Desirable Criteria
- Experience designing and implementing AI/ML-driven solutions within data platforms.
- Relevant certifications (e.g. Databricks Engineer Professional, Azure Data Engineer Associate, Azure Solutions Architect).
- Advanced academic qualifications or industry recognition in data engineering and cloud technologies.
- Experience in DevOps practices using GitHub Actions and automated CI/CD pipelines.

What you'll get in return
This is a unique opportunity to work on a high-profile programme within a prestigious institution, contributing to the future of supervisory technology. The role is based in Leeds with flexible working arrangements and offers the chance to lead innovation in a supportive and mission-driven environment. Salary is negotiable on experience, from 70,000 to 100,000 package, plus an excellent benefits package including generous annual leave, a fantastic pension and hybrid working.

What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C's, Privacy Policy and Disclaimers which can be found at (url removed)
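The "data ingestion, quality, and metadata pipelines" responsibility usually comes down to rule-based gates that quarantine bad records before they reach downstream consumers. The sketch below is an invented, minimal version of that idea; the field names, rules and records are all illustrative, not the organisation's actual schema.

```python
# Hypothetical rule-based data-quality gate on an ingestion pipeline.
# Field names and rules are invented for illustration.
RULES = {
    "firm_id": lambda v: v is not None and v != "",
    "metric_value": lambda v: isinstance(v, (int, float)) and v >= 0,
}

def validate(records, rules):
    """Split records into passing rows and per-rule failure counts."""
    passed, failures = [], {name: 0 for name in rules}
    for rec in records:
        ok = True
        for field, check in rules.items():
            if not check(rec.get(field)):
                failures[field] += 1
                ok = False
        if ok:
            passed.append(rec)
    return passed, failures

records = [
    {"firm_id": "F001", "metric_value": 12.5},
    {"firm_id": "", "metric_value": 3.0},       # fails the firm_id rule
    {"firm_id": "F002", "metric_value": -1.0},  # fails the metric_value rule
]
good, bad = validate(records, RULES)
print(len(good), bad)
```

Publishing the failure counts alongside the clean data is what makes quality visible to the governance teams the role collaborates with, rather than silently dropping rows.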
26/11/2025
Full time
DCS Recruitment Limited
Data Engineer
DCS Recruitment Limited City, Sheffield
Data Engineer
Location: Sheffield (Hybrid - 3 days per week onsite)
Salary: 50,000 - 60,000 depending on experience

DCS Tech are searching for an experienced Data Engineer to join our client's growing team! You will play a crucial part in designing, building, and optimising the data infrastructure that underpins the organisation.

Key responsibilities
- Design, develop, and deploy scalable, secure, and reliable data pipelines using modern cloud and data engineering tools.
- Consolidate data from internal systems, APIs, and third-party sources into a unified data warehouse or data lake environment.
- Build and maintain robust data models to ensure accuracy, consistency, and accessibility across the organisation.
- Work closely with Data Analysts, Data Scientists, and business stakeholders to translate data requirements into effective technical solutions.
- Optimise data systems to deliver fast and accurate insights supporting dashboards, KPIs, and reporting frameworks.
- Implement monitoring, validation, and quality checks to ensure high levels of data accuracy and trust.
- Support compliance with relevant data standards and regulations, including GDPR.
- Maintain strong data security practices relating to access, encryption, and storage.
- Research and recommend new tools, technologies, and processes to improve performance, scalability, and efficiency.
- Contribute to migrations and modernisation projects across cloud and data platforms (e.g. AWS, Azure, GCP, Snowflake, Databricks).
- Create and maintain documentation aligned with internal processes and change management controls.

Experience & Technical Skills
- Proven hands-on experience as a Data Engineer or in a similar data-centric role.
- Strong proficiency in SQL and Python.
- Solid understanding of ETL/ELT pipelines, data modelling, and data warehousing principles.
- Experience working with cloud platforms such as AWS, Azure, or GCP.
- Exposure to modern data tools such as Snowflake, Databricks, or BigQuery.
- Familiarity with streaming technologies (e.g. Kafka, Spark Streaming, Flink) is an advantage.
- Experience with orchestration and infrastructure tools such as Airflow, dbt, Prefect, CI/CD pipelines, and Terraform.

What you get in return:
- Up to 60,000 per annum + benefits
- Hybrid working (3 days in office)
- Opportunity to lead and mentor within a growing team!
- Professional development and training support

This company is an equal opportunity employer and values diversity. We do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Interested? Please submit your CV to Meg Kewley at DCS Recruitment via the link provided. Alternatively, email me at or call (phone number removed). DCS Recruitment and all associated companies are committed to creating a working environment where diversity is celebrated and everyone is treated fairly, regardless of gender, gender identity, disability, ethnic origin, religion or belief, sexual orientation, marital or transgender status, age, or nationality.
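Orchestration tools such as Airflow and Prefect, mentioned above, fundamentally schedule tasks as a dependency graph (DAG). The core idea can be shown with the standard library alone; the task names and dependencies below are invented, and a real orchestrator adds scheduling, retries and state on top of this ordering.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies,
# in the style of an orchestrator DAG (task names are illustrative).
dag = {
    "extract_orders": set(),
    "extract_customers": set(),
    "stage_raw": {"extract_orders", "extract_customers"},
    "build_models": {"stage_raw"},
    "refresh_dashboard": {"build_models"},
}

# static_order yields every task after all of its dependencies,
# which is the execution order an orchestrator would respect.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

The same graph, expressed in Airflow, would use operators and `>>` dependency arrows, but the guarantee is identical: `stage_raw` never runs before both extracts have finished.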
20/11/2025
Full time
Head Resourcing
Senior Data Engineer/ Scientist
Head Resourcing
Senior Data Engineer - Azure & Databricks Lakehouse
Glasgow (3/4 days onsite)
Exclusive Role with a Leading UK Consumer Business

A rapidly scaling UK consumer brand is undertaking a major data modernisation programme, moving away from legacy systems, manual Excel reporting and fragmented data sources into a fully automated Azure Enterprise Landing Zone + Databricks Lakehouse. They are building a modern data platform from the ground up using Lakeflow Declarative Pipelines, Unity Catalog, and Azure Data Factory, and this role sits right at the heart of that transformation. This is a rare opportunity to join early, influence architecture, and help define the engineering standards, pipelines, curated layers and best practices that will support Operations, Finance, Sales, Logistics and Customer Care. If you want to build a best-in-class Lakehouse from scratch, this is the one.

What You'll Be Doing

Lakehouse Engineering (Azure + Databricks)
- Engineer scalable ELT pipelines using Lakeflow Declarative Pipelines, PySpark, and Spark SQL across a full Medallion Architecture (Bronze → Silver → Gold).
- Implement ingestion patterns for files, APIs, SaaS platforms (e.g. subscription billing), SQL sources, SharePoint and SFTP using ADF + metadata-driven frameworks.
- Apply Lakeflow expectations for data quality, schema validation and operational reliability.

Curated Data Layers & Modelling
- Build clean, conformed Silver/Gold models aligned to enterprise business domains (customers, subscriptions, deliveries, finance, credit, logistics, operations).
- Deliver star schemas, harmonisation logic, SCDs and business marts to power high-performance Power BI datasets.
- Apply governance, lineage and fine-grained permissions via Unity Catalog.

Orchestration & Observability
- Design and optimise orchestration using Lakeflow Workflows and Azure Data Factory.
- Implement monitoring, alerting, SLAs/SLIs, runbooks and cost-optimisation across the platform.
DevOps & Platform Engineering
- Build CI/CD pipelines in Azure DevOps for notebooks, Lakeflow pipelines, SQL models and ADF artefacts.
- Ensure secure, enterprise-grade platform operation across Dev → Prod, using private endpoints, managed identities and Key Vault.
- Contribute to platform standards, design patterns, code reviews and the future roadmap.

Collaboration & Delivery
- Work closely with BI/Analytics teams to deliver curated datasets powering dashboards across the organisation.
- Influence architecture decisions and uplift engineering maturity within a growing data function.

Tech Stack You'll Work With
- Databricks: Lakeflow Declarative Pipelines, Workflows, Unity Catalog, SQL Warehouses
- Azure: ADLS Gen2, Data Factory, Key Vault, vNets & Private Endpoints
- Languages: PySpark, Spark SQL, Python, Git
- DevOps: Azure DevOps Repos, Pipelines, CI/CD
- Analytics: Power BI, Fabric

What We're Looking For

Experience
- 5-8+ years of Data Engineering with 2-3+ years delivering production workloads on Azure + Databricks.
- Strong PySpark/Spark SQL and distributed data processing expertise.
- Proven Medallion/Lakehouse delivery experience using Delta Lake.
- Solid dimensional modelling (Kimball) including surrogate keys, SCD types 1/2, and merge strategies.
- Operational experience: SLAs, observability, idempotent pipelines, reprocessing, backfills.

Mindset
- Strong grounding in secure Azure Landing Zone patterns.
- Comfort with Git, CI/CD, automated deployments and modern engineering standards.
- Clear communicator who can translate technical decisions into business outcomes.

Nice to Have
- Databricks Certified Data Engineer Associate
- Streaming ingestion experience (Auto Loader, structured streaming, watermarking)
- Subscription/entitlement modelling experience
- Advanced Unity Catalog security (RLS, ABAC, PII governance)
- Terraform/Bicep for IaC
- Fabric Semantic Model / Direct Lake optimisation
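The Kimball-style SCD Type 2 requirement above (closing out a changed dimension row and inserting a new current version) can be sketched without any Databricks dependency. This is an illustrative, dict-based stand-in for a Delta `MERGE`; the `customer_id`/`city` columns are invented, and a real implementation would also assign surrogate keys.

```python
from datetime import date

# Minimal SCD Type 2 merge sketch (a dict-based stand-in for a Delta MERGE);
# column names are illustrative.
def scd2_merge(dim_rows, incoming, key, tracked, today):
    """Close out changed current rows and insert new current versions."""
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    for rec in incoming:
        existing = current.get(rec[key])
        if existing is None:
            # Brand-new key: insert as the current version.
            dim_rows.append({**rec, "valid_from": today, "valid_to": None, "is_current": True})
        elif any(existing[c] != rec[c] for c in tracked):
            # Tracked attribute changed: expire the old row, insert a new one.
            existing["valid_to"] = today
            existing["is_current"] = False
            dim_rows.append({**rec, "valid_from": today, "valid_to": None, "is_current": True})
    return dim_rows

dim = [{"customer_id": 1, "city": "Glasgow",
        "valid_from": date(2024, 1, 1), "valid_to": None, "is_current": True}]
dim = scd2_merge(dim, [{"customer_id": 1, "city": "Leeds"}],
                 key="customer_id", tracked=["city"], today=date(2025, 1, 1))
print(len(dim))  # 2: the expired Glasgow row plus the new current Leeds row
```

The same close-and-insert logic is what a Delta Lake `MERGE WHEN MATCHED ... WHEN NOT MATCHED` statement expresses declaratively over the Silver/Gold tables.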
17/11/2025
Full time
TEKsystems
Cloudera To Snowflake SDET/Tester
TEKsystems
Cloudera to Snowflake SDET/Tester
Outside IR35 - Contract Opportunity

Job Description:
We are seeking a dynamic and self-starting Tester with excellent communication skills. You will play a crucial role in our migration project from Cloudera (Hive) to Snowflake, ensuring data accuracy and integrity. This position demands a hands-on approach, requiring a technical understanding of the components involved.

What you'll do:
- Lead and manage the Cloudera to Snowflake migration testing process.
- Design and execute comprehensive tests to ensure data correctness post-migration.
- Collaborate effectively with teams to maintain transparency and manage time constraints.
- Utilise Python, SQL, DBeaver, and Postgres to support testing activities.
- Engage with technologies such as NiFi, Airflow, dbt, and Azure Blob Storage as part of the testing process.

Must-haves:
- Experience with Cloudera and Snowflake platforms.
- Proficiency in SQL and testing methodologies.
- Strong technical background with the ability to log in and understand system components.

Nice to have:
- Experience with NiFi, Airflow, dbt, and Azure Blob Storage is desirable.
- Proficiency in Python and database tools like DBeaver and Postgres.

Location: London, UK
Rate/Salary: 450.00 GBP Daily

Trading as TEKsystems. Allegis Group Limited, Bracknell, RG12 1RT, United Kingdom. Allegis Group Limited operates as an Employment Business and Employment Agency as set out in the Conduct of Employment Agencies and Employment Businesses Regulations 2003. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, Talentis Solutions, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website.
To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and European Economic Area subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request. We comply with our commitments under the UK Data Protection Act, EU-U.S. Privacy Shield or the Swiss-U.S. Privacy Shield.
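The heart of migration testing as described in this role is reconciliation: proving that what landed in the target matches the source. Below is a hedged sketch of that check, with two in-memory SQLite databases standing in for the Cloudera (Hive) source and the Snowflake target; the `trades` table and its columns are invented for illustration.

```python
import sqlite3

# Sketch of a post-migration reconciliation check. Two in-memory SQLite
# databases stand in for the Hive source and the Snowflake target;
# table and column names are illustrative.
def checksum(conn, table):
    """Row count plus an order-independent aggregate for a quick comparison."""
    count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    total = conn.execute(f"SELECT COALESCE(SUM(amount), 0) FROM {table}").fetchone()[0]
    return count, round(total, 2)

source, target = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for conn in (source, target):
    conn.execute("CREATE TABLE trades (trade_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO trades VALUES (?, ?)", [(1, 10.5), (2, 20.0)])

# A mismatch here would flag the table for row-level investigation.
assert checksum(source, "trades") == checksum(target, "trades")
print("reconciliation passed:", checksum(target, "trades"))
```

In practice the count-and-aggregate pass is only the first gate; tables that disagree are then diffed row by row (e.g. hashing sorted keys) to locate the discrepancy.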
06/10/2025
Contractor
United Utilities
Business Analyst Water Resources
United Utilities Warrington, Cheshire
United Utilities' (UU) purpose is to deliver great water for a stronger, greener and healthier North West of England. We are committed to providing our services in a way that respects the environment, supports the economy, and benefits society. We value diversity, inclusion and innovation in our workplace, and we foster a culture where our people can grow, excel, and be themselves. We uphold our ethics, values and business model to fulfil our mission and, by setting clear goals and objectives, we create sustainable long-term value for our colleagues, customers and communities. Whether you work with a team that shares your vision or join a network of peers with similar interests, you will find a welcoming and supportive organisation to be part of. We've got a lot to offer. You'll be part of a thriving FTSE 100 company and will enjoy a range of core benefits that reflect your value and contribution.

Benefits
- A generous annual leave package of 26 days, increasing to 30 days after four years of service (one day per year), in addition to 8 bank holidays
- A competitive pension scheme with up to 14% employer contribution (21% combined) and life cover
- Up to 7.5% performance-related bonus scheme, as well as recognition awards for outstanding achievements
- A comprehensive healthcare plan through our company-funded scheme
- MyGymDiscounts - a gym and wellness benefit offering up to 25% off gym memberships and digital fitness subscriptions
- Best Doctors
- Salary Finance
- Wealth at Work courses
- Deals and discounts
- EVolve Car Scheme
- Employee Assistance Plan
- Mental health first aiders
- ShareBuy
- MORE Choices flexible benefits
- Enhanced parental leave schemes

Job Purpose
To lead, develop and manage UU's strategic direction for water-related activities across the Water business. You will help to develop policies and strategies that support excellent service, add value and support long-term stewardship of water assets.
To integrate strategies into wider business plans such as the drought plan, the WRMP and the Price Review business plan submission.

Accountabilities & Responsibilities
As a Business Analyst in the Water Resources team, you will play a vital role in supporting the development and delivery of strategic water resource plans. Your analytical expertise will help ensure the company meets regulatory requirements while delivering sustainable and cost-effective water resource solutions. Key responsibilities include:
- Supporting the delivery of the Water Resources Management Plan (WRMP), Drought Plan, and the Annual Review of the Water Resources Management Plan through robust data analysis to produce high-quality technical outputs.
- Preparing and integrating water resource components into the WRMP and Business Plan, ensuring alignment and consistency across key strategic plans.
- Applying robust water resources planning methodologies to support evidence-based decision-making that balances risk, resilience, and affordability.
- Translating complex technical outputs into clear, actionable insights for internal stakeholders and senior decision-makers.
- Facilitating effective communication and collaboration between the water resources team, regulators, and key stakeholders.

Technical Skills & Experience
- Relevant experience and a good knowledge of water, wastewater and associated practices, techniques and strategies, and of the operation of water and wastewater assets and business procedures.
- Strong analytical and problem-solving skills, with the ability to interpret complex datasets and draw meaningful conclusions.
- Experience with data handling, analysis, and visualisation tools, with good knowledge of GIS, Power BI and Tableau.
- Experience in data modelling and analysis using tools such as Python, VBA, and SQL to support water resources planning, regulatory submissions (e.g. WRMP, WINEP), and strategic decision-making.
Excellent communication skills, with the ability to present technical information clearly to a range of stakeholders.

Qualifications
Essential: Degree (or equivalent) in a numerate, scientific, or technical discipline. Visa sponsorship may not be available for this role.

About the Team
Water is a vital but limited natural resource. The pressures of population growth, climate change and environmental considerations mean that it's now more important than ever to plan how we will manage water resources. In Water Resources we plan how we will continue to deliver a reliable supply of water for customers in the future, while protecting the environment. Utilising the latest techniques, we forecast supply and demand, taking into account environmental and drought resilience requirements as well as future customer needs, assessed under regional and national planning frameworks. We define our strategy to achieve a long-term, best-value and sustainable plan for water supplies in the North West. We set out how we manage water supplies to make sure there is always enough for customers, businesses and the environment. As a result, here in Water Resources, our planning involves making some huge strategic decisions that are critical to the company. We work closely with the Executive to shape the future of United Utilities, providing excellent opportunities for progression.
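For a flavour of the analysis this role involves, here is a minimal, hedged sketch of a supply-demand balance calculation of the kind that underpins WRMP planning tables. All figures and names are illustrative, not taken from any UU plan.

```python
# Illustrative supply-demand balance for a single resource zone.
# Ml/d = megalitres per day. All numbers are hypothetical.

def supply_demand_balance(deployable_output, outage_allowance,
                          distribution_input, target_headroom):
    """Headroom remaining after demand and the uncertainty buffer are met."""
    water_available = deployable_output - outage_allowance
    return water_available - distribution_input - target_headroom

balance = supply_demand_balance(
    deployable_output=1800.0,   # supply capability (Ml/d)
    outage_allowance=50.0,      # planned/unplanned outage allowance
    distribution_input=1650.0,  # forecast demand
    target_headroom=60.0,       # buffer for forecast uncertainty
)
print(f"Headroom: {balance:.1f} Ml/d")  # positive = surplus, negative = deficit
```

A negative result would signal a planning deficit and trigger options appraisal (demand management, new sources, transfers) in the plan.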
03/10/2025
Full time
McGregor Boyall
AWS Cloud Engineer
McGregor Boyall
AWS Cloud Engineer - Terraform - Ansible - Python - CI/CD
Permanent, up to £85,000 + benefits/bonus
Hybrid, 2 days office

A leading financial services client is seeking an AWS Cloud Engineer to join their team in London. You will be supporting the AWS Public Cloud infrastructure and the implementation of IaC using Terraform. The role will work closely with the SRE and Engineering teams to ensure that the Cloud environment has sufficient observability and is appropriately managed.

Skills and experience required:
- Strong technical operational skills in supporting AWS Cloud Hosted environments, and at least 3 years in an infrastructure support role
- Strong understanding of IaC technologies (Terraform, Ansible, Git, Jenkins)
- Experience using SRE methodologies within a support team and an understanding of the Service Level metrics associated with this
- Operational risk and control management processes, including security best practices
- Asset management and lifecycle (EOS/EOL) process management
- Planning and leading disaster recovery failovers of IT systems and services
- AWS, including an understanding of AWS services, security and networking
- Knowledge of at least 1 programming language (ideally Python)
- Knowledge of CI/CD, specifically relating to Cloud Hosted environments
- Experience working in regulated financial services/banking

If this is of interest and you have the required skills, please submit your CV for immediate consideration. McGregor Boyall is an equal opportunity employer and does not discriminate on any grounds.
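The SRE methodologies mentioned above typically centre on Service Level metrics such as error budgets. As a hedged illustration (the SLO and downtime figures are hypothetical), the core arithmetic looks like this:

```python
# Illustrative error-budget calculation for an availability SLO,
# the kind of Service Level metric an SRE-aligned support team tracks.

def error_budget_remaining(slo, total_minutes, downtime_minutes):
    """Fraction of the period's error budget still unspent (negative = blown)."""
    budget = (1 - slo) * total_minutes        # allowed downtime for the period
    return (budget - downtime_minutes) / budget

# Hypothetical: 99.9% SLO over a 30-day month (43,200 minutes), 10 min down
remaining = error_budget_remaining(slo=0.999, total_minutes=43_200,
                                   downtime_minutes=10)
print(f"{remaining:.1%} of error budget remaining")  # prints "76.9% ..."
```

Teams commonly gate risky changes (such as large Terraform applies) on whether the remaining budget is still positive.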
03/10/2025
Full time
Gregory Martin International Limited
Principal Consultant
Gregory Martin International Limited Winchester, Hampshire
Principal Consultant / Senior Consultant - Defence
Salary: £60K to £90K according to your level of experience, plus bonus and excellent benefits
Location: Winchester, Hampshire

Our client is looking for a positive and enthusiastic customer-facing Principal Consultant with a passion for digital enablement and for helping clients succeed. This is an excellent opportunity to join their team and become an integral part of a small, agile, and growing business. As part of the team, you will work with their consultants, analysts and clients at all levels on a number of projects. You will work across different industries, initially focusing on Defence in the UK.

The job role for the Principal Consultant will include:
- Working directly with clients as a lead business analyst
- Leading technical architecture, solution design and implementation management
- Contributing to the development and growth of their digital enablement and data analysis capabilities
- Developing and managing client relationships
- Building and managing relationships with digital suppliers and associates

Experience working with Dstl, Defence Digital, DE&S, Frontline Commands or the UK Defence industry is expected. Initial clients will be based around key UK Defence establishments in the south of England.

Skills/Qualifications & Experience required for the role of Principal Consultant. We believe skills and experience in the following will enable you to excel in this role:
- A delivery mindset with a passion for delivering high-quality, high-impact projects
- Leading client interactions and building trusted-advisor relationships, with new and existing clients, on both short-term and long-term engagements
- Experience eliciting requirements and defining business processes in complex environments
- Advanced data analysis in Excel, including VBA
- Decision-making techniques and processes
- Operating models and organisational design
- Experience of Microsoft 365, SharePoint, Power BI, Dataverse and PowerApps solutions
- Understanding of data science, data analysis and visualisation tools and best practice
- Knowledge of current software development using Python
- Effectively managing delivery teams to deliver high-quality results
- Interest in and knowledge of technology and analysis approaches and best practice
- Excellent communication skills, both written and verbal
- A positive and flexible approach to your work
- The ability to engage and enthuse personnel, and provide effective support and challenge, across all areas and at all levels within client organisations
- A preference for building and working within teams

Qualifications: Degree, MBA or equivalent experience; a Project Management qualification (e.g. APMP) would be useful. Current or recent UK Defence Security Clearance (SC) would be beneficial.

Principal Consultant / Senior Consultant - Defence
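The "decision-making techniques and processes" above often amount to a weighted-scoring matrix in options analysis. A minimal, hypothetical sketch in Python (all criteria, weights and option scores are invented for illustration):

```python
# Hypothetical weighted-scoring decision matrix, a common structured
# decision-making technique in consulting options analysis.

def weighted_score(scores, weights):
    """Weighted average of criterion scores; weights need not sum to 1."""
    total_weight = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total_weight

weights = {"cost": 3, "risk": 2, "benefit": 5}          # illustrative weighting
options = {
    "Option A": {"cost": 7, "risk": 5, "benefit": 8},
    "Option B": {"cost": 5, "risk": 8, "benefit": 6},
}
ranked = sorted(options, key=lambda o: weighted_score(options[o], weights),
                reverse=True)
print(ranked[0])  # prints "Option A" under these illustrative scores
```

In practice the same calculation is frequently built in Excel with VBA; the Python form simply makes the arithmetic explicit.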
03/10/2025
Full time
The Bridge IT Recruitment
Senior Data Engineer
The Bridge IT Recruitment City, Leeds
Purpose of the Job:
Design, build, and maintain robust data systems and pipelines that support data storage, processing, and analysis on the Cloud. Work with large datasets, ensuring data quality, scalability, and performance, while collaborating closely with data scientists, analysts, and other engineering teams to understand their data needs and provide them with high-quality, accessible data. You will be responsible for ensuring that the underlying data infrastructure supports the organisation's broader data and business goals, enabling more effective data-driven decision-making.

Key Accountabilities:
- Design and implement scalable, efficient, and secure data architectures, ensuring optimal data flow across systems in order to achieve high service levels of support, maintenance and development
- Own development and change projects to ensure requirements are met in the most cost-effective manner while minimising associated risk to expected standards
- Take responsibility for cloud data platform development, data modelling, shaping and technical planning
- Act as a mentor, owning decision-making and the evaluation of requirement suitability, and facilitating reliable estimates, technical project management and stakeholder management within a project
- Ensure that resource requirements are understood and planned/estimated effectively against demand, including identification of additional temporary resource capability within projects
- Maintain appropriate process procedures, compliance and service level monitoring, performance reporting and vendor management
- Implement best practices around data security, privacy, and compliance, supporting the team's adherence to cyber security and data protection requirements alongside the BI Lead
- Apply strong stakeholder management to maintain relationships with our business users and to clarify and influence requirements
This includes liaising with internal business departments and functions to manage the service level expected from the data team, and collaborating with external organisations and third-party software/service suppliers for ongoing support, maintenance and development of systems.
- Demonstrate a quality focus to ensure that solutions are built to an appropriate standard, balanced with a drive to deliver against tight deadlines
- Support the development and implementation of best practices and processes across the team, along with the BI Lead
- Influence the evolution of business and system requirements and contribute to the design of technical solutions to feed a delivery pipeline that increasingly employs Agile methods such as Scrum and Kanban
- Develop unit-tested code and then support test cycles, including post-implementation validation
- Contribute to the transition into service and ongoing support of the applications in the area, which provides the opportunity to reduce technical debt and rationalise our technical footprint
- Mentor data engineers, supporting their professional growth and development

Outcome, Results and Key Performance Indicators:
- Delivery of projects to expected time, cost and quality standards
- Excellent levels of application availability and resilience as required by business operations
- Necessary governance and control requirements defined: design, code and test standards and guidelines
- Data systems comply with necessary governance and control requirements
- Internally developed data solutions are fit for purpose, fit correctly within the data architecture, are built and tested to user requirements, and perform to defined performance and capacity requirements
- Company data is secure, accurate, maintained and available according to requirements
- Technical risks and issues correctly mitigated and managed in projects and production support
- High-quality software delivered into production: zero critical and high defects before production release

Dimensions of Job:
This role is part of a well-established data team and offers a great opportunity for the right candidate to hone their modern data management skills in a friendly and supportive environment. The role requires attendance at a Leeds-based office as often as needed, with a minimum of 2 days a week, and the ability to work effectively as part of a remote team. A great opportunity for a motivated data engineer seeking a new role with a friendly data team, able to contribute to the team's growth with their technical expertise.

Key Relationships:
Internal: wider technical teams (including apps, test, DevOps and more), project managers, business SMEs, data teams and communities, data scientists, BI Lead, Head of Data
External: software & service suppliers, consultants

Knowledge and Skills:
Knowledge
- Broad data management technical knowledge, so as to be able to work across the full data cycle
- Proven experience working with AWS data technologies (S3, Redshift, Glue, Lambda, Lake Formation, CloudFormation), GitHub, CI/CD
- Coding experience in Apache Spark, Iceberg or Python (Pandas)
- Experience in change and release management
- Experience in data warehouse design and data modelling
- Experience managing data migration projects
- Cloud data platform development and deployment
- Experience of performance tuning in a variety of database settings
- Experience of Infrastructure-as-Code practices
- Proven ability to organise and produce work within deadlines

Skills
- Good project and people management skills
- Excellent data development skills
- Excellent data manipulation and analysis skills using a variety of tools including SQL, Python, AWS services and the MSBI stack
- Ability to prioritise and be flexible to change those priorities at short notice
- Commercial acumen
- Able to demonstrate a practical approach to problem solving
- Able to provide appropriate and understandable data to a wide-ranging audience
- Well-developed and professional communication skills
- Strong analytical skills: the ability to create models and analyse data in order to solve complex problems or reinforce commercial decisions
- Able to understand business processes and how these are achieved/influenced by technology
- Must be able to work as part of a collaborative team to solve problems and assist other colleagues
- Ability to learn new technologies, programs and procedures

Technical Essentials:
Expertise across data warehouse and ETL/ELT development, AWS preferred, with experience in the following:
- Strong experience in some of the AWS services such as Redshift, Lambda, S3, Step Functions, Batch, CloudFormation, Lake Formation, CodeBuild, CI/CD, GitHub, IAM, SQS, SNS, Aurora DB
- Good experience with dbt, Apache Iceberg, Docker, Microsoft BI stack (nice to have)
- Experience in data warehouse design (Kimball, lakehouse, medallion and data vault) is a definite preference, as is knowledge of other data tools and programming languages such as Python & Spark, plus strong SQL experience
- Experience in building data lakes and building CI/CD data pipelines
- A candidate is expected to understand and be able to demonstrate experience across the delivery lifecycle, and to understand both Agile and Waterfall methods and when to apply them

Experience:
This position requires several years of practical experience in a similar environment. We require a good balance of technical and personal/softer skills so successful candidates can be fully effective immediately.
- Proven experience in developing, delivering and maintaining tactical and enterprise data management solutions
- Proven experience in delivering data solutions using cloud platform tools
- Proven experience in assessing the impact of proposed changes on production solutions
- Proven experience in managing and developing a team of technical experts to deliver business outcomes and meet performance criteria
- Exposure to Energy markets and the Energy Supply industry sector
- Developing and implementing operational processes and procedures
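To illustrate one pipeline pattern this role relies on, here is a hedged sketch of an idempotent "upsert" load step, a basic building block of warehouse ETL/ELT. It uses the stdlib sqlite3 module purely as a stand-in for a cloud warehouse such as Redshift; the table and column names are invented.

```python
# Idempotent upsert load step, sketched with stdlib sqlite3.
# Table/column names are illustrative, not from any real pipeline.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT)")

def upsert(rows):
    # INSERT ... ON CONFLICT makes re-runs safe: replaying a batch updates
    # existing keys instead of duplicating them, which is what lets a
    # CI/CD-deployed pipeline retry failed loads without corrupting data.
    conn.executemany(
        "INSERT INTO dim_customer (id, name) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
        rows,
    )
    conn.commit()

upsert([(1, "Acme"), (2, "Globex")])
upsert([(2, "Globex Ltd"), (3, "Initech")])   # replay-safe incremental batch
print(conn.execute("SELECT COUNT(*) FROM dim_customer").fetchone()[0])  # 3
```

The same idea appears in warehouse dialects as `MERGE` (Redshift, Iceberg) or dbt incremental models with a `unique_key`.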
02/10/2025
Full time
Lucid Support Services Ltd
Cloud DevOps Support Engineer
Lucid Support Services Ltd Bristol, Somerset
Cloud DevOps Support Engineer
Salary: £45-55k
Hybrid - Cardiff/Bristol

Join an industry-leading MSP and cloud consulting business at an exciting phase of growth. This is a fantastic opportunity to work with some of the top AWS and Azure partner talent in the sector, contributing to the management and evolution of high-scale operational environments. As a Cloud DevOps Support Engineer, this position is predominantly operational (80%), with opportunities for rotation into project delivery and solution development to further enhance technical skills and cloud expertise. You'll play a critical part in optimising and supporting our customers' AWS and Azure environments, leveraging your Infrastructure-as-Code (IaC) proficiency, automation skills, and passion for cloud technology. This role suits a coder by nature who enjoys troubleshooting complex technical problems in cloud-native and hybrid settings, ensuring the highest standards of reliability, efficiency, and innovation. The successful candidate will be directly involved in managing our customer cloud platforms for a diverse enterprise client base, acting both as a trusted technical expert and a collaborative team player. You'll work side-by-side within a cross-functional squad supporting both day-to-day operational excellence and next-gen cloud adoption initiatives. AWS Associate-level certification is essential, with a commitment to achieving Professional certification needed; AI/ML experience is advantageous but not mandatory. Key technologies you will need to support include Windows and networking, with a blend of cloud-native PaaS expertise across security, serverless and AI/ML. Now is a great time to join and contribute to our operational maturity journey, benefit from best-in-class mentoring, and accelerate your career as we scale to meet ambitious growth targets.
What you'll be doing:

Operational Cloud Support
- Providing technical support and troubleshooting of AWS and Azure environments for enterprise customers, including incident management, monitoring, backup, and disaster recovery.
- Implementing and maintaining robust monitoring, alerting, and reporting frameworks to ensure SLA adherence and proactive issue detection.
- Supporting upgrades, patches, and problem resolution across cloud platforms with an automation-first mindset.
- Supporting cost optimisation (FinOps) and security posture improvement across client deployments.

Automation, IaC, and CI/CD
- Building, optimising, and managing Infrastructure-as-Code (IaC) templates and automation scripts, primarily using Terraform, CloudFormation, ARM/Azure Bicep, and related tools.
- Developing, maintaining, and enhancing CI/CD pipelines and GitOps workflows to accelerate cloud deployments and streamline operational changes.
- Participating in release management, change configuration, and cloud resource life-cycle operations.

Project Delivery & Skill Development
- Rotating into project-based delivery assignments to participate in cloud migration, modernisation, and optimisation engagements, building hands-on expertise and expanding knowledge of new services (including AI/ML/GenAI where relevant).
- Contributing to knowledge sharing and continually developing skillsets by collaborating with cloud architects, engineers, and product specialists.

Collaboration & Continuous Improvement
- Working closely with service desk, SREs, developers, and security teams to resolve incidents, enhance reliability, and adopt best operational practices.
- Documenting technical solutions, creating playbooks, and recommending process improvements to drive efficiency and standardisation.
- Promoting a culture of automation, continuous learning, and operational excellence within the cloud team.

What you need to succeed:
- Solid, hands-on experience supporting, configuring, and troubleshooting AWS and/or Azure environments in large-scale or MSP settings.
- Diligent and client-focused mentality, ensuring customer outcomes are maintained.
- Expertise in moving Windows Server workloads to AWS WorkSpaces or Azure AVD/Workspaces is advantageous.
- Proficiency in Infrastructure-as-Code (Terraform, CloudFormation, or equivalent), with a strong automation and scripting background (Python, PowerShell, or Bash).
- Direct experience with cloud platform operations, monitoring, and incident response, including root cause analysis and problem management.
- Demonstrated ability to manage CI/CD tools, source control (Git), and modern DevOps workflows.
- Enthusiasm for collaborating with diverse technical teams and mentoring less-experienced team members.
- Strong communication skills, both written and verbal, for engaging with technical peers, customers, and non-technical stakeholders.
- AWS Associate certification required, with a willingness to achieve AWS Professional (DevOps or Solutions Architect); Azure certification or experience highly valued.
- Experience or demonstrated interest in supporting AI/ML/GenAI operations is a plus but not essential.

At Lucid, we celebrate difference and value diverse perspectives, underpinned by our values of Honesty, Integrity, and Pragmatism. We welcome applications from all suitably qualified or experienced candidates, regardless of personal characteristics. If you have a disability or health condition and need support throughout the recruitment process, please do not hesitate to contact us.
02/10/2025
Full time
TEKsystems
Cloudera To Snowflake SDET/Tester
TEKsystems
Cloudera to Snowflake SDET/Tester
Outside IR35 - Contract Opportunity

Job Description: We are seeking a dynamic and self-starting Tester with excellent communication skills. You will play a crucial role in our migration project from Cloudera (Hive) to Snowflake, ensuring data accuracy and integrity. This position demands a hands-on approach, requiring a technical understanding of the components involved.

What you'll do:
- Lead and manage the Cloudera to Snowflake migration testing process.
- Design and execute comprehensive tests to ensure data correctness post-migration.
- Collaborate effectively with teams to maintain transparency and manage time constraints.
- Utilise Python, SQL, DBeaver, and Postgres to support testing activities.
- Engage with technologies such as NiFi, Airflow, dbt, and Azure Blob Storage as part of the testing process.

Must-haves:
- Experience with Cloudera and Snowflake platforms.
- Proficiency in SQL and testing methodologies.
- Strong technical background with the ability to log in and understand system components.

Nice to have:
- Experience with NiFi, Airflow, dbt, and Azure Blob Storage is desirable.
- Proficiency in Python and database tools like DBeaver and Postgres.

Location: London, UK
Rate/Salary: 450.00 GBP Daily

Trading as TEKsystems. Allegis Group Limited, Bracknell, RG12 1RT, United Kingdom. Allegis Group Limited operates as an Employment Business and Employment Agency as set out in the Conduct of Employment Agencies and Employment Businesses Regulations 2003. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, Talentis Solutions, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website.
To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and, as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area, subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request, in line with our commitments under the UK Data Protection Act, EU-U.S. Privacy Shield or the Swiss-U.S. Privacy Shield.
02/10/2025
Contractor
Lorien
Quantitative C++ Developer
Lorien
Quantitative C++ Developer
6 Month Contract
Location: London (Hybrid)

Lorien's UK-leading investment banking client is currently looking for a highly skilled Quantitative C++ Developer to join the team on an initial 6-month contract.

Essential:
- C++ development knowledge.
- Discipline expert, typically with a number of years of post-qualification experience or equivalent business experience.
- Quantitative degree (BSc/MSc) or equivalent experience.
- Strong working knowledge of the systems and programming languages used on a day-to-day basis.
- Excellent written and oral English skills in order to articulate technical issues associated with the work area and communicate with individuals across the business.
- Team-orientated, with an ability to work with individuals to set individual objectives and manage performance to ensure their delivery.

Preferred:
- Working knowledge of Fixed Income Derivatives.
- Working knowledge of Bonds and Futures.
- Prior experience using Visual Studio.
- Prior experience with Python.

If you find this opportunity intriguing and aligned with your skill set, we welcome the submission of your CV without delay. Carbon60, Lorien & SRG - The Impellam Group STEM Portfolio are acting as an Employment Business in relation to this vacancy.
02/10/2025
Full time
Capital Pay Software Solutions Ltd
Senior Backend Developer
Capital Pay Software Solutions Ltd
Senior Backend Developer
Capital Pay Software Solutions Ltd
Promoting world-class payment solution systems to global audiences

Capital Pay Software Solutions Ltd is advancing its capabilities by enhancing its Payment Aggregator Platform. This strategic initiative is crucial to our international operations, designed for optimal flexibility and adaptability across diverse markets. We are seeking highly skilled and motivated professionals to strengthen our expert team. If you excel in a dynamic environment and are dedicated to developing secure and scalable FinTech solutions, we encourage you to connect with us. Join Capital Pay International in driving innovation in the global payment landscape.

We are seeking a Senior Backend Developer with extensive experience in the FinTech industry, specifically in building secure and robust solutions. The ideal candidate will be a technical leader, capable of designing and implementing the core architecture of our payment aggregation platform.

Key Responsibilities:
- Lead the design and development of the core backend architecture, including the API gateway, transaction management layer, and merchant management layer.
- Select and implement appropriate technologies from our stack, which includes Python, Node.js, or Java for programming languages; Django, Express.js, or Spring Boot for frameworks; and PostgreSQL or MySQL for transactional data, with Redis for caching and session management.
- Design and implement robust security measures, including AES-256 encryption for sensitive data, TLS for secure communication, and OAuth/JWT for authentication and authorisation.
- Ensure the platform is compliant with PCI-DSS, GDPR, and other relevant data protection regulations.
- Integrate multiple payment gateways (Capital Pay, Stripe, PayPal, Barclaycard, Adyen, Worldpay) using provided SDKs/APIs.
- Implement advanced fraud detection and anti-money laundering (AML) systems.
- Develop and maintain RESTful APIs for seamless communication with the frontend and external systems.
- Implement features for transaction tracking and status management (pending, completed, failed), refund and chargeback handling, and payment settlement (funds transfer to merchant accounts).
- Participate in architectural design discussions, code reviews, and technical mentoring.
- Contribute to the development of a developer-friendly API and comprehensive documentation.
- Set up and manage cloud infrastructure on AWS, Google Cloud, or Azure.
- Implement and manage continuous integration and continuous delivery (CI/CD) pipelines to automate software builds and deployments.

We are looking for a candidate who:
• Has a proven track record of 6+ years in backend development, with significant experience in the FinTech or financial services sector.
• Possesses deep expertise in building scalable and secure backend services.
• Is proficient in at least one of the specified programming languages (Python, Node.js, Kotlin or Java) and their associated frameworks.
• Has strong experience with database design and management, including both SQL (PostgreSQL or MySQL) and potentially NoSQL databases.
• Has hands-on experience with RESTful API design and microservices architecture.
• Demonstrates a strong understanding of security best practices and compliance standards like PCI-DSS and GDPR.
• Has experience integrating with third-party APIs, particularly payment gateways.
• Has experience in NFC/RFID technology and Payment Networks integrations.
• Is adept at problem-solving, has excellent attention to detail, and can work effectively in a fast-paced, agile environment.
• Familiarity with serverless architecture is beneficial.
• Experience with messaging systems like RabbitMQ or Kafka is a plus.
• Experience implementing two-factor authentication (2FA) for user logins is desirable.
• Experience with performance optimisation for high-traffic scenarios and a large number of concurrent users.
01/09/2025
Full time

© 2008-2025 IT Job Board