Power Platform & AI Developer
Salary: Up to £55,000 + Bonus + Benefits
Location: Central London or Esher - Hybrid (4 days per week onsite)
Working Hours: Full time - Permanent

An organisation is seeking a Power Platform & AI Developer to support its digital transformation programme, delivering intelligent, scalable solutions that embed automation and AI into core business processes. This Power Platform & AI Developer role will focus on building applications, automations and AI-driven solutions using the Microsoft ecosystem, helping to improve operational efficiency, data quality and user experience across the organisation. It is well suited to a developer who enjoys working across both low-code platforms and modern AI technologies within a fast-paced, business-facing environment.
Responsibilities for the Power Platform & AI Developer
* Design and develop business applications using Power Apps (Canvas and Model-Driven)
* Build automated workflows using Power Automate to streamline business processes
* Develop conversational AI solutions using Copilot Studio / Power Virtual Agents
* Integrate Power Platform solutions with Microsoft 365, Azure services and third-party systems via APIs
* Build and maintain integration pipelines using Azure Functions, Logic Apps, API Management and Service Bus
* Implement CI/CD pipelines and ALM practices using Azure DevOps or GitHub
* Monitor and support data integrations, ensuring reliability and performance
* Collaborate with stakeholders to gather requirements and deliver user-focused solutions
* Maintain high standards of security, compliance and data governance across solutions
Essential Skills for the Power Platform & AI Developer
* Microsoft Power Platform (Power Apps, Power Automate, Dataverse)
* Experience developing AI-driven solutions
* Proficiency in at least one programming language
* Strong understanding of secure development, data protection and access control
* Experience working in Agile environments with strong stakeholder engagement
Desirable Skills for the Power Platform & AI Developer
* Experience with Azure OpenAI or other AI services
* Power BI or Microsoft Fabric for reporting and analytics
* Dynamics 365 integrations, APIs and Azure integration services
* Knowledge of MLOps or AI model lifecycle concepts
* Experience working with third-party systems and vendors

If you are a developer looking to work with Power Platform, AI and modern Microsoft technologies, this role offers the opportunity to build impactful solutions that directly support business transformation.
Power Platform, AI, Artificial Intelligence
01/04/2026
Full time
Azure Platform Engineer (Azure Foundry / IaC / Cloud Engineering)
Contract: 3-6 months (Inside IR35)
Rate: Market Rate
Client: Telecommunications
Location: Remote (may need occasional UK travel once per quarter to Newbury)

We are looking for an experienced Azure Platform Engineer to support the delivery of secure, scalable, and automated cloud platforms across Azure Public Cloud, Hybrid, and Edge environments. This role focuses heavily on Azure Foundry, Infrastructure-as-Code, and Cloud Platform Engineering.
Key Responsibilities
* Engineer secure Azure environments using IaC (Terraform, Bicep).
* Deliver Azure Landing Zones aligned to CAF & Well-Architected Framework.
* Build and scale cloud platforms supporting AI/ML workloads and Azure Foundry services.
* Implement automation across CI/CD pipelines using Azure DevOps.
* Support cloud migrations and hybrid/edge deployments.
* Design cloud-native solutions including AKS and containerized workloads.
* Embed governance, security, and compliance across platform designs.
Required Skills
* Strong Azure engineering background across IaaS, PaaS, and cloud-native services.
* Expertise with Terraform, Bicep/ARM, and IaC best practices.
* Hands-on experience with Azure DevOps (pipelines, repos, release).
* Knowledge of Azure networking, security, and Entra ID.
* Experience with Kubernetes / AKS and container architectures.
* Exposure to Hybrid and Edge solutions.
Azure Foundry & AI (Highly Desirable)
* Experience with Azure Foundry or AI platform services.
* Understanding of enabling AI/ML workloads on Azure.
* Ability to integrate AI services into cloud platforms via IaC and DevOps.
* Familiarity with MLOps concepts and AI governance.
31/03/2026
Contractor
AI Architect - Financial Services - London - Hybrid

At Datatech Analytics, we're delighted to partner with a global consulting organisation expanding its AI and Data capability within Financial Services. The firm works with major banks, insurers and capital markets institutions to design and deploy enterprise AI platforms, helping organisations transform how data and AI drive decision making across the business. As demand for AI transformation programmes continues to grow, the business is looking to hire an AI Architect to help shape and deliver complex AI platforms for large financial institutions.
The role
You will design and architect enterprise AI platforms, translating complex business challenges into scalable AI and data solutions. Working closely with engineering teams and senior stakeholders, you'll help organisations move from experimentation with AI to production-ready AI systems embedded within core business platforms.
What you'll be doing
* Defining enterprise AI architecture strategies for financial services clients.
* Designing scalable AI platforms and ML infrastructure integrated with enterprise data systems.
* Architecting end-to-end AI pipelines, from data ingestion through to model deployment.
* Leading engineering teams across AI, machine learning and data engineering.
* Engaging senior stakeholders to shape AI transformation programmes and technical strategy.
Technology environment
Typical platforms and technologies include:
* Cloud platforms such as AWS, Azure or Google Cloud
* Data platforms including Databricks, Snowflake or BigQuery
* Python and modern ML frameworks
* Generative AI, LLM integration and RAG pipelines
* MLOps tooling and modern ML lifecycle management
The profile
* Experience designing enterprise AI or data platforms in complex technology environments.
* A background delivering large-scale transformation programmes, often within consulting or advisory environments.
* Strong technical leadership combined with the ability to engage senior stakeholders across business and technology teams.
* Experience working within Financial Services environments is highly valued.

If you'd like to learn more about the opportunity, feel free to reach out for a confidential conversation. APPLY NOW - Datatech Analytics
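The RAG pipelines this role mentions rest on a simple core step: embed documents and queries as vectors, then retrieve the documents closest to the query to ground an LLM's answer. A minimal pure-Python sketch of that retrieval step follows; the document names and 3-dimensional vectors are invented for illustration, standing in for real embedding-model outputs.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, k=2):
    """Return the ids of the k documents most similar to the query embedding."""
    ranked = sorted(docs.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy "document embeddings" (illustrative only)
docs = {
    "rates_faq": [0.9, 0.1, 0.0],
    "kyc_policy": [0.1, 0.9, 0.2],
    "fx_guide": [0.8, 0.2, 0.1],
}
print(retrieve([1.0, 0.0, 0.0], docs, k=2))  # → ['rates_faq', 'fx_guide']
```

In production the embeddings would come from an embedding model and the ranking from a vector store, but the scoring logic is the same.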
31/03/2026
Full time
Senior Data & AI Strategy Consultant
Rate: 1000 - 1100 per day, Inside IR35
Contract: 3 months, 3 days per week - 1 day per week in London
Role Purpose
TXP has been engaged to support the shaping of an early-stage programme. This Senior Data & AI Strategy Consultant will lead a comprehensive review of our client's data landscape, analyse business and technical requirements across all delivery streams, and produce a clear roadmap with prioritised recommendations to support decision-making ahead of programme launch. This role requires someone who can operate at the intersection of enterprise data strategy, AI capability development and consulting delivery. The successful candidate will combine deep technical fluency in modern data and AI platforms with a track record of building and scaling data practices in complex regulated environments.
Essential Skills & Experience
* Proven senior leadership in data strategy, enterprise data consulting, AI/ML capability development or data practice leadership within professional services or technology consulting
* Demonstrated success building and scaling multi-disciplinary data and analytics teams (from inception to 50+ people) in high-growth consulting environments
* Deep technical fluency across modern data and AI stacks, including cloud-native architectures (Azure, AWS), data platforms (Databricks, Snowflake, Microsoft Fabric), ML/AI tools (MLflow, LLMs, RAG pipelines, vector stores) and analytics technologies (Power BI, Tableau)
* Experience reviewing end-to-end data programmes and defining future-state delivery models in regulated or public sector environments
* Ability to translate complex organisational data requirements into clear delivery plans, roadmaps and actionable recommendations at board level
* Strong stakeholder engagement skills with experience advising senior programme leadership, C-suite executives and board-level decision makers
Desirable Experience
* Experience with Microsoft Fabric, OneLake architecture and Azure AI Foundry
* Experience designing AI governance frameworks, model risk management and Responsible AI controls
* Background in delivering MLOps, intelligent document processing (IDP) or LLM-based solutions at enterprise scale
31/03/2026
Contractor
Our client is building the most advanced AI platform in their market. They help their clients serve customers with unmatched speed and accuracy. They've invested heavily into building the ML stack, partnered with leading universities, and trained models on millions of expert-tagged images. Now, they're scaling globally and need a world-class Lead Data Scientist to help push the boundaries of computer vision, video analysis, and multimodal LLMs while solving real-world challenges.
Role Overview
They are looking for an experienced Lead Data Scientist to spearhead machine-learning initiatives, with particular focus on computer vision, large language models, and production-ready ML pipelines in Azure. You will act as the technical lead for the team, setting direction, guiding best practices, and ensuring the successful delivery of high-impact AI solutions.
Key Responsibilities
* Develop, train, and deploy computer vision models (object detection, image classification, segmentation, multi-modal learning)
* Fine-tune, evaluate, and productionise multi-modal LLMs for business applications
* Drive experimentation and prototyping of advanced ML/AI techniques
* Provide technical direction, mentoring, and hands-on guidance to the data science team
* Work with engineering, product, and business stakeholders to align ML strategy with business goals
* Architect and productionise end-to-end ML pipelines on Azure, while ensuring scalability, reproducibility, and monitoring of deployed models
Requirements
* 6+ years in data science / ML, with at least 2 years in a technical lead role
* Deep experience in training and deploying computer vision models into production
* Proven track record with LLM fine-tuning, prompt engineering and productionisation
* Deep experience in MLOps on Azure, including CI/CD, monitoring and scaling pipelines
* Strong coding skills in Python, with frameworks such as PyTorch, FastAPI and Azure CLI
ALL APPLICANTS MUST BE FREE TO WORK IN THE UK Exposed Solutions is acting as an employment agency to this client. Please note that no terminology in this advert is intended to discriminate on any grounds, and we confirm that we will gladly accept applications from any person for this role.
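For readers unfamiliar with the image-classification work described above: a classifier's final layer emits one raw score (a logit) per class, and a softmax converts those scores into probabilities before the top class is picked. A self-contained toy sketch, with made-up logits and labels rather than a real vision model:

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities (shifted by the max for numerical stability)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels):
    """Return the most probable label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Invented logits for three illustrative classes
label, p = classify([2.0, 0.5, 0.1], ["cat", "dog", "car"])
print(label)  # → cat
```

A real model (e.g. in PyTorch) produces the logits from an image tensor; the decision step at the end is exactly this.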
31/03/2026
Full time
Solutions Architect - DevOps

A global leader in software supply chain for DevOps, DevSecOps, and MLOps is seeking a pre-sales focused Solutions Architect to work closely with strategic customers and prospects. This role is ideal for someone who thrives at the intersection of technology and business, and who enjoys driving impactful conversations with technical and executive stakeholders. This is a hybrid role based out of London, with three days per week in the office. Excellent salary + OTE + Benefits + Stock.
Key skills for the Solutions Architect - DevOps
* Significant experience in technical pre-sales, solutions architecture, or similar roles
* Strong relationship-building skills with both technical users and senior stakeholders in enterprise environments
* Practical knowledge and hands-on experience with Docker, Kubernetes, CI/CD pipelines, Git workflows, and build tools
* Familiarity with application security tools such as SCA, SAST, SBOM management, and container security
* Ability to build and manage modern software pipelines using diverse DevOps tooling
* Solid hands-on experience with major cloud platforms (AWS, Azure, GCP) - mandatory
* Background in software development is a significant advantage
Key responsibilities for the Solutions Architect - DevOps
* Engage with customers to ensure their success in their DevOps and DevSecOps journey leveraging the software supply chain platform
* Support the sales motion and significantly contribute to the customer journey to build technical wins and championship
* Train our customers, prospects and community about the product offering and solutions
* Represent the company in events and conferences
* Influence the features and roadmap of products based on customer needs
* Stay current with the latest technology trends related to the DevOps and DevSecOps landscape

Join a company trusted by thousands of enterprise customers' software engineering teams to deliver secure continuous updates, used by the majority of the Fortune 100, and help shape the future of secure and efficient software delivery. Opus Resourcing acts as an employment agency with respect to permanent employment.
Skills: CI/CD, Azure, Git, DevOps, Docker, Kubernetes, AWS, pre-sales, security, cloud platforms, application security, SAST, sales engineering, technical sales consulting, pre-sales technical consulting
06/10/2025
Full time
Senior Machine Learning Engineer - Behavioural Modelling & Threat Detection - £150,000 - £180,000 - Fully Remote
UK BASED CANDIDATES ONLY

My client is looking for an experienced Machine Learning Engineer ready to play a pivotal role in shaping the technical direction of their behavioural modelling and threat detection systems. This position offers the opportunity to influence not just their engineering roadmap, but how they fundamentally approach solving complex, real-world security challenges with data. You'll work at the intersection of data science, ML infrastructure, and product innovation, leading efforts to build and evolve ML-driven capabilities, while also ensuring the reliability and scalability of their models in production environments.
What You'll Do
* Spearhead the design and refinement of machine learning models focused on understanding behaviour patterns and identifying cybersecurity anomalies.
* Partner with product, engineering, and domain experts to translate strategic goals and customer needs into practical, scalable ML solutions.
* Drive model development end-to-end, from exploratory analysis, feature design, and prototyping to validation and deployment.
* Collaborate with platform and infra teams to operationalize models and ship ML-powered features into production.
* Continuously assess and iterate on production models, balancing long-term ML strategy with tactical improvements.
* Champion code quality, observability, and resilience within their ML systems through reviews and hands-on contributions.
* Help shape their internal ML standards and practices, ensuring they stay ahead of industry advancements.
* Offer technical mentorship and be a thought partner to colleagues across data, ML, and engineering disciplines.
What We're Looking For
* Hands-on experience in developing and deploying machine learning models at scale.
* Deep familiarity with core ML concepts (classification, time-series, statistical modelling) and their real-world tradeoffs.
* Fluency in Python and commonly used ML libraries (e.g. pandas, scikit-learn; experience with PyTorch or TensorFlow is a plus).
* Experience with model lifecycle management (MLOps), including monitoring, retraining, and model versioning.
* Ability to work across data infrastructure, from SQL to large-scale distributed data tools (Spark, etc.).
* Strong written and verbal communication skills, especially in cross-functional contexts.
Bonus Experience (Nice to Have)
* Exposure to large language models (LLMs) or foundational model adaptation.
* Previous work in cybersecurity, anomaly detection, or behavioural analytics.
* Familiarity with orchestration frameworks (Airflow or similar).
* Experience with scalable ML systems, pipelines, or real-time data processing.
* Advanced degree or equivalent experience in ML/AI research or applied science.
* Cloud platform proficiency (AWS, GCP, Azure).

If this sounds like something you would be interested in, please apply with your latest CV and a number to reach you on, and I will be in touch. Alternatively, you can email me at . RSG Plc is acting as an Employment Agency in relation to this vacancy.
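The statistical-modelling baseline behind much behavioural anomaly detection can be illustrated without any ML library at all: flag a data point when it sits several standard deviations away from its recent history. A toy rolling z-score sketch follows; the login-count series, window size, and threshold are all invented for illustration.

```python
import statistics

def zscore_anomalies(values, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` observations."""
    flagged = []
    for i in range(window, len(values)):
        hist = values[i - window:i]
        mu = statistics.fmean(hist)
        sigma = statistics.pstdev(hist)
        if sigma and abs(values[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady "login counts per hour" with one burst at index 24
series = [10, 11, 9, 10, 12, 10, 11, 9, 10, 10,
          11, 10, 9, 12, 10, 11, 10, 9, 10, 11,
          10, 9, 11, 10, 80, 10, 11, 9, 10, 10]
print(zscore_anomalies(series))  # → [24]
```

Production systems layer learned models on top of baselines like this, but a simple statistical detector is often the first thing deployed and the benchmark new models must beat.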
02/10/2025
Full time
Machine Learning Engineer
Up to £70K DOE
Hybrid – London (2 days per week onsite)
My client is looking for a Junior to Mid-Level Machine Learning Engineer to take ownership of the infrastructure and services that power machine learning systems in production. In this role, you’ll act as a bridge between data science and engineering, ensuring robust, scalable, and low-latency deployment of models that serve millions of requests per day.
You’ll be responsible for building and maintaining Python microservices, leveraging modern DevOps practices and tooling to support rapid, reliable delivery. With sub-second response times and a high-throughput environment (2M+ requests/day), this is a high-impact role that blends software engineering, DevOps, and MLOps at scale.
Key Responsibilities
* Design, develop, and maintain Python microservices for serving machine learning models
* Collaborate with Data Scientists to deploy, monitor, and support models in production
* Implement and manage CI/CD pipelines using Azure DevOps
* Support containerized deployments with Kubernetes and Docker
* Ensure high-performance, fault-tolerant, and secure infrastructure
* Promote code quality, testing standards, and scalable architecture
* Proactively identify infrastructure improvements and lead implementation
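As a rough sketch of the first responsibility above - a Python microservice serving model predictions - the shape might be as follows, using only the standard library (the endpoint, port, and stand-in model are invented for illustration; a real service would more likely use a web framework behind an ASGI server):

```python
# Hypothetical sketch of a tiny model-serving microservice: deserialize
# the request, call the model, return JSON with low latency. The linear
# "model" below is a placeholder for a real trained artifact.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features: list) -> float:
    """Stand-in for a real model: a fixed linear scorer."""
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, score it, and reply with the prediction.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        features = json.loads(body)["features"]
        response = json.dumps({"score": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(response)

# To run the service (blocks forever):
#   HTTPServer(("127.0.0.1", 8080), PredictHandler).serve_forever()
```

At 2M+ requests/day the same handler would sit behind multiple workers, with the model loaded once per process rather than per request.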
Requirements
* 2+ years of experience in Software Engineering, DevOps, or Data Engineering
* Strong Python skills with experience in microservices and web frameworks
* Solid understanding of CI/CD, ideally using Azure DevOps
* Familiarity with containerized environments (Docker/Kubernetes)
* Exposure to Data Science or Machine Learning concepts
* Experience operating in high-throughput environments
* Independent, curious, and driven by continuous improvement
* Effective communicator with the ability to bridge data science and engineering teams
Why Join?
You’ll be joining a company with strong business performance and ambitious plans for data-driven growth. This is a rare opportunity to take technical ownership of real-time machine learning infrastructure and play a key role in scaling systems that make an immediate business impact.
01/06/2025
My client is seeking a Senior Data Scientist to join the team on a permanent basis, specialising in ontology, knowledge engineering, knowledge graphs, and more. This role is not for the faint-hearted but for those energised by challenges, inspired by innovation, and driven by real-world impact. The role can pay up to £80,000 per annum for the right person, so please click apply if you feel the below suits your skill set:
Qualifications:
* Education: MSc/PhD in Data Science, Computer Science, or a related field
* Experience: 5-7 years of hands-on experience in the industry
Skills:
* Deep knowledge of First-Order Logic, Description Logic, and the Web Ontology Language (OWL)
* Proficiency in graph neural networks (GNNs), network science, and semantic networks
* Mastery of task-specific fine-tuning, including data extraction, classification, and generation
* A comprehensive understanding of traditional statistics, Machine Learning (ML), and multi-objective optimisation techniques
* Excellent communication and written skills
* Passion for continuous learning and collaboration as a team player
Tools:
* Python: structured workflows using environments, Conda, Git, GitFlow, etc.
* Databases: traditional (SQL) and graph databases (openCypher, Gremlin, SPARQL)
* Technologies: MLOps/LLMOps, Protégé, cloud environments (Azure/AWS)
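For illustration only (none of this reflects the client's systems), the SPARQL-style pattern matching at the heart of graph-database querying can be sketched over an in-memory triple set, with variables marked by a leading "?" as in SPARQL:

```python
# Toy in-memory triple store with SPARQL-style single-pattern matching,
# to show the kind of knowledge-graph querying (openCypher / Gremlin /
# SPARQL) this role works with. Entities and relations are invented.
triples = {
    ("alice", "worksFor", "acme"),
    ("bob", "worksFor", "acme"),
    ("acme", "locatedIn", "london"),
}

def match(pattern, triples):
    """Yield variable bindings for one (subject, predicate, object) pattern."""
    for triple in sorted(triples):
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value      # bind the variable to this term
            elif term != value:
                break                      # constant mismatch: skip triple
        else:
            yield binding

# Analogue of: SELECT ?who WHERE { ?who worksFor acme }
employees = [b["?who"] for b in match(("?who", "worksFor", "acme"), triples)]
```

A real engine adds joins across multiple patterns, indexes, and inference over the ontology; this sketch only shows the binding step for a single pattern.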
15/08/2023
Full time
Job Introduction
BBC R&D has recently established an Automation Applied Research Area focussed on the use of Machine Learning across the BBC. Automation works closely with other BBC R&D Applied Research Areas, BBC Product and Technology Groups, and senior business stakeholders across the BBC to accelerate Machine Learning based innovation. Reporting to the Head of Automation, this role will lead a team of experts exploring the ML platforms, tools, performance, and sustainability that will underpin the BBC's approach to Machine Learning innovation. It will ensure that best practice and correct technology choices are downstreamed into R&D ML applications, as well as supporting the wider BBC in making the right strategic decisions for its future ML technology.
BBC R&D has five applied research areas, focussed on Audiences, Automation, Distribution, Infrastructure and Production, looking to solve some of the most interesting challenges in Media and Broadcasting; as well as our Commercial, Partnerships & Engagement team, who ensure we're collaborating with the right external partners and optimising commercial returns through the exploitation of our Intellectual Property and grant funding. Our work supports the BBC's current ambition as well as informing future strategy. If you're excited by the prospect of working in an innovative environment with smart and supportive colleagues, then BBC R&D is the place for you.
Role Responsibility
This is a hands-on role. Your key responsibilities will be:
* Build and lead a team of ML engineers to develop an infrastructure to manage the ML lifecycle through experimentation, deployment, and testing.
* Own the Automation MLOps strategy, roadmap, and backlog.
* Provide leadership and guidance on the delivery of ML models from prototypes to production; mentor and coach team members on ML engineering best practices; work alongside researchers to enable the BBC to benefit more rapidly from fundamental ML research.
* Contribute to the design of ML systems and infrastructure to shape how ML is used across the BBC.
* Develop relationships with pan-BBC and external contributors and stakeholders. You will need to bring long-term ambitions to life to secure the required support and buy-in for tangible and intangible benefits and outcomes.
* Focus on ensuring our ML technology delivers on performance, cost and sustainability goals and supports the BBC's responsible and ethical ML objectives.
* Work with our Technology Strategy and Governance team to identify and communicate the strategic investment decisions required to mature the BBC's ML technology in line with business needs.
Are you the right candidate?
* Solid understanding of machine learning concepts and algorithms
* Experience deploying machine learning solutions
* Expert knowledge of Python programming and machine learning libraries (scikit-learn, TensorFlow, Keras, PyTorch, MXNet, etc.)
* Experience implementing ML automation and MLOps (scalable deployment practices aimed at deploying and maintaining machine learning models in production reliably and efficiently) and related tools (e.g. MLflow, Kubeflow, Airflow, SageMaker)
* Experience working in accordance with DevOps principles, and with industry deployment best practices using CI/CD tools and infrastructure as code (e.g. Docker, Kubernetes, Terraform)
* Experience with at least one cloud platform (e.g. AWS, GCP, Azure) and associated machine learning services, e.g. Amazon SageMaker, Azure ML, Databricks
Package Description
Band: E
Contract type: Permanent - Full time
Location: UK wide
We're happy to discuss flexible working. Please indicate your choice under the flexible working question in the application. There is no obligation to raise this at the application stage, but if you wish to do so, you are welcome to. Flexible working will be part of the discussion at offer stage.
* Excellent career progression - the BBC offers great opportunities for employees to seek new challenges and work in different areas of the organisation.
* Unrivalled training and development opportunities - our in-house Academy hosts a wide range of internal and external courses and certification.
* Benefits - we offer a competitive salary package, a flexible 35-hour working week for work-life balance, 26 days' leave (1 of which is a corporation day) with the option to buy an extra 5 days, a defined pension scheme, and discounted dental, health care, gym and much more.
The situation regarding the coronavirus outbreak is developing quickly and the BBC is keen to continue to ensure the safety and wellbeing of people across the BBC, while continuing to protect our services. To reduce the risk, access to BBC buildings is limited to those essential to our broadcast output. From Wednesday 18th March until further notice, all assessments and interviews will be conducted remotely. For more information go to
About the BBC
We don't focus simply on what we do - we also care how we do it. Our values and the way we behave are important to us. Please make sure you've read about our values and behaviours in the document attached below. Diversity matters at the BBC. We have a working environment where we value and respect every individual's unique contribution, enabling all of our employees to thrive and achieve their full potential.
We want to attract the broadest range of talented people to be part of the BBC - whether that's to contribute to our programming or our wide range of non-production roles. The more diverse our workforce, the better able we are to respond to and reflect our audiences in all their diversity. We are committed to equality of opportunity and welcome applications from individuals, regardless of age, gender, ethnicity, disability, sexual orientation, gender identity, socio-economic background, religion and/or belief. We will consider flexible working requests for all roles, unless operational requirements prevent otherwise. To find out more about Diversity and Inclusion at the BBC, please click here
23/09/2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Cambridge, USA - Massachusetts - Waltham, USA - New Jersey - Trenton, USA - Pennsylvania - Upper Providence
Posted Date: Jun 6 2022
The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. Designing and implementing data flows and data products which leverage internal and external data assets and tools to drive discovery and development is a key objective for the DSDE team within GSK's Pharmaceutical R&D organization. There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust:
* Automation of end-to-end data flows: faster and more reliable ingestion of high-throughput data in genetics, genomics and multi-omics, to extract value from investments in new technology (instrument to analysis-ready data)
* Enabling governance by design of external and internal data: with engineered, practical solutions for controlled use and monitoring
* Innovative disease-specific and domain-expert-specific data products: to enable computational scientists and their research unit collaborators to get to key insights faster, leading to faster biopharmaceutical development cycles.
* Supporting end-to-end code traceability and data provenance: increasing assurance of data integrity through automation and integration.
* Improving engineering efficiency: extensible, reusable, scalable, updateable, maintainable, virtualized, traceable data and code, driven by data engineering innovation and better resource utilization.
We are looking for experienced Senior DevOps Engineers to join our growing Data Ops team. A Senior DevOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas:
* Delivering declarative components for common data ingestion, transformation and publishing techniques
* Defining and implementing data governance aligned to modern standards
* Establishing scalable, automated processes for data engineering teams across GSK
* Acting as a thought leader and partner to wider DSDE data engineering teams, advising on implementation and best practices
* Cloud Infrastructure-as-Code
* Service and flow orchestration
* Data as a configurable resource (including configuration-driven access to scientific data modelling tools)
* Observability (monitoring, alerting, logging, tracing, etc.)
* Enabling quality engineering through KPIs, code coverage and quality checks
* Standardising a GitOps/declarative software development lifecycle
* Audit as a service
Senior DevOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use, all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation.
They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services are meeting customer demand and having an impact, and iterate to deliver and improve on those metrics in an agile fashion. Successful Senior DevOps Engineers are developing expertise with the types of data and tools that are leveraged in the biomedical and scientific data engineering space, and have the following skills and experience (with significant depth in one or more of these areas):
* Demonstrable experience deploying robust modularised/container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem
* Significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, etc.), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow
* Programming in Python, Scala or Go
* Embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, etc.)
* Leveraging major cloud providers, both via Kubernetes and via vendor-specific services
* Authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT)
* Common distributed data tools (e.g. Spark, Hive)
The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.
Why you?
Basic Qualifications:
We are looking for professionals with these required skills to achieve our goals:
* Masters in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 5 years' job experience (or PhD plus 3 years' job experience)
* Experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps / etc.)
* Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.)
* Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including Bloom filters; optimizing against self-describing formats such as ORC or Parquet, etc.)
* Experience with search/indexing systems (e.g. Elasticsearch)
* Expertise with agile development in Python, Scala, Go, and/or C++
* Experience building reusable components on top of the CNCF ecosystem, including Kubernetes
* A metrics-first mindset
* Experience mentoring junior engineers into deep technical expertise
Preferred Qualifications:
If you have the following characteristics, it would be a plus:
* Experience with agile software development
* Experience building and designing a DevOps-first way of working
* Experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem)
Why GSK?
Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect, Integrity along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities: Operating at pace and agile decision making - using evidence and applying judgement to balance pace, rigour and risk. Committed to delivering high-quality results, overcoming challenges, focusing on what matters, execution.
Continuously looking for opportunities to learn, build skills and share learning. Sustaining energy and wellbeing. Building strong relationships and collaboration, honest and open conversations. Budgeting and cost consciousness.
As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to be able to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good and keep growing their careers. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only). We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore with our hiring team what the opportunities are. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us on , or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary... click apply for full job details
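The basic qualifications above mention optimizing physical layout for access patterns with Bloom filters. As a rough sketch of why they help (columnar formats such as Parquet and ORC can embed one per row group so a reader can skip chunks that definitely do not contain a looked-up value), a minimal Bloom filter might look like:

```python
# Illustrative only: a minimal Bloom filter. "No" answers are definite;
# "yes" answers may be false positives, which is why it is safe to use
# one to skip data chunks but never to confirm membership.
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # a plain int used as a bit array

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        """False means definitely absent; True means possibly present."""
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
for key in ["user_1", "user_2", "user_3"]:
    bf.add(key)
```

Production implementations pick `size_bits` and `num_hashes` from the expected item count and a target false-positive rate; the structure itself stays this simple.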
23/09/2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Cambridge, USA - Massachusetts - Waltham, USA - New Jersey - Trenton, USA - Pennsylvania - Upper Providence Posted Date: Jun 6 2022 The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. TheData Framework and Opsorganization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. Achieving delivery of the right data to the right people at the right time needs design and implementation of data flows and data products which leverage internal and external data assets and tools to drive discovery and development is a key objective for the Data Science and Data Engineering (DSDE) team within GSK's Pharmaceutical R&D organization. There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust: Automation of end-to-end data flows :Faster and reliable ingestion of high throughput data in genetics, genomics and multi-omics, to extract value of investments in new technology (instrument to analysis-ready data in Enabling governance by design of external and internal data :with engineered practical solutions for controlled use and monitoring Innovative disease-specific and domain-expert specific data products : to enable computational scientists and their research unit collaborators to get faster to key insights leading to faster biopharmaceutical development cycles. 
Supporting e2ecode traceability and data provenance :Increasing assurance of data integrity through automation, integration. Improving engineering efficiency :Extensible, reusable, scalable,updateable,maintainable, virtualized traceable data and code would be driven by data engineering innovation and better resource utilization. We are looking for experienced Senior DevOps Engineers to join our growing Data Ops team. As a Senior Dev Ops Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizingbiomedical and scientificdata engineering, with demonstrable experience across the following areas: Deliver declarative components for common data ingestion, transformation and publishing techniques Define and implement data governance aligned to modern standards Establish scalable, automated processes for data engineering teams across GSK Thought leader and partner with wider DSDE data engineering teams to advise on implementation and best practices Cloud Infrastructure-as-Code Define Service and Flow orchestration Data as a configurable resource(including configuration-driven access to scientific data modelling tools) Observability (monitoring, alerting, logging, tracing, etc.) Enable quality engineering through KPIs and code coverage and quality checks Standardise GitOps/declarative software development lifecycle Audit as a service Senior DevOpsEngineers take full ownership of delivering high-performing, high-impactbiomedical and scientificdataopsproducts and services, froma description of apattern thatcustomer Data Engineers are trying touseall the way through tofinal delivery (and ongoing monitoring and operations)of a templated project and all associated automation. 
They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services are meeting customer demand and having an impact, and iterate to deliver and improve on those metrics in an agile fashion. Successful Senior DevOps Engineers are developing expertise with the types of data and tools that are leveraged in the biomedical and scientific data engineering space, and have the following skills and experience (with significant depth in one or more of these areas):
Demonstrable experience deploying robust modularised/container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem
Significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, etc.), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow
Programming in Python, Scala or Go
Embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, etc.)
Leveraging major cloud providers, both via Kubernetes and via vendor-specific services
Authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT)
Common distributed data tools (e.g. Spark, Hive)
The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.
Why you?
Basic Qualifications: We are looking for professionals with these required skills to achieve our goals:
Master's degree in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 5 years' job experience (or PhD plus 3 years' job experience)
Experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps, etc.)
Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.)
Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, optimizing against self-describing formats such as ORC or Parquet, etc.)
Experience with search/indexing systems (e.g. Elasticsearch)
Expertise with agile development in Python, Scala, Go, and/or C++
Experience building reusable components on top of the CNCF ecosystem, including Kubernetes
Metrics-first mindset
Experience mentoring junior engineers into deep technical expertise
Preferred Qualifications: If you have the following characteristics, it would be a plus:
Experience with agile software development
Experience building and designing a DevOps-first way of working
Experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem)
LI-GSK
Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
Operating at pace and agile decision making - using evidence and applying judgement to balance pace, rigour and risk
Committed to delivering high-quality results, overcoming challenges, focusing on what matters, execution
Continuously looking for opportunities to learn, build skills and share learning
Sustaining energy and wellbeing
Building strong relationships and collaboration; honest and open conversations
Budgeting and cost consciousness
As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good and keep growing their careers. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only). We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore with our hiring team what the opportunities are. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary.
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Waltham, USA - Pennsylvania - Upper Providence, Warren NJ
Posted Date: Aug
The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. Designing and implementing data flows and data products that leverage internal and external data assets and tools to drive discovery and development is a key objective for the Data Science and Data Engineering (DSDE) team within GSK's Pharmaceutical R&D organisation. There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust:
Automation of end-to-end data flows: faster, reliable ingestion of high-throughput data in genetics, genomics and multi-omics, to extract value from investments in new technology (instrument to analysis-ready data)
Enabling governance by design of external and internal data: with engineered, practical solutions for controlled use and monitoring
Innovative disease-specific and domain-expert-specific data products: to enable computational scientists and their research unit collaborators to get to key insights faster, leading to faster biopharmaceutical development cycles
Supporting e2e code traceability and data provenance: increasing assurance of data integrity through automation and integration
Improving engineering efficiency: extensible, reusable, scalable, updateable, maintainable, virtualized, traceable data and code, driven by data engineering innovation and better resource utilization
We are looking for an experienced Sr. DataOps Engineer to join our growing Data Ops team. A Sr. DataOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas:
Deliver declarative components for common data ingestion, transformation and publishing techniques
Define and implement data governance aligned to modern standards
Establish scalable, automated processes for data engineering teams across GSK
Thought leadership and partnership with wider DSDE data engineering teams to advise on implementation and best practices
Cloud Infrastructure-as-Code
Service and flow orchestration
Data as a configurable resource (including configuration-driven access to scientific data modelling tools)
Observability (monitoring, alerting, logging, tracing, etc.)
Quality engineering through KPIs, code coverage and quality checks
Standardised GitOps/declarative software development lifecycle
Audit as a service
Sr. DataOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation.
They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services are meeting customer demand and having an impact, and iterate to deliver and improve on those metrics in an agile fashion. A successful Sr. DataOps Engineer is developing expertise with the types of data and tools that are leveraged in the biomedical and scientific data engineering space, and has the following skills and experience (with significant depth in one or more of these areas):
Demonstrable experience deploying robust modularised/container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem
Significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, etc.), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow
Programming in Python, Scala or Go
Embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, etc.)
Leveraging major cloud providers, both via Kubernetes and via vendor-specific services
Authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT)
Common distributed data tools (e.g. Spark, Hive)
The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.
Why you?
Basic Qualifications:
Bachelor's degree in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 7 years' job experience, or Master's degree plus 5 years' experience (or PhD plus 3 years' job experience)
Deep experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps, etc.)
Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.)
Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, optimizing against self-describing formats such as ORC or Parquet, etc.)
Experience with search/indexing systems (e.g. Elasticsearch)
Deep expertise with agile development in Python, Scala, Go, and/or C++
Experience building reusable components on top of the CNCF ecosystem, including Kubernetes
Metrics-first mindset
Experience mentoring junior engineers into deep technical expertise
Preferred Qualifications: If you have the following characteristics, it would be a plus:
Experience with agile software development
Experience building and designing a DevOps-first way of working
Demonstrated experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem)
LI-GSK
Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
Operating at pace and agile decision making - using evidence and applying judgement to balance pace, rigour and risk
Committed to delivering high-quality results, overcoming challenges, focusing on what matters, execution
Continuously looking for opportunities to learn, build skills and share learning
Sustaining energy and wellbeing
Building strong relationships and collaboration; honest and open conversations
Budgeting and cost consciousness
As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good and keep growing their careers. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to neurodiversity, race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only). We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore with our hiring team what the opportunities are. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary.
23/09/2022
Full time
Site Name: USA - Pennsylvania - Upper Providence, UK - Hertfordshire - Stevenage, UK - London - Brentford, USA - Pennsylvania - Philadelphia
Posted Date: Oct
The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. We are looking for a skilled Data Ops Engineer II to join our growing team. The Data Ops team accelerates biomedical and scientific data product development and ensures consistent, professional-grade operations for the Data Science and Engineering organization by building templated projects (code repository plus DevOps pipelines) for various Data Science/Data Engineering architecture patterns in the challenging biomedical data space. A Data Ops Engineer II knows the metrics desired for their tools and services and iterates to deliver and improve on those metrics in an agile fashion.
A Data Ops Engineer II is a highly technical individual contributor, building modern, cloud-native systems for standardizing and templatizing data engineering, such as:
Standardized physical storage and search/indexing systems
Schema management (data + metadata + versioning + provenance + governance)
API semantics and ontology management
Standard API architectures
Kafka + standard streaming semantics
Standard components for publishing data to file-based, relational, and other sorts of data stores
Metadata systems
Tooling for QA/evaluation
Audit as a Service
Additional responsibilities include:
Given a well-specified data framework problem, implement end-to-end solutions using appropriate programming languages (e.g. Python, Scala, or Go), open-source tools (e.g. Spark, Elasticsearch, ...), and cloud vendor-provided tools (e.g. Amazon S3)
Leverage tools provided by Tech (e.g. infrastructure as code, CloudOps, DevOps, logging/alerting, ...) in delivery of solutions
Write proper documentation in code as well as in wikis/other documentation systems
Write fantastic code along with the proper unit, functional, and integration tests for code and services to ensure quality
Stay up to date with developments in the open-source community around data engineering, data science, and similar tooling
The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.
Why you?
Basic Qualifications: We are looking for professionals with these required skills to achieve our goals:
Master's in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering and 2+ years of experience, OR PhD in Computer Science
Demonstrated experience with software engineering (testing, documentation, software development lifecycle, source control, ...)
Experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps, ...)
Experience with common distributed data tools in a production setting (Spark, Kafka, etc.)
Experience with the basics of search engines/indexing (e.g. Elasticsearch, Lucene)
Demonstrated experience writing Python, Scala, Go, and/or C++
Preferred Qualifications: If you have the following characteristics, it would be a plus:
Comfort with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, optimizing against self-describing formats such as ORC or Parquet, etc.)
Experience with the CNCF ecosystem / Kubernetes
Comfort with search/indexing systems (e.g. Elasticsearch)
Experience with schema tools/schema management (Avro, Protobuf)
Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
Operating at pace and agile decision making - using evidence and applying judgement to balance pace, rigour and risk
Committed to delivering high-quality results, overcoming challenges, focusing on what matters, execution
Continuously looking for opportunities to learn, build skills and share learning
Sustaining energy and wellbeing
Building strong relationships and collaboration; honest and open conversations
Budgeting and cost consciousness
LI-GSK
If you require an accommodation or other assistance to apply for a job at GSK, please contact the GSK Service Centre at 1- (US Toll Free) or +1 (outside US). GSK is an Equal Opportunity Employer and, in the US, we adhere to Affirmative Action principles.
This ensures that all qualified applicants will receive equal consideration for employment without regard to race, color, national origin, religion, sex, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class. At GSK, the health and safety of our employees are of paramount importance. As a science-led healthcare company on a mission to get ahead of disease together, we believe that supporting vaccination against COVID-19 is the single best thing we can do in the US to ensure the health and safety of our employees, complementary workers, workplaces, customers, consumers, communities, and the patients we serve. GSK has made the decision to require all US employees to be fully vaccinated against COVID-19, where allowed by state or local law and where vaccine supply is readily available. The only exceptions to this requirement are employees who are approved for an accommodation for religious, medical or disability-related reasons. Important notice to Employment businesses/ Agencies GSK does not accept referrals from employment businesses and/or employment agencies in respect of the vacancies posted on this site. All employment businesses/agencies are required to contact GSK's commercial and general procurement/human resources department to obtain prior written authorization before referring any candidates to GSK. The obtaining of prior written authorization is a condition precedent to any agreement (verbal or written) between the employment business/ agency and GSK. In the absence of such written authorization being obtained any actions undertaken by the employment business/agency shall be deemed to have been performed without the consent or contractual agreement of GSK. 
GSK shall therefore not be liable for any fees arising from such actions or any fees arising from any referrals by employment businesses/agencies in respect of the vacancies posted on this site. Please note that if you are a US Licensed Healthcare Professional or Healthcare Professional as defined by the laws of the state issuing your license, GSK may be required to capture and report expenses GSK incurs, on your behalf, in the event you are afforded an interview for employment. This capture of applicable transfers of value is necessary to ensure GSK's compliance to all federal and state US Transparency requirements. For more information, please visit GSK's Transparency Reporting For the Record site.
21/09/2022
Full time
Site Name: USA - Pennsylvania - Upper Providence, UK - Hertfordshire - Stevenage, UK - London - Brentford, USA - Pennsylvania - Philadelphia Posted Date: Oct The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. TheData Framework and Opsorganization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. We are looking for a skilled Data Ops Engineer II to join our growing team. The Data Ops team acceleratesbiomedicaland scientificdata product development and ensures consistent, professional-grade operations for the Data Science and Engineering organization by building templated projects (code repository plus DevOps pipelines) for various Data Science/Data Engineering architecture patternsin the challenging biomedical data space.A Data Ops Engineer IIknows the metrics desired for their tools andservices anditerates to deliver and improve on those metrics in an agile fashion. 
A Data Ops Engineer II is a highly technical individual contributor, building modern, cloud-native systems for standardizing and templatizing data engineering, such as:
- Standardized physical storage and search / indexing systems
- Schema management (data + metadata + versioning + provenance + governance)
- API semantics and ontology management
- Standard API architectures
- Kafka + standard streaming semantics
- Standard components for publishing data to file-based, relational, and other sorts of data stores
- Metadata systems
- Tooling for QA / evaluation
- Audit as a Service
Additional responsibilities also include:
- Given a well-specified data framework problem, implement end-to-end solutions using appropriate programming languages (e.g. Python, Scala, or Go), open-source tools (e.g. Spark, Elasticsearch, ...), and cloud vendor-provided tools (e.g. Amazon S3)
- Leverage tools provided by Tech (e.g. infrastructure as code, CloudOps, DevOps, logging / alerting, ...) in delivery of solutions
- Write proper documentation in code as well as in wikis/other documentation systems
- Write fantastic code along with the proper unit, functional, and integration tests for code and services to ensure quality
- Stay up to date with developments in the open-source community around data engineering, data science, and similar tooling
The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.
Why you?
Basic Qualifications:
We are looking for professionals with these required skills to achieve our goals:
- Master's in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, or Software Engineering and 2+ years of experience, OR a PhD in Computer Science
- Demonstrated experience with software engineering (testing, documentation, software development lifecycle, source control, ...)
- Experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps / ...)
- Experience with common distributed data tools in a production setting (Spark, Kafka, etc.)
- Experience with the basics of search engines/indexing (e.g. Elasticsearch, Lucene)
- Demonstrated experience in writing Python, Scala, Go, and/or C++
Preferred Qualifications:
If you have the following characteristics, it would be a plus:
- Comfort with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters; optimizing against self-describing formats such as ORC or Parquet; etc.)
- Experience with the CNCF ecosystem / Kubernetes
- Comfort with search/indexing systems (e.g. Elasticsearch)
- Experience with schema tools/schema management (Avro, Protobuf)
Why GSK?
Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect, and Integrity, along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
- Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour and risk
- Commitment to delivering high-quality results, overcoming challenges, focusing on what matters, and execution
- Continuously looking for opportunities to learn, build skills and share learning
- Sustaining energy and wellbeing
- Building strong relationships and collaboration; honest and open conversations
- Budgeting and cost consciousness
If you require an accommodation or other assistance to apply for a job at GSK, please contact the GSK Service Centre at 1- (US Toll Free) or +1 (outside US). GSK is an Equal Opportunity Employer and, in the US, we adhere to Affirmative Action principles.
This ensures that all qualified applicants will receive equal consideration for employment without regard to race, color, national origin, religion, sex, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class.
At GSK, the health and safety of our employees are of paramount importance. As a science-led healthcare company on a mission to get ahead of disease together, we believe that supporting vaccination against COVID-19 is the single best thing we can do in the US to ensure the health and safety of our employees, complementary workers, workplaces, customers, consumers, communities, and the patients we serve. GSK has made the decision to require all US employees to be fully vaccinated against COVID-19, where allowed by state or local law and where vaccine supply is readily available. The only exceptions to this requirement are employees who are approved for an accommodation for religious, medical or disability-related reasons.
Important notice to Employment Businesses/Agencies:
GSK does not accept referrals from employment businesses and/or employment agencies in respect of the vacancies posted on this site. All employment businesses/agencies are required to contact GSK's commercial and general procurement/human resources department to obtain prior written authorization before referring any candidates to GSK. The obtaining of prior written authorization is a condition precedent to any agreement (verbal or written) between the employment business/agency and GSK. In the absence of such written authorization being obtained, any actions undertaken by the employment business/agency shall be deemed to have been performed without the consent or contractual agreement of GSK.
GSK shall therefore not be liable for any fees arising from such actions, or any fees arising from any referrals by employment businesses/agencies, in respect of the vacancies posted on this site. Please note that if you are a US Licensed Healthcare Professional or Healthcare Professional as defined by the laws of the state issuing your license, GSK may be required to capture and report expenses GSK incurs, on your behalf, in the event you are afforded an interview for employment. This capture of applicable transfers of value is necessary to ensure GSK's compliance with all federal and state US Transparency requirements. For more information, please visit GSK's Transparency Reporting For the Record site.