Senior Machine Learning Engineer - Remote

As a Senior Machine Learning Engineer, do you want to work with a leading provider of customer engagement solutions? You will have the opportunity to join a high-growth organisation providing customised software solutions that accelerate and simplify the journey of digital transformation.

Your role:
As a Senior Machine Learning Engineer, you will be pivotal in designing, developing, testing, maintaining, and supporting software components using Python. You will lead the development of AI projects, taking ownership and driving innovation. You'll also maintain and enhance CI/CD pipelines, alongside architecting and managing cloud infrastructure using Terraform.

We'd love to see these skills from you:
- Proficiency in Python
- Strong NLP experience (NumPy, Pandas, etc.)
- Commercial experience leveraging open-source models, fine-tuning LLMs and building RAG pipelines
- Expertise in learning algorithms, neural networks and ML frameworks (TensorFlow, PyTorch, etc.)
- MLOps experience

Nice to have:
- Familiarity with Git or other version control systems
- Exposure to computer vision libraries
- Understanding of big data technologies (Hadoop, Spark, etc.)
- Experience with cloud platforms (AWS, GCP or Azure)

This is a fully remote role, but it may require very occasional travel (once a month or less) to their Bedford office. Salary up to £75,000. The client is unable to provide sponsorship for this position, and you must be located in the UK to apply.

If you are interested, please apply. Contact Danielle Blake for more information.

Understanding Recruitment is passionate about equity, diversity and inclusion. We seek individuals from the widest talent pool and encourage underrepresented talent to apply for vacancies with us. We are committed to recruitment processes that are fair for all, regardless of background and personal characteristics.
Apr 27, 2024
Full time
Do you want to join a team working at the cutting edge of engineering sustainability? Here at Monolith, we're on a mission to empower engineers to use AI to solve the most intractable physics problems, like developing next-gen EV batteries that charge faster and last longer. With strong product-market fit, we've doubled in size over the last four years, are growing globally, and have ambitious plans to expand. It's an exciting time! To continue our growth, we are recruiting a Senior Software Engineer focusing on Python.

What you'll be doing:
As a Senior Software Engineer, you will play a crucial role in driving the re-platforming of our SaaS product. You will independently and swiftly address specific technical challenges, ensuring a seamless transition and enhancement of our platform.

Our new tech stack: Athena SQL, Athena & EMR Spark, ECS, Temporal. Tech we're keeping: Python, Flask, Redis, Postgres, React, Plotly, Docker. We might add Azure later.

Key responsibilities:
- Rapidly deliver high-quality code for our re-platforming project.
- Proactively identify and resolve blockers for team members, ensuring smooth progress.
- Break down complex technical tasks into manageable deliverables (from epics to tasks).
- Apply senior-level expertise and pragmatism to coding and decision-making, making trade-offs explicit and understandable to the team.

Required skills and attributes:
- 7 or more years of coding experience, with the last 3 years primarily focused on Python.
- Preference for candidates who haven't primarily worked in large corporations, big tech firms, late-stage companies, or software agencies.
- Previous involvement with AWS platforms.
- Self-sufficient in initiating and completing tasks end-to-end, adhering to product requirements even with minimal supervision.
- An exceptional communicator, adept at engaging with both fellow developers and higher-level stakeholders such as team leads and managers.
- Highly focused on identifying and advancing critical tasks, both for yourself and others, ensuring progress aligns with project goals.

Nice to have:
- Previous experience in startup environments.
- Proficiency or experience with Apache Spark.
- Familiarity or background in working with Azure.
- Experience orchestrating workflows, particularly within distributed systems.
- Knowledge of MLOps principles and practices, especially implementing them in production settings.

Why Monolith?
Our culture is passionate, engaging and collaborative. We are genuine, we bring our true selves to work, and we celebrate those little quirks that make us different. We have a culture of learning; we encourage new ideas, out-of-the-box thinkers and risk takers. We're all human and sometimes we make mistakes, but we brush ourselves off and try again. Our culture encourages freedom, flexibility and creativity.

At Monolith, our values are core to how we do business. They're not just words on a wall; we live them every day. Our values are embedded in our internal processes so that we're always reminded of what's important to us and we continue to grow as individuals and as a company. Our values are:
- Bring yourself to work
- Always be curious and open
- Think like an engineer
- Work smart, not hard
- Be in this together

A few things to note:
Monolith is proud to be an equal opportunity employer, and we value diversity and inclusion. We welcome people of different nationalities, backgrounds, experiences, abilities and perspectives. We don't have an end date to apply for this role, but we will prioritise early applicants, so if you're interested, please apply soon. We are not open to working with external recruitment agencies at this time.

If you don't quite match everything above but you feel you can succeed in this role, then we encourage your application and look forward to hearing from you.
Apr 26, 2024
Full time
Senior Data & AI Solution Architect (Contract)
6 months | Hybrid (home-based with flexible travel requirements) | £600 - £700 per day outside IR35

Our client is at the forefront of integrating Large Language Models (LLMs) and AI agentic reasoning into business operations. As they prepare for a phase of rapid growth, they are developing smart, context-aware systems that not only enhance operational efficiency but also foster significant innovation. As such, they are seeking a Senior AI Solution Architect to lead the development and implementation of cutting-edge Data & AI-driven solutions. This role is pivotal in enhancing business efficiency and improving decision-making processes through strategic AI integration.

Core responsibilities:
- Lead the design and implementation of robust AI systems that incorporate advanced LLMs and our proprietary techniques of intelligent facilitation.
- Manage projects from conceptualisation to deployment, ensuring alignment with strategic objectives to bolster decision-making and operational efficiency.
- Engage with senior-level stakeholders to present complex AI solutions, fostering strategic partnerships and deepening client relationships.
- Assist in advancing context-driven generative analytics, transforming complex data into actionable insights.

Required skills and experience:
- Demonstrable experience in designing and leading the deployment of Data & AI solutions.
- Proficiency in the Microsoft Azure data stack; experience with LLMs, prompt engineering and AI agents; familiarity with MLOps/DevOps practices.
- Strong coding skills in Python; C# is a plus.
- Exceptional skills in client engagement and strategic solution delivery.
- Ability to inspire teams and translate strategic visions into actionable plans.
- A history of innovative problem-solving and of adopting new AI technologies and methods.
- A strong record of fostering a collaborative team environment and mentoring others.
- Capable of thriving in a dynamic, fast-paced setting, managing multiple projects with flexibility.

As part of this engagement, you will work on initiatives that redefine business efficiency through AI. You will have significant development opportunities in a rapidly expanding company and will contribute to groundbreaking projects. To be considered, please click apply now!
Apr 26, 2024
Contractor
Job description
Site Name: London, The Stanley Building
Posted Date: Apr

At GSK, we want to supercharge our data capability to better understand our patients and accelerate our ability to discover vaccines and medicines. The Onyx Research Data Platform organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data/metadata/knowledge platforms, and AI/ML and analysis platforms, all geared toward:
- Building a next-generation, metadata- and automation-driven data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
- Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
- Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real time

Onyx Product Management is at the heart of our mission, ensuring that everything from our infrastructure, to platforms, to end-user-facing data assets and environments is designed to maximize our impact on R&D. The Product Management team partners with R&D stakeholders and Onyx leadership to develop a strategic roadmap for all customer-facing aspects of Onyx, including data assets, ontology, Knowledge Graph/semantic search, data/computing/analysis platforms, and data-powered applications.

We are seeking a highly skilled and experienced Manager for our Computing Platforms Products. In this role, you will be responsible for developing the product strategy of our Computing Platform to meet customer needs. You will partner closely with Onyx's organizations, including AI/ML and a diversity of R&D teams utilizing data to accelerate drug discovery (genomic sciences, computational biology, imaging, and computational chemistry, to name a few), along with the Onyx portfolio management and engineering function heads, to deliver industry-leading solutions that power R&D workloads. You will drive the product roadmap, guide product development initiatives, and ensure the successful launch and adoption of our Compute platform, including the migration of existing GSK applications to the platform. Together, you will facilitate joint planning and execution of the product roadmap, ensuring a balance between strategic development and customer-facing deliverables. You will also play a key role in devising, tracking, and publicizing metrics that measure the impact and performance of Onyx Compute Platform products. You will be responsible for understanding the business areas using Onyx and its platform capabilities, translating customer needs into requirements aligned with standard frameworks such as ontologies and engineering pipelines, and ensuring our R&D teams receive the solutions they need to succeed.

In this role you will:
- Product Strategy: Develop and execute a comprehensive product strategy for our AI/ML compute platform, aligning with Onyx's overall goals and objectives.
- Roadmap Development: Define and prioritize features, enhancements, and functionalities for the platform based on user analysis, customer feedback, and business requirements.
- Cross-functional Collaboration: Collaborate closely with engineering, AI/ML, and portfolio/program teams to ensure successful product development and deployment.
- Stakeholder Engagement: Collaborate with customers, partners, and internal stakeholders to understand their needs, gather feedback, and incorporate it into product planning and development processes.
- Product Launch: Plan and oversee product launches, ensuring effective communication, documentation, and training to drive product adoption and success.
- Performance Measurement: Define key product metrics, establish monitoring systems, and regularly evaluate and report on the performance and success of the compute platform.
- Product Ambassador: Serve as an ambassador of the compute platform, effectively communicating its value and benefits to GSK Research and Development leadership and identifying potential customers.
- Industry Expertise: Stay up to date with the latest advancements and trends in AI, machine learning, and compute platforms, applying industry knowledge to drive innovation and competitive advantage.

Why you?

Qualifications & Skills: We are looking for professionals with these required skills to achieve our goals:
- Experience of cloud computing management for scientific computing, data science and/or artificial intelligence model training with a major cloud provider (AWS, Google Cloud, Azure, etc.)
- Strong relevant experience in Data Science, Scientific Computing, Machine Learning/AI, Computer Science, Platform Engineering, or a related discipline.
- Excellent communication, collaboration, and stakeholder management skills.
- Strong leadership abilities and a self-driven, proactive approach.
- Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively.

Preferred Qualifications & Skills: If you have the following characteristics, it would be a plus:
- Technical Knowledge: Experience with and strong understanding of on-prem and cloud computing and software development practices; familiarity with MLOps and distributed computing is highly desirable. Experience with containers and virtual machines, including Kubernetes, Slurm or other orchestration tools. Knowledge of modern infrastructure, including infrastructure-as-code tools (e.g. Terraform, Ansible). Familiarity with software engineering ways of working and engagement models.
- Strong proficiency in utilizing various product management tools, including Jira and Confluence.
- Proven track record of managing developer platforms, tools, and services.
- Prior product management experience with an enterprise AI/ML platform is strongly preferred.
- Experience with bioinformatics/genomics databases, biological datasets, or Pharma R&D is a plus, but not required.
- Strategic Thinker: Proven track record in developing and executing product strategies that drive business growth and customer satisfaction.
- Stakeholder Skills: Demonstrated ability to lead cross-functional teams, set clear objectives, and foster a collaborative and innovative work environment. Can lead without authority.
- Customer Focus: A customer-centric mindset with a deep understanding of customer needs and the ability to translate them into effective product solutions.
- Analytical and Data-Driven: Strong analytical skills with the ability to gather and interpret data, perform market research, and make data-driven decisions.
- Excellent Communication: Exceptional written and verbal communication skills, with the ability to effectively present complex ideas and concepts to both technical and non-technical audiences.
- Adaptability: Thrives in a fast-paced, dynamic environment and can adapt quickly to changing priorities and business needs.

Closing Date for Applications: Monday 6th May 2024 (COB)
Please take a copy of the Job Description, as this will not be available after closure of the advert. When applying for this role, please use the 'cover letter' of the online application or your CV to describe how you meet the competencies for this role, as outlined in the job requirements above. The information that you have provided in your cover letter and CV will be used to assess your application. During the course of your application, you will be requested to complete voluntary information which will be used in monitoring the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application/selection process to enable you to demonstrate your ability to perform the job requirements, please contact us. This will help us to understand any modifications we may need to make to support you throughout our selection process.

Why GSK?
Uniting science, technology and talent to get ahead of disease together. GSK is a global biopharma company with a special purpose: to unite science, technology and talent to get ahead of disease together, so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns, as an organisation where people can thrive. We prevent and treat disease with vaccines, specialty and general medicines. We focus on the science of the immune system and the use of new platform and data technologies, investing in four core therapeutic areas (infectious diseases, HIV, respiratory/immunology and oncology). Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it's also about making GSK a place where people can thrive. We want GSK to be a place where people feel inspired, encouraged and challenged to be the best they can be. A place where they can be themselves, feeling welcome, valued, and included. Where they can keep growing and look after their wellbeing. So, if you share our ambition, click apply for the full job details.
Apr 25, 2024
Full time
Job description Site Name: London The Stanley Building Posted Date: Apr At GSK, we want to supercharge our data capability to better understand our patients and accelerate our ability to discover vaccines and medicines. The Onyx Research Data Platform organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward: Building a next-generation, metadata- and automation-driven data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics" Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real-time Onyx Product Management is at the heart of our mission, ensuring that everything from our infrastructure, to platforms, to end-user facing data assets and environments is designed to maximize our impact on R&D. The Product Management team partners with R&D stakeholders and Onyx leadership to develop a strategic roadmap for all customer-facing aspects of Onyx, including data assets, ontology, Knowledge Graph / semantic search, data / computing / analysis platforms, and data-powered applications. We are seeking a highly skilled and experienced Manager for our Computing Platforms Products. In this role, you will be responsible for developing the product strategy of our Computing Platform to meet customer needs. 
You will partner closely with Onyx's organizations, including AI/ML, a diversity of R&D teams utilizing data to accelerate drug discovery (genomics sciences, computational biology, imaging, computational chemistry, to name a few), along with the Onyx portfolio management and engineering function heads to deliver industry-leading solutions that power R&D workloads. You will drive the product roadmap, guide product development initiatives, and ensure the successful launch and adoption of our Compute platform including the migration of existing GSK applications to the platform. Together, you will facilitate joint planning and execution of the product roadmap, ensuring a balance between strategic development and customer-facing deliverables. You will also play a key role in devising, tracking, and publicizing metrics that measure the impact and performance of Onyx Compute Platform Products. You will be responsible for understanding the business areas using Onyx, Platform capabilities, translating customer needs into requirements aligned with standard frameworks such as ontologies and engineering pipelines, and ensure our R&D teams receive the solutions they need to succeed. In this role you will Product Strategy: Develop and execute a comprehensive product strategy for our AI/ML compute platform, aligning with the Onyx's overall goals and objectives. Roadmap Development: Define and prioritize features, enhancements, and functionalities for the platform based on user analysis, customer feedback, and business requirements. Cross-functional Collaboration: Collaborate closely with engineering, AI/ML, and portfolio/program teams to ensure successful product development and deployment. Stakeholder Engagement: Collaborate with customers, partners, and internal stakeholders to understand their needs, gather feedback, and incorporate it into product planning and development processes. 
Product Launch: Plan and oversee product launches, ensuring effective communication, documentation, and training to drive product adoption and success. Performance Measurement: Define key product metrics, establish monitoring systems, and regularly evaluate and report on the performance and success of the compute platform. Product Ambassador: Serve as an ambassador of the compute platform, effectively communicating its value and benefits to GSK Research and Development leadership and identifying potential customers. Industry Expertise: Stay up to date with the latest advancements and trends in AI, machine learning, and compute platforms, applying industry knowledge to drive innovation and competitive advantage. Why you? Qualifications & Skills: We are looking for professionals with these required skills to achieve our goals: Experience of cloud computing management for scientific computing, data science and/or artificial intelligence model training with a major cloud provider (AWS, Google Cloud, Azure etc) Strong relevant experience in Data Science, Scientific Computing, Machine Learning/AI, Computer Science, Platform Engineering, or related discipline. Excellent communication, collaboration, and stakeholder management skills. Strong leadership abilities and a self-driven, proactive approach. Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively. Preferred Qualifications & Skills: If you have the following characteristics, it would be a plus: Technical Knowledge: Experience with and strong understanding of on-prem and cloud computing, and software development practices; familiarity with MLOps and distributed computing is highly desirable. Experience with containers and virtual machines including Kubernetes, Slurm or other orchestration tools. Knowledge of modern infrastructure including Infrastructure-as-code tools (e.g. 
Terraform, Ansible ) Familiar with software engineering ways of working and engagement model Strong proficiency in utilizing various product management tools, including Jira and Confluence. Proven track record of managing developer platforms, tools, and services. Strong proficiency in utilizing various product management tools, including Jira and Confluence. Prior product management experience of enterprise AI/ML platform is strongly preferred. Experience with bioinformatics/genomics database, biological datasets, Pharma R&D is a plus, but not required. Strategic Thinker: Proven track record in developing and executing product strategies that drive business growth and customer satisfaction. Stakeholder Skills: Demonstrated ability to keep cross-functional teams, set clear objectives, and foster a collaborative and innovative work environment. Can lead without authority. Customer Focus: A customer-centric mindset with a deep understanding of customer needs and the ability to translate them into effective product solutions. Analytical and Data-Driven: Strong analytical skills with the ability to gather and interpret data, perform market research, and make data-driven decisions. Excellent Communication: Exceptional written and verbal communication skills, with the ability to effectively present complex ideas and concepts to both technical and non-technical audiences. Adaptability: Thrives in a fast-paced, dynamic environment and can adapt quickly to changing priorities and business needs. Closing Date for Applications: Monday 6th May 2024 (COB) Please take a copy of the Job Description, as this will not be available post closure of the advert. When applying for this role, please use the 'cover letter' of the online application or your CV to describe how you meet the competencies for this role, as outlined in the job requirements above. The information that you have provided in your cover letter and CV will be used to assess your application. 
During the course of your application, you will be requested to complete voluntary information which will be used in monitoring the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application / selection process to enable you to demonstrate your ability to perform the job requirements, please contact . This will help us to understand any modifications we may need to make to support you throughout our selection process.

Why GSK? Uniting science, technology and talent to get ahead of disease together. GSK is a global biopharma company with a special purpose - to unite science, technology and talent to get ahead of disease together - so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns - as an organisation where people can thrive. We prevent and treat disease with vaccines, specialty and general medicines. We focus on the science of the immune system and the use of new platform and data technologies, investing in four core therapeutic areas (infectious diseases, HIV, respiratory/immunology and oncology). Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it's also about making GSK a place where people can thrive. We want GSK to be a place where people feel inspired, encouraged and challenged to be the best they can be. A place where they can be themselves - feeling welcome, valued, and included. Where they can keep growing and look after their wellbeing. So, if you share our ambition . click apply for full job details
Machine Learning Engineer - Edinburgh (remote) - £50,000-£65,000

About the Company: Our client is a visionary network intelligence company at the forefront of developing a cutting-edge deep traffic analysis platform for mobile network operators. Their innovative solutions help reduce operational costs, optimize energy consumption, and enhance user experiences. Recently recognized as an AI Visionary and one of the top 100 AI companies in 2022, this startup is driving transformative change in the telecom industry.

Role Overview: We are seeking an enthusiastic Machine Learning Engineer to join our team and work on designing, training, testing, and integrating industry-grade machine learning models for deep mobile network traffic analysis. You will expand our portfolio of neural network architectures for traffic forecasting, anomaly prediction, and decomposition, collaborating with other ML engineers and receiving guidance from senior leadership.

Key Responsibilities for a Machine Learning Engineer: Develop and debug code in Python, utilizing deep learning frameworks like TensorFlow 2 and Keras. Perform in-depth data analysis tasks and design new neural architectures. Participate in code reviews, maintain codebase quality, and ensure feature sustainability. Investigate problems and devise solutions for large data sets. Prototype implementations of new architectures to solve specific challenges. Document results in design documents and technical reports.

Requirements for a Machine Learning Engineer: Strong mathematical/analytical skills and a technical background in mobile networks. Deep understanding of machine learning concepts, techniques, and optimization algorithms. Experience with data visualization (matplotlib or similar). Proficiency in Git workflow and best coding practices. Familiarity with cloud computing concepts. Ability to read and summarize research papers. Experience following a research methodology and creating experiments. MSc or PhD in Artificial Intelligence, Computer Science, or equivalent. Excellent communication and interpersonal skills.

Nice to Have: Experience with other AI platforms, especially for reinforcement learning. Knowledge of machine learning lifecycle management (MLOps).

If you are a driven and creative professional passionate about developing innovative AI solutions that revolutionize the telecom industry, click apply! McGregor Boyall is an equal opportunity employer and does not discriminate on any grounds.
Apr 25, 2024
Full time
Xpertise is seeking two talented Machine Learning Engineers to join our esteemed team in Birmingham. As part of our growing engineering division, you will play a pivotal role in designing, implementing, and optimizing machine learning models and data pipelines. With a strong emphasis on AWS technologies and MLOps practices, you'll have the opportunity to contribute to the development of scalable, production-grade solutions that drive business value.

Key details: Salary: £55,000-95,000 (Mid-Lead). I'd consider experienced contractors with a rate of £400.00 per day (Outside IR35). Benefits: 10-25% bonus + healthcare + 10% pension. Location: Birmingham; can be remote-based, hybrid working or office-based.

Key experience desired / what you will learn: Experience developing, deploying, and maintaining machine learning models in production environments. Strong understanding of AWS cloud services, especially in building and managing data pipelines and machine learning workflows: S3, Redshift, Lambda, Glue, EMR, EKS (Kubernetes). Familiarity with MLOps / DevOps concepts and practices, including version control, CI/CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks like Apache Spark, and with orchestration tools such as Airflow, would be a bonus.

Role overview: If you're looking to work with a team of ambitious software engineers and talented senior leaders, all while working with the latest data, AI and cloud technologies, then this one's for you. They have big plans to disrupt the industry with this machine learning work, so it's a great time to join. Interested? Please apply with your CV and/or message Billy Hall for further details. Xpertise acts as an employment agency.
Apr 24, 2024
Full time
Xpertise Recruitment Ltd
Newcastle Upon Tyne, Tyne And Wear
Xpertise is seeking two talented Machine Learning Engineers to join our esteemed team in Birmingham. As part of our growing engineering division, you will play a pivotal role in designing, implementing, and optimizing machine learning models and data pipelines. With a strong emphasis on AWS technologies and MLOps practices, you'll have the opportunity to contribute to the development of scalable, production-grade solutions that drive business value.

Key details: Salary: £55,000-95,000 (Mid-Lead). I'd consider experienced contractors with a rate of £400.00 per day (Outside IR35). Benefits: 10-25% bonus + healthcare + 10% pension. Location: Newcastle or Birmingham; can be remote-based, hybrid working or office-based.

Key experience desired / what you will learn: Experience developing, deploying, and maintaining machine learning models in production environments. Strong understanding of AWS cloud services, especially in building and managing data pipelines and machine learning workflows: S3, Redshift, Lambda, Glue, EMR, EKS (Kubernetes). Familiarity with MLOps / DevOps concepts and practices, including version control, CI/CD, and model monitoring. Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy). Experience with distributed computing frameworks like Apache Spark, and with orchestration tools such as Airflow, would be a bonus.

Role overview: If you're looking to work with a team of ambitious software engineers and talented senior leaders, all while working with the latest data, AI and cloud technologies, then this one's for you. They have big plans to disrupt the industry with this machine learning work, so it's a great time to join. Interested? Please apply with your CV and/or message Billy Hall for further details. Xpertise acts as an employment agency.
Apr 24, 2024
Full time
Senior Machine Learning Engineer - Remote UK - £75,000. We are helping an innovative tech business scale their ML software team. Due to continued growth and demand for their products, they now urgently need a Senior Python Machine Learning Engineer to join them ASAP. This role would suit a Python Machine Learning Engineer who has a strong background in Machine Learning and ideally MLOps. This role is fully remote within the UK, but if you do want to use their office they are based in Milton Keynes.

To be successful, the ideal Python Machine Learning Engineer candidate will have: Strong background in Python for Machine Learning or AI. Strong knowledge of PyTorch, Hugging Face, TensorFlow. Experience of MLOps would be a benefit. Experience of working in a small but growing team.

What is in it for you? As a talented Python Machine Learning Engineer you can expect: Great salary - up to £75k base plus commission and package (neg for the right person). Flexible working. An opportunity to work with some of the brightest minds in the tech sector.

If you are an ambitious Python Machine Learning Engineer, hit apply and we will do the rest. Please apply with your CV and we will be in touch for a confidential chat. Noa Recruitment specialise in helping Software and Web Professionals and technical talent find great careers. If this role doesn't sound like you, but you know a great person who might be interested, then please do share these details with them.
Apr 24, 2024
Full time
Do you want to join a team working at the cutting edge of engineering sustainability? Here at Monolith, we're on a mission to empower engineers to use AI to solve the most intractable physics problems, like developing next-gen EV batteries that charge faster and last longer. With strong product-market fit, we've doubled in size over the last four years, are growing globally, and we have ambitious plans to expand. It's an exciting time! To continue our growth, we are recruiting a Senior Software Engineer focussing on Python for a six-month period. If you are looking for a permanent opportunity and are available immediately, there could be scope for this position to be a permanent role, so please apply anyway.

What you'll be doing: As a Senior Software Engineer, you will play a crucial role in driving the re-platforming efforts of our SaaS software product. Your responsibilities will involve independently and swiftly addressing specific technical challenges within this framework, ensuring seamless transition and enhancement of our platform. Our new tech stack: Athena SQL, Athena & EMR Spark, ECS, Temporal. Tech we're keeping: Python, Flask, Redis, Postgres, React, Plotly, Docker. We might add Azure later.

Key Responsibilities: Rapidly deliver high-quality code for our re-platforming project. Proactively identify and resolve blockers for team members, ensuring smooth progress. Break down complex technical tasks into manageable deliverables (from epics to tasks). Apply senior-level expertise and pragmatism to coding and decision-making processes, making trade-offs explicit and understandable to the team.

Required Skills and Attributes: 7 years or more of coding experience, with the last 3 years primarily focused on Python. Preference for candidates who haven't primarily worked in large corporations, big tech firms, late-stage companies, or software agencies. Previous involvement with AWS platforms. Self-sufficient in initiating and completing tasks end-to-end, adhering to product requirements even with minimal supervision. Exceptional communicator, adept at effectively engaging with both fellow developers and higher-level stakeholders such as team leads and managers. Highly focused on identifying and advancing critical tasks, both for oneself and others, ensuring progress aligns with project goals.

Nice to have: Previous experience in startup environments. Proficiency or experience with Apache Spark. Familiarity or background in working with Azure. Experience orchestrating workflows, particularly within distributed system environments. Knowledge of MLOps principles and practices, especially in implementing them within production settings.

Why Monolith? Our culture is passionate, engaging and collaborative. We are genuine, we bring our true selves to work and celebrate those little quirks that make us different. We have a culture of learning; we encourage new ideas, out-of-the-box thinkers and risk takers. We're all human and sometimes we make mistakes, but we brush ourselves off and try again. Our culture encourages freedom, flexibility and creativity. At Monolith our values are core to how we do business. They're not just words on a wall, we live them every day. Our values are embedded in our internal processes so that we're always reminded what's important to us and we continue to grow as individuals and as a company. Our values are: Bring yourself to work. Always be curious and open. Think like an engineer. Work smart, not hard. Be in this together.

A few things to note: Monolith is proud to be an equal opportunity employer and we value diversity and inclusion. We welcome people of different nationalities, backgrounds, experiences, abilities and perspectives. We don't have an end date to apply for this role, but we will prioritise early applicants, so if you're interested then please apply soon.
We are not open to working with external recruitment agencies at this time. If you don't quite match everything above but you feel you can succeed in this role then we encourage your application and look forward to hearing from you.
Apr 23, 2024
Full time
Senior Machine Learning Engineer - Remote UK - £75,000. We are helping an innovative tech business scale their ML software team. Due to continued growth and demand for their products, they now urgently need a Senior Python Machine Learning Engineer to join them ASAP. This role would suit a Python Machine Learning Engineer who has a strong background in Machine Learning and ideally MLOps. This role is fully remote within the UK, but if you do want to use their office they are based in Milton Keynes.

To be successful, the ideal Python Machine Learning Engineer candidate will have: Strong background in Python for Machine Learning or AI. Strong knowledge of PyTorch, Hugging Face, TensorFlow. Experience of MLOps would be a benefit. Experience of working in a small but growing team.

What is in it for you? As a talented Python Machine Learning Engineer you can expect: Great salary - up to £75k base plus commission and package (neg for the right person). Flexible working. An opportunity to work with some of the brightest minds in the tech sector.

If you are an ambitious Python Machine Learning Engineer, hit apply and we will do the rest. Please apply with your CV and we will be in touch for a confidential chat. Noa Recruitment specialise in helping Software and Web Professionals and technical talent find great careers. If this role doesn't sound like you, but you know a great person who might be interested, then please do share these details with them.
Apr 23, 2024
Full time
Senior Data Scientist

Joining Capco means joining an organisation that is committed to an inclusive working environment where you're encouraged to . We celebrate individuality and recognize that diversity and inclusion, in all forms, is critical to success. It's important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table - so we'd love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to cater to any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.

ABOUT US Capco is a global technology and business consultancy with a focus on the financial services sector. We are passionate about helping our clients succeed in an ever-changing industry, combining innovative thinking with unique expert know-how. The solutions we offer our customers every day are as diverse as our employees. We are/have: Experts across the Capital Markets, Insurance, Payments, Retail Banking and Wealth & Asset Management domains. Deep knowledge in various financial services offerings including Finance, Risk and Compliance, Financial Crime, Core Banking etc. Committed to growing our business and hiring the best talent to help us get there. Focused on maintaining our nimble, agile and entrepreneurial culture.

What we're looking for: As part of our ongoing global expansion strategy, we are currently hiring Senior Data Scientists as Capco continues to grow its UK Data Practice in our London office. The Senior Data Scientist should be experienced in using statistical, algorithmic, mining and/or visualisation techniques to address complex business problems.
You will be an SME on data science / ML solutions and use case development. The successful candidate will advise the Data Scientists and Engineers on technical requirements around the model design, model architecture, model calibration, solution design, and solutions output. In addition to this, they will be expected to leverage their expertise to train and grow Data Scientists at a range of experience levels.

Responsibilities: Develop prototype and proof-of-concept solutions making use of cutting-edge AI, machine learning and statistical approaches to solve real-world and business problems. Technically lead multiple pods of data scientists & engineers to develop solutions from a business problem into POCs, MVPs, or fully-fledged solutions while collaborating closely with domain experts. Help transition from development environment to production. Act as a subject matter expert in data science and machine learning and coach other data scientists.

Essential skills: Analytics, modelling or software development experience, including coding/software development skills. In particular: Hands-on experience in building and implementing data science and machine learning solutions to tackle business problems. Comfort with rapid prototyping and disciplined software development processes. Experience with Python, ML libraries (e.g. spaCy, NumPy, SciPy, Transformers, etc.), data tools and technologies (Spark, Hadoop, Hive, Redshift, SQL), and toolkits for ML and deep learning (SparkML, TensorFlow, Keras). Demonstrated ability to work on multi-disciplinary teams with diverse skillsets. Deploying machine learning models and systems to production (DevOps, MLOps, CI/CD). Experience of working in cloud environments (Azure, GCP, AWS).

You are also expected to have: A higher-level degree (MSc/PhD) in a numerate discipline. Excellent leadership skills. Ability to articulate complex data science concepts to both technical and non-technical audiences. Banking and/or Financial Services experience.
Bonus points: Entries into Kaggle Competitions. Creativity, resourcefulness and a collaborative spirit.

Data at Capco: Capco's global Data Practice of 800+ practitioners is an established team of data strategists, analysts, scientists, architects, and engineers who help client teams harness the power of data to drive insight, optimise performance, and commercialise data opportunities. We enable financial institutions to become data-driven by helping transform their understanding and use of data to derive value. We translate strategy into action - designing, implementing, and mobilising innovative data capabilities with a focus on efficiency and scalability, and partnering with leading vendors and industry bodies. Collaboration, enthusiasm, and encouragement are key in ensuring we maintain our culture through working in an environment where clients become colleagues. We are looking for a candidate who will empower the team, drive high standards, grow the capability, and deliver customer-focused outcomes. Our clients and peers have voted us the A-Team Best Consultancy in Data Management in consecutive years, valuing our ability to identify and develop top data talent. In addition to this, we have proudly won both the Best Consultancy (2022) in the British Banking Awards and the Best ESG Data & Technology Consultancy as part of the annual ESG Insight Awards.

WHY JOIN CAPCO? You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.
We offer: A work culture focused on innovation and creating lasting value for our clientsand employees Ongoing learning opportunities to help you acquire new skills or deepenexisting expertise A flat, non-hierarchical structure that will enable you to work with senior partnersand directly with clients A diverse, inclusive, meritocratic culture Enhanced and competitive family friendly benefits, including maternity / adoption / shared parental leave and paid leave for sickness, pregnancy loss, fertility treatment, menopause and bereavement.
Apr 23, 2024
Full time
Senior Data Scientist Joining Capco means joining an organisation that is committed to an inclusive working environment. We celebrate individuality and recognise that diversity and inclusion, in all forms, are critical to success. It's important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table - so we'd love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to cater to any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help. ABOUT US Capco is a global technology and business consultancy with a focus on the financial services sector. We are passionate about helping our clients succeed in an ever-changing industry, combining innovative thinking with unique expert know-how. The solutions we offer our customers every day are as diverse as our employees. We are/have: Experts across the Capital Markets, Insurance, Payments, Retail Banking and Wealth & Asset Management domains. Deep knowledge in various financial services offerings including Finance, Risk and Compliance, Financial Crime, Core Banking etc. Committed to growing our business and hiring the best talent to help us get there. Focused on maintaining our nimble, agile and entrepreneurial culture. What we're looking for: As part of our ongoing global expansion strategy, we are currently hiring Senior Data Scientists as Capco continues to grow its UK Data Practice in our London office. The Senior Data Scientist should be experienced in using statistical, algorithmic, mining and/or visualisation techniques to address complex business problems.
You will be an SME on data science / ML solutions and use case development. The successful candidate will advise the Data Scientists and Engineers on technical requirements around model design, model architecture, model calibration, solution design, and solution outputs. In addition, they will be expected to leverage their expertise to train and grow Data Scientists at a range of experience levels.
Responsibilities:
- Develop prototype and proof-of-concept solutions, making use of cutting-edge AI, machine learning and statistical approaches to solve real-world business problems
- Technically lead multiple pods of data scientists and engineers to develop solutions from a business problem into POCs, MVPs, or fully fledged solutions, collaborating closely with domain experts
- Help transition solutions from development environments to production
- Act as a subject matter expert in data science and machine learning, and coach other data scientists
Essential skills:
- Analytics, modelling or software development experience, including coding/software development skills. In particular:
- Hands-on experience building and implementing data science and machine learning solutions to tackle business problems
- Comfort with rapid prototyping and disciplined software development processes
- Experience with Python; ML libraries (e.g. spaCy, NumPy, SciPy, Transformers); data tools and technologies (Spark, Hadoop, Hive, Redshift, SQL); and toolkits for ML and deep learning (SparkML, TensorFlow, Keras)
- Demonstrated ability to work on multi-disciplinary teams with diverse skill sets
- Experience deploying machine learning models and systems to production (DevOps, MLOps, CI/CD)
- Experience working in cloud environments (Azure, GCP, AWS)
You are also expected to have:
- A higher-level degree (MSc/PhD) in a numerate discipline
- Excellent leadership skills
- The ability to articulate complex data science concepts to both technical and non-technical audiences
- Banking and/or Financial Services experience
Bonus points:
- Entries into Kaggle competitions
- Creativity, resourcefulness and a collaborative spirit
Data at Capco: Capco's global Data Practice of 800+ practitioners is an established team of data strategists, analysts, scientists, architects, and engineers who help client teams harness the power of data to drive insight, optimise performance, and commercialise data opportunities. We enable financial institutions to become data-driven by helping transform their understanding and use of data to derive value. We translate strategy into action - designing, implementing, and mobilising innovative data capabilities with a focus on efficiency and scalability, and partnering with leading vendors and industry bodies. Collaboration, enthusiasm, and encouragement are key to maintaining our culture through working in an environment where clients become colleagues. We are looking for a candidate who will empower the team, drive high standards, grow the capability, and deliver customer-focused outcomes. Our clients and peers have voted us A-Team Best Consultancy in Data Management in consecutive years, valuing our ability to identify and develop top data talent. In addition, we have proudly won Best Consultancy (2022) at the British Banking Awards and Best ESG Data & Technology Consultancy at the annual ESG Insight Awards. WHY JOIN CAPCO? You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.
We offer:
- A work culture focused on innovation and creating lasting value for our clients and employees
- Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
- A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
- A diverse, inclusive, meritocratic culture
- Enhanced and competitive family-friendly benefits, including maternity / adoption / shared parental leave and paid leave for sickness, pregnancy loss, fertility treatment, menopause and bereavement.
My client is seeking a Senior Data Scientist to join the team on a permanent basis, specialising in ontology, knowledge engineering, knowledge graphs, and more. This role is not for the faint-hearted but for those energised by challenges, inspired by innovation, and driven by real-world impact. The role can pay up to £80,000 per annum for the right person, so please click apply if you feel the below suits your skill set:
Qualifications:
- Education: MSc/PhD in Data Science, Computer Science, or a related field
- Experience: 5-7 years of hands-on experience in the industry
Skills:
- Deep knowledge of First-Order Logic, Description Logic, and the Web Ontology Language (OWL)
- Proficiency in graph neural networks (GNNs), network science, and semantic networks
- Mastery of task-specific finetuning, including data extraction, classification, and generation
- A comprehensive understanding of traditional statistics, Machine Learning (ML), and multi-objective optimisation techniques
- Excellent communication and written skills
- Passion for continuous learning and collaboration as a team player
Tools:
- Python: structured workflows using environments, Conda, Git, GitFlow, etc.
- Databases: traditional (SQL) and graph databases (openCypher, Gremlin, SPARQL)
- Technologies: MLOps/LLMOps, Protege, cloud environments (Azure/AWS)
Aug 15, 2023
Full time
Job Introduction: BBC R&D has recently established an Automation Applied Research Area focussed on the use of Machine Learning across the BBC. Automation works closely with other BBC R&D Applied Research Areas, BBC Product and Technology Groups, and senior business stakeholders across the BBC to accelerate Machine Learning-based innovation. Reporting to the Head of Automation, this role will lead a team of experts exploring the ML platforms, tools, performance, and sustainability that will underpin the BBC's approach to Machine Learning innovation. It will ensure that best practice and correct technology choices are downstreamed into R&D ML applications, as well as supporting the wider BBC in making the right strategic decisions for its future ML technology. BBC R&D has five applied research areas, focussed on Audiences, Automation, Distribution, Infrastructure and Production, looking to solve some of the most interesting challenges in Media and Broadcasting, as well as our Commercial, Partnerships & Engagement team, who ensure we're collaborating with the right external partners and optimising commercial returns through the exploitation of our Intellectual Property and grant funding. Our work supports the BBC's current ambition as well as informing future strategy. If you're excited by the prospect of working in an innovative environment with smart and supportive colleagues, then BBC R&D is the place for you.
Role Responsibility: This is a hands-on role. Your key responsibilities will be:
- Build and lead a team of ML engineers to develop infrastructure that manages the ML lifecycle through experimentation, deployment, and testing
- Own the Automation MLOps strategy, roadmap, and backlog
- Provide leadership and guidance on the delivery of ML models from prototype to production; mentor and coach team members on ML engineering best practice; and work alongside researchers to enable the BBC to benefit more rapidly from fundamental ML research
- Contribute to the design of ML systems and infrastructure to shape how ML is used across the BBC
- Develop relationships with pan-BBC and external contributors and stakeholders; you will need to bring long-term ambitions to life to secure the support and buy-in required for tangible and intangible benefits and outcomes
- Focus on ensuring our ML technology delivers on performance, cost and sustainability goals, and supports the BBC's responsible and ethical ML objectives
- Work with our Technology Strategy and Governance team to identify and communicate the strategic investment decisions required to mature the BBC's ML technology in line with business needs
Are you the right candidate?
- Solid understanding of machine learning concepts and algorithms
- Experience deploying machine learning solutions
- Expert knowledge of Python programming and machine learning libraries (Scikit-learn, TensorFlow, Keras, PyTorch, MXNet, etc.)
- Experience implementing ML automation and MLOps (scalable deployment practices aimed at deploying and maintaining machine learning models in production reliably and efficiently) and related tools (e.g. MLflow, Kubeflow, Airflow, SageMaker)
- Experience working in accordance with DevOps principles and with industry deployment best practices, using CI/CD tools and infrastructure as code (e.g. Docker, Kubernetes, Terraform)
- Experience with at least one cloud platform (e.g. AWS, GCP, Azure) and its associated machine learning services, e.g. Amazon SageMaker, Azure ML, Databricks
Package Description: Band: E. Contract type: Permanent - Full time. Location: UK wide. We're happy to discuss flexible working. Please indicate your choice under the flexible working question in the application. There is no obligation to raise this at the application stage, but if you wish to do so, you are welcome to. Flexible working will be part of the discussion at offer stage.
Excellent career progression - the BBC offers great opportunities for employees to seek new challenges and work in different areas of the organisation. Unrivalled training and development opportunities - our in-house Academy hosts a wide range of internal and external courses and certification. Benefits - we offer a competitive salary package, a flexible 35-hour working week for work-life balance, 26 days' holiday (1 of which is a corporation day) with the option to buy an extra 5 days, a defined pension scheme, and discounted dental, health care, gym and much more. The situation regarding the coronavirus outbreak is developing quickly and the BBC is keen to continue to ensure the safety and wellbeing of people across the BBC, while continuing to protect our services. To reduce the risk, access to BBC buildings is limited to those essential to our broadcast output. From Wednesday 18th March until further notice, all assessments and interviews will be conducted remotely. About the BBC: We don't focus simply on what we do - we also care how we do it. Our values and the way we behave are important to us. Please make sure you've read about our values and behaviours in the document attached below. Diversity matters at the BBC. We have a working environment where we value and respect every individual's unique contribution, enabling all of our employees to thrive and achieve their full potential.
We want to attract the broadest range of talented people to be part of the BBC - whether that's to contribute to our programming or our wide range of non-production roles. The more diverse our workforce, the better able we are to respond to and reflect our audiences in all their diversity. We are committed to equality of opportunity and welcome applications from individuals, regardless of age, gender, ethnicity, disability, sexual orientation, gender identity, socio-economic background, religion and/or belief. We will consider flexible working requests for all roles, unless operational requirements prevent otherwise. To find out more about Diversity and Inclusion at the BBC, please click here
Sep 23, 2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Cambridge, USA - Massachusetts - Waltham, USA - New Jersey - Trenton, USA - Pennsylvania - Upper Providence Posted Date: Jun 6 2022 The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. Designing and implementing data flows and data products that leverage internal and external data assets and tools to drive discovery and development is a key objective for the DSDE team within GSK's Pharmaceutical R&D organization. There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust:
- Automation of end-to-end data flows: faster and more reliable ingestion of high-throughput data in genetics, genomics and multi-omics, to extract the value of investments in new technology (instrument to analysis-ready data)
- Enabling governance by design of external and internal data: with engineered practical solutions for controlled use and monitoring
- Innovative disease-specific and domain-expert-specific data products: to enable computational scientists and their research unit collaborators to get to key insights faster, leading to faster biopharmaceutical development cycles.
- Supporting end-to-end code traceability and data provenance: increasing assurance of data integrity through automation and integration
- Improving engineering efficiency: extensible, reusable, scalable, updateable, maintainable, virtualized, traceable data and code, driven by data engineering innovation and better resource utilization
We are looking for experienced Senior DevOps Engineers to join our growing Data Ops team. A Senior DevOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas:
- Deliver declarative components for common data ingestion, transformation and publishing techniques
- Define and implement data governance aligned to modern standards
- Establish scalable, automated processes for data engineering teams across GSK
- Act as a thought leader and partner with wider DSDE data engineering teams to advise on implementation and best practices
- Cloud Infrastructure-as-Code
- Define service and flow orchestration
- Data as a configurable resource (including configuration-driven access to scientific data modelling tools)
- Observability (monitoring, alerting, logging, tracing, etc.)
- Enable quality engineering through KPIs, code coverage and quality checks
- Standardise a GitOps/declarative software development lifecycle
- Audit as a service
Senior DevOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation.
They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services are meeting customer demand and having an impact, and iterate to deliver and improve on those metrics in an agile fashion. Successful Senior DevOps Engineers are developing expertise with the types of data and tools leveraged in the biomedical and scientific data engineering space, and have the following skills and experience (with significant depth in one or more of these areas):
- Demonstrable experience deploying robust modularised/container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem
- Significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, etc.), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow
- Programming in Python, Scala or Go
- Embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, etc.)
- Leveraging major cloud providers, both via Kubernetes and via vendor-specific services
- Authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT)
- Common distributed data tools (e.g. Spark, Hive)
The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one. Why you?
Basic Qualifications: We are looking for professionals with these required skills to achieve our goals:
- A Master's in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 5 years' job experience (or a PhD plus 3 years' job experience)
- Experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps / etc.)
- Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.)
- Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, and optimizing against self-describing formats such as ORC or Parquet)
- Experience with search/indexing systems (e.g. Elasticsearch)
- Expertise with agile development in Python, Scala, Go, and/or C++
- Experience building reusable components on top of the CNCF ecosystem, including Kubernetes
- A metrics-first mindset
- Experience mentoring junior engineers into deep technical expertise
Preferred Qualifications: If you have the following characteristics, it would be a plus:
- Experience with agile software development
- Experience with building and designing a DevOps-first way of working
- Experience with building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem)
Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities: Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour and risk. Committed to delivering high-quality results, overcoming challenges, focusing on what matters, and execution.
Continuously looking for opportunities to learn, build skills and share learning. Sustaining energy and wellbeing. Building strong relationships and collaboration, with honest and open conversations. Budgeting and cost consciousness. As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, feeling good, and growing their careers. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only). We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore the opportunities with our hiring team. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary.
Sep 23, 2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Cambridge, USA - Massachusetts - Waltham, USA - New Jersey - Trenton, USA - Pennsylvania - Upper Providence Posted Date: Jun 6 2022 The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. TheData Framework and Opsorganization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. Achieving delivery of the right data to the right people at the right time needs design and implementation of data flows and data products which leverage internal and external data assets and tools to drive discovery and development is a key objective for the Data Science and Data Engineering (DSDE) team within GSK's Pharmaceutical R&D organization. There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust: Automation of end-to-end data flows :Faster and reliable ingestion of high throughput data in genetics, genomics and multi-omics, to extract value of investments in new technology (instrument to analysis-ready data in Enabling governance by design of external and internal data :with engineered practical solutions for controlled use and monitoring Innovative disease-specific and domain-expert specific data products : to enable computational scientists and their research unit collaborators to get faster to key insights leading to faster biopharmaceutical development cycles. 
Supporting end-to-end code traceability and data provenance: increasing assurance of data integrity through automation and integration. Improving engineering efficiency: extensible, reusable, scalable, updateable, maintainable, virtualized, traceable data and code, driven by data engineering innovation and better resource utilization. We are looking for experienced Senior DevOps Engineers to join our growing DataOps team. A Senior DevOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas: delivering declarative components for common data ingestion, transformation and publishing techniques; defining and implementing data governance aligned to modern standards; establishing scalable, automated processes for data engineering teams across GSK; acting as a thought leader and partner with wider DSDE data engineering teams to advise on implementation and best practices; cloud Infrastructure-as-Code; defining service and flow orchestration; data as a configurable resource (including configuration-driven access to scientific data modelling tools); observability (monitoring, alerting, logging, tracing, etc.); enabling quality engineering through KPIs, code coverage and quality checks; standardising a GitOps/declarative software development lifecycle; audit as a service. Senior DevOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation. 
They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services meet customer demand and have an impact, and iterate to deliver and improve on those metrics in an agile fashion. Successful Senior DevOps Engineers are developing expertise with the types of data and tools leveraged in the biomedical and scientific data engineering space, and have the following skills and experience (with significant depth in one or more of these areas): demonstrable experience deploying robust modularised/container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem; significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, etc.), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow; programming in Python, Scala or Go; embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, etc.); leveraging major cloud providers, both via Kubernetes and via vendor-specific services; authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT); common distributed data tools (e.g. Spark, Hive). The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one. Why you? 
Basic Qualifications: We are looking for professionals with these required skills to achieve our goals: a Master's in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 5 years' job experience (or a PhD plus 3 years' job experience); experience with DevOps tools and concepts (e.g. Jira, GitLab, Jenkins, CircleCI, Azure DevOps, etc.); excellence with common distributed data tools in a production setting (Spark, Kafka, etc.); experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, and optimizing against self-describing formats such as ORC or Parquet); experience with search/indexing systems (e.g. Elasticsearch); expertise with agile development in Python, Scala, Go, and/or C++; experience building reusable components on top of the CNCF ecosystem, including Kubernetes; a metrics-first mindset; experience mentoring junior engineers into deep technical expertise. Preferred Qualifications: If you have the following characteristics, it would be a plus: experience with agile software development; experience building and designing a DevOps-first way of working; experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem). LI-GSK Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities: operating at pace and agile decision making - using evidence and applying judgement to balance pace, rigour and risk; commitment to delivering high-quality results, overcoming challenges, focusing on what matters, and execution. 
Continuously looking for opportunities to learn, build skills and share learning. Sustaining energy and wellbeing. Building strong relationships and collaboration, with honest and open conversations. Budgeting and cost consciousness. As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good and keep growing their careers. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only). We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore the opportunities with our hiring team. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary. Click apply for full job details.
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Waltham, USA - Pennsylvania - Upper Providence, Warren NJ Posted Date: Aug The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. Designing and implementing data flows and data products that leverage internal and external data assets and tools to drive discovery and development is a key objective for the DSDE team within GSK's Pharmaceutical R&D organisation. There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust: Automation of end-to-end data flows: faster and more reliable ingestion of high-throughput data in genetics, genomics and multi-omics, to extract value from investments in new technology (instrument to analysis-ready data in ...). Enabling governance by design of external and internal data: with engineered practical solutions for controlled use and monitoring. Innovative disease-specific and domain-expert-specific data products: to enable computational scientists and their research unit collaborators to reach key insights faster, leading to faster biopharmaceutical development cycles. 
Supporting end-to-end code traceability and data provenance: increasing assurance of data integrity through automation and integration. Improving engineering efficiency: extensible, reusable, scalable, updateable, maintainable, virtualized, traceable data and code, driven by data engineering innovation and better resource utilization. We are looking for an experienced Sr. DataOps Engineer to join our growing DataOps team. A Sr. DataOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas: delivering declarative components for common data ingestion, transformation and publishing techniques; defining and implementing data governance aligned to modern standards; establishing scalable, automated processes for data engineering teams across GSK; acting as a thought leader and partner with wider DSDE data engineering teams to advise on implementation and best practices; cloud Infrastructure-as-Code; defining service and flow orchestration; data as a configurable resource (including configuration-driven access to scientific data modelling tools); observability (monitoring, alerting, logging, tracing, etc.); enabling quality engineering through KPIs, code coverage and quality checks; standardising a GitOps/declarative software development lifecycle; audit as a service. Sr. DataOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation. 
They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services meet customer demand and have an impact, and iterate to deliver and improve on those metrics in an agile fashion. A successful Sr. DataOps Engineer is developing expertise with the types of data and tools leveraged in the biomedical and scientific data engineering space, and has the following skills and experience (with significant depth in one or more of these areas): demonstrable experience deploying robust modularised/container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem; significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, etc.), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow; programming in Python, Scala or Go; embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, etc.); leveraging major cloud providers, both via Kubernetes and via vendor-specific services; authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT); common distributed data tools (e.g. Spark, Hive). The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one. Why you? 
Basic Qualifications: a Bachelor's degree in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 7 years' job experience, or a Master's degree plus 5 years' experience (or a PhD plus 3 years' job experience); deep experience with DevOps tools and concepts (e.g. Jira, GitLab, Jenkins, CircleCI, Azure DevOps, etc.); excellence with common distributed data tools in a production setting (Spark, Kafka, etc.); experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, and optimizing against self-describing formats such as ORC or Parquet); experience with search/indexing systems (e.g. Elasticsearch); deep expertise with agile development in Python, Scala, Go, and/or C++; experience building reusable components on top of the CNCF ecosystem, including Kubernetes; a metrics-first mindset; experience mentoring junior engineers into deep technical expertise. Preferred Qualifications: If you have the following characteristics, it would be a plus: experience with agile software development; experience building and designing a DevOps-first way of working; demonstrated experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem). LI-GSK Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities: operating at pace and agile decision making - using evidence and applying judgement to balance pace, rigour and risk; commitment to delivering high-quality results, overcoming challenges, focusing on what matters, and execution. 
Continuously looking for opportunities to learn, build skills and share learning. Sustaining energy and wellbeing. Building strong relationships and collaboration, with honest and open conversations. Budgeting and cost consciousness. As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good and keep growing their careers. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to neurodiversity, race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only). We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore the opportunities with our hiring team. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary. Click apply for full job details.
Sep 23, 2022
Full time
Support Engineer (Cloud Kubernetes) We are seeking a fully remote Support Engineer with exposure to Linux and cloud environments to join a Series E-funded team at the centre of AI data science innovation, helping global teams to further medical discovery, improve autonomous vehicle tech, and even improve music recommendation engines. As a Cloud- and Kubernetes-focused Support Engineer, you will join a customer success and support engineering group who work closely with customers to solve complex platform issues and urgent queries at a senior level. They are looking for people interested in working with container and cloud tech, MLOps and Linux environments, but importantly, who love troubleshooting a range of challenges as their platform deploys to broad multi-cloud, Kubernetes, and open-source tech environments. The role will suit anyone from a strong customer-facing background with the ability to diagnose and resolve complex production issues; we are preferably looking for those who have supported a complex product deployed into customer environments. Skills required for the Support Engineer: technical knowledge including experience supporting Linux environments, and exposure to cloud technologies (AWS, GCP or Azure) or container tooling (Kubernetes, Docker); demonstrable customer/client-facing experience, preferably supporting a software product or platform to customers directly; experience supporting complex environments and/or software products (any exposure to Data Science products would be beneficial but certainly not essential). Salary: £80,000 - £105,000 plus stock and benefits. Location: permanent, fully remote, with a London office if desired. No travel required!
Feb 05, 2022
Full time
Job description We currently have an opportunity for an Enterprise Data Architect to join our IT team in London. The Enterprise Data Architect will ensure A&O maintains, and stays focused on, an enterprise data governance framework for data policies, standards, processes and practices across A&O, to achieve the level of consistency and quality required to meet A&O's business needs. The role will be the custodian of data, setting the vision for A&O's use of data, and will manage A&O's data catalogue to improve the quality and value of core data assets, respond to regulatory requirements, and support strategic functional requirements. The role will manage the way that people and programmes engage with data, ensuring that data can be turned into valuable insights that inform business decisions. The role holder will consistently communicate the business benefit of data standards and will champion and govern those standards across the firm. The role holder will ensure internal procedures keep pace with evolving data regulation and compliance. Role and responsibilities Manage and govern a unified view of A&O data, its data lineage and provenance. Govern and manage the A&O data catalogue across all data assets. 
Ensure data is discoverable, well understood, and trusted. Drive innovation and growth through the use of data, unlocking data for insightful business decisions. Help establish best practice, rules and ownership of A&O's taxonomy and ontology. Build and maintain good working relationships with A&O's Data Stewards to ensure internal stakeholders and leaders are informed and aligned. Create a Data Governance and Quality function, including processes and tools, to achieve A&O's data objectives. Define a proactive approach to the management of data models, definitions, data governance, ethics and data processing rules to provide timely, appropriate, accurate and up-to-date information at the point of need. Actively use trends in data quality and data governance to drive positive change in people, process, technology and governance. Develop and be responsible for the standards, policies and procedures for the ongoing implementation of data governance. Ensure alignment between data governance best practice, Data Privacy procedures and requirements, and the IT Information Security strategy. Work closely with, and set direction for, the Trusted Data Platform team, staying close to opportunities and challenges that exist within A&O, and provide guidance on new products, approaches and supplier relationships that could impact data collection, processing and analytics. Key requirements Business Competencies Aptitude for and experience of creating, managing, motivating and developing teams. Commercial acumen, including an understanding of the overall picture of IT service costs and the value they add to the business. Excellent communication, interpersonal and influencing skills, including the ability to communicate on both technical and business levels. Excellent customer-facing skills with a good grasp of key drivers and requirements within the business. 
High level of personal credibility, impact and influence, with a proven ability to work effectively and persuasively at all levels of the business. Knowledge & Experience Knowledge and awareness of business and technology issues related to the management of enterprise-wide data. Expert-level data modelling experience, including a deep understanding of relational, taxonomical and ontological modelling approaches. Strong recent experience of developing and managing an enterprise data catalogue. Proven experience of developing and implementing a Data Governance Framework and a Data Quality Management service. Keen eye for detail with a genuine interest in understanding the journey data takes throughout A&O. Demonstrable experience in building, delivering and managing detailed data quality measurement frameworks. Comfortable presenting complex data models, flows and relationships to non-data peers and colleagues. An expert in information management practices, including information lifecycle management, data profiling, master data management, data audits and requirements gathering. Clear knowledge and experience relating to the application of data governance, data privacy and data ethics principles to support and drive data strategy. Evidence of establishing governance processes, with engaged teams, leading and implementing enterprise-wide data governance with demonstrable improvements in data quality. Preferable: knowledge and experience relating to MLOps, testing and quality management. Preferable: experience in managing, versioning and automating data and systems. Preferable: experience of unstructured and semi-structured data and associated processing approaches; NLP and machine learning experience a distinct advantage. 
Preferable: Experienced at using relevant frameworks such as GDPR and other data privacy regulations to drive data governance ethics Preferable: Experience using automated approaches to manage and support data privacy compliance Is a collaborative team player fostering strong working relationships, with strong culture awareness Strong leadership and influencing skills; interacting with senior stakeholders; highly personable Strong ability to extract information by questioning, active listening and interviewing Results orientated to ensure change and delivery project metrics are meaningful and supported with robust business data Analytical with strong numeracy and good statistical skills Experience of working in Agile project environments Ability to work with technical, (developers, data engineers, data scientists) and nontechnical staff Allen & Overy LLP is committed to being an inclusive employer and we are happy to consider flexible working arrangements. Additional information - External It's Time Allen & Overy is a leading global law firm operating in over thirty countries. By turning our insight, technology and talent into ground-breaking solutions, we've earned a place at the forefront of our industry. Our lawyers are leaders in their field - and the same goes for our support teams. Ambitious, driven and open to fresh perspectives, we find innovative new ways to deliver our services and maintain our reputation for excellence, in all that we do. The nature of law is changing and with that change brings unique opportunities. With our collaborative working culture, flexibility, and a commitment to your progress, we build rewarding careers. By joining our global team, you are supported by colleagues from around the world. If you're ready for a new challenge, it's time to seize the opportunity.
Sep 15, 2021
Full time
Job description

We currently have an opportunity for an Enterprise Data Architect to join our IT team in London. The Enterprise Data Architect will ensure A&O maintains a focus on an enterprise data governance framework for data policies, standards, processes and practices across A&O, to achieve the level of consistency and quality required to meet A&O's business needs. The role will be the custodian of data, setting the vision for A&O's use of data, and will manage A&O's data catalogue to improve the quality and value of core data assets, respond to regulatory requirements and support strategic functional requirements. The role will manage the way that people and programmes engage with data, ensuring that data can be turned into valuable insights that inform business decisions. The role holder will consistently communicate the business benefit of data standards and will champion and govern those standards across the firm. The role holder will ensure internal procedures keep pace with evolving data regulation and compliance.

Role and responsibilities
- Manage and govern a unified view of A&O data, its data lineage and provenance
- Govern and manage the A&O data catalogue across all data assets
- Ensure data is discoverable, well understood and trusted
- Drive innovation and growth through the use of data, unlocking data for insightful business decisions
- Help establish best practice, rules and ownership of A&O's taxonomy and ontology
- Build and maintain good working relationships with A&O's Data Stewards to ensure internal stakeholders and leaders are informed and aligned
- Create a Data Governance and Quality function, including processes and tools to achieve A&O's data objectives
- Define a proactive approach to the management of data models, definitions, data governance, ethics and data processing rules to provide timely, appropriate, accurate and up-to-date information at the point of need
- Actively use trends in data quality and data governance to drive positive change in people, process, technology and governance
- Develop and be responsible for the standards, policies and procedures for the ongoing implementation of data governance
- Ensure alignment between data governance best practice, Data Privacy procedures and requirements, and the IT Information Security strategy
- Work closely with and set direction for the Trusted Data Platform team, staying close to opportunities and challenges within A&O, and provide guidance on new products, approaches and supplier relationships that could impact data collection, processing and analytics

Key requirements

Business Competencies
- Aptitude for and experience of creating, managing, motivating and developing teams
- Commercial acumen, including an understanding of the overall picture of how the IT service costs and adds value to the business
- Excellent communication, interpersonal and influencing skills, including the ability to communicate at both technical and business levels
- Excellent customer-facing skills with a good grasp of key drivers and requirements within the business
- High level of personal credibility, impact and influence, with a proven ability to work effectively and persuasively at all levels of the business

Knowledge & Experience
- Knowledge and awareness of business and technology issues related to the management of enterprise-wide data
- Expert-level data modelling experience, including a deep understanding of relational, taxonomical and ontological modelling approaches
- Strong recent experience of developing and managing an enterprise data catalogue
- Proven experience of developing and implementing a Data Governance Framework and a Data Quality Management service
- Keen eye for detail with a genuine interest in understanding the journey data takes throughout A&O
- Demonstrable experience in building, delivering and managing detailed data quality measurement frameworks
- Comfortable presenting complex data models, flows and relationships to non-data peers and colleagues
- An expert in information management practices, including information lifecycle management, data profiling, master data management, data audits and requirements gathering
- Clear knowledge and experience of applying data governance, data privacy and data ethics principles to support and drive data strategy
- Evidence of establishing governance processes with engaged teams, leading and implementing enterprise-wide data governance, and delivering demonstrable improvements in data quality
- Preferable: knowledge and experience relating to MLOps, testing and quality management
- Preferable: experienced in managing, versioning and automating data and systems
- Preferable: experience of unstructured and semi-structured data and associated processing approaches; NLP and machine learning experience a distinct advantage
- Preferable: experienced at using relevant frameworks such as GDPR and other data privacy regulations to drive data governance ethics
- Preferable: experience using automated approaches to manage and support data privacy compliance
- A collaborative team player who fosters strong working relationships, with strong cultural awareness
- Strong leadership and influencing skills; comfortable interacting with senior stakeholders; highly personable
- Strong ability to extract information through questioning, active listening and interviewing
- Results-oriented, ensuring change and delivery project metrics are meaningful and supported by robust business data
- Analytical, with strong numeracy and good statistical skills
- Experience of working in Agile project environments
- Ability to work with technical (developers, data engineers, data scientists) and non-technical staff

Allen & Overy LLP is committed to being an inclusive employer and we are happy to consider flexible working arrangements.

Additional information - External

It's Time

Allen & Overy is a leading global law firm operating in over thirty countries. By turning our insight, technology and talent into ground-breaking solutions, we've earned a place at the forefront of our industry. Our lawyers are leaders in their field - and the same goes for our support teams. Ambitious, driven and open to fresh perspectives, we find innovative new ways to deliver our services and maintain our reputation for excellence in all that we do. The nature of law is changing, and that change brings unique opportunities. With our collaborative working culture, flexibility and a commitment to your progress, we build rewarding careers. By joining our global team, you are supported by colleagues from around the world. If you're ready for a new challenge, it's time to seize the opportunity.