Description
We are looking for a Data Engineer to help us build and maintain scalable and resilient pipelines that will ingest, process, and deliver the data needed for predictive and descriptive analytics. These data pipelines will further connect to machine learning pipelines to facilitate automatic retraining of our models.
We are a diverse group of data scientists, data engineers, software engineers, and machine learning engineers from over 30 different countries. We are smart and fast-moving, operating in small teams, with freedom for independent work and fast decision-making.
To empower scientists and radically improve how science is published, evaluated and disseminated to researchers, innovators and the public, we have built our own state-of-the-art Artificial Intelligence Review Assistant (AIRA), backed by cutting-edge machine learning algorithms.
Key Responsibilities
Work in a team of machine learning engineers responsible for the productization of prototypes developed by data scientists.
Collaborate with data scientists, machine learning engineers, and other data engineers to design scalable, reliable, and maintainable ETL processes that ensure data scientists and automated ML processes have the necessary data available
Research and adopt the best DataOps & MLOps standards to design and develop scalable end-to-end data pipelines.
Identify opportunities for data process automation.
Establish and enforce best practices (e.g. in development, quality assurance, optimization, release, and monitoring).
Requirements
Degree in Computer Science or similar
Proven experience as a Data Engineer
Proficiency in Python
Experience with a Cloud Platform (e.g. Azure, AWS, GCP)
Experience with a workflow engine (e.g. Data Factory, Airflow)
Experience with SQL and NoSQL (e.g. MongoDB) databases
Experience with Hadoop & Spark
Great communication, teamwork, problem-solving, and organizational skills.
Nice To Have
Understanding of supervised and unsupervised machine learning algorithms
Stream-processing frameworks (e.g. Kafka)
Benefits
Competitive salary.
Participation in Frontiers annual bonus scheme
25 leave days + 4 well-being days (pro rata and expiring each year on 31st of December)
Great work-life balance.
Opportunity to work remotely
Fresh fruit, snacks and coffee.
English classes.
Team building/sport activities and monthly social events.
Lots of opportunities to work with exciting technologies and solve challenging problems
Who we are
Frontiers is an award-winning open science platform and leading open access scholarly publisher. We are one of the largest and most cited publishers globally. Our journals span science, health, humanities and social sciences, engineering, and sustainability, and we continue to expand into new academic disciplines so more researchers can publish open access.
Dec 23, 2021
Full time
As Google Cloud's premier partner in AI, Datatonic provides world-class businesses with cutting-edge data solutions in the cloud. We help clients take leading technology to the limits by combining our expertise in machine learning, data engineering, and analytics. With Google Cloud as our foundation, we help businesses future-proof their solutions, deepen their understanding of consumers, increase competitive advantage, and unlock operational efficiencies. Our team consists of experts in machine learning, data science, software engineering, mathematics, and design. We share a passion for data & analysis, operate at the cutting edge, and believe in a pragmatic approach to solving hard problems.
THE ROLE
As a Machine Learning Engineer, you'll know how to engineer beautiful code in Python and take pride in what you produce. You'll be an advocate of high-quality engineering and best practice in production software as well as rapid prototypes. Whilst the position is a hands-on technical role, we'd be particularly interested to find candidates with a desire to lead projects and take an active role in leading client discussions. Your responsibilities will involve building trusted relationships with prospects, finding creative ways to use machine learning to solve problems, scoping projects, and overseeing the delivery of these engagements. To be successful, you will need strong ML & Data Science fundamentals and will know the right tools and approach for each ML use case. You'll be comfortable with model optimisation and deployment tools and practices. Furthermore, you'll also need excellent communication and consulting skills, with the desire to meet real business needs and deliver innovative solutions using AI & Cloud.
Your responsibilities will include:
Taking vague requirements and translating them into models that solve real-world problems
Running machine learning experiments using a programming language with machine learning libraries
Optimizing solutions for performance and scalability
Implementing custom machine learning code
Data engineering, i.e. ensuring a good data flow between database and backend systems
Data science, i.e. analyzing data and coming up with use cases
MLOps, i.e. automating ML workflows with testing, reproducibility, and metadata/feature storage
Designing ML architectures on Google Cloud
Requirements
1-3 years' experience as an ML Engineer, ideally from a consulting background
Comfortable working with Python as a backend language, delivering code in well-tested CI/CD pipelines
Familiarity with cloud environments (Google Cloud, AWS, Azure)
Experience with software engineering
Good knowledge of SQL
Knowledge of scaling up computations (GPUs, distributed computing)
Familiarity with exposing ML components through web services or wrappers (e.g. Flask in Python)
Strong communication and presentation skills
Benefits
The Basics: 25 days holiday + bank holidays, private healthcare, discounted gym membership (Nuffield + Hussle), life insurance, income protection, employee assistance program, and a pension scheme with 3% employee contribution on qualifying earnings, rising 1% per year of service to a maximum of 10%
Extras: £100 home equipment allowance, Cycle to Work & Tech Scheme
Learning: Datatonic encourages continuous learning at all levels, with the freedom to explore the latest tools & technologies. You will receive an individual training budget and/or conference allowance.
Team: Career development, impact, innovation. A personalised development plan to ensure you hit your professional goals, with a clear roadmap for progression. Experiment and bring forward ideas, and create impactful and meaningful work in a creative & collaborative environment - even just for fun!
Office: A modern, collaborative working space set in the innovation hub of Canary Wharf with panoramic views of London, plus regular social events & team off-sites.
Datatonic is an equal-opportunity employer. We're committed to building an inclusive team that welcomes a diversity of perspectives, people, and backgrounds regardless of race, colour, national origin, gender, sexual orientation, age, religion, disability, citizenship, veteran status, or any other protected status. Our current team is made up of folks from very different backgrounds, including a former librarian, oceanographer and comic bookstore owner! If you are on the fence about whether you meet our requirements, we encourage you to apply anyway. Please reach out to us directly at if you need assistance or accommodation due to disability.
Apr 30, 2024
Full time
Senior Data Scientist
Joining Capco means joining an organisation that is committed to an inclusive working environment where you're encouraged to . We celebrate individuality and recognize that diversity and inclusion, in all forms, is critical to success. It's important to us that we recruit and develop as diverse a range of talent as we can, and we believe that everyone brings something different to the table - so we'd love to know what makes you different. Such differences may mean we need to make changes to our process to allow you the best possible platform to succeed, and we are happy to cater to any reasonable adjustments you may require. You will find the section to let us know of these at the bottom of your application form, or you can mention it directly to your recruiter at any stage and they will be happy to help.
ABOUT US
Capco is a global technology and business consultancy with a focus on the financial services sector. We are passionate about helping our clients succeed in an ever-changing industry, combining innovative thinking with unique expert know-how. The solutions we offer our customers every day are as diverse as our employees. We are/have:
Experts across the Capital Markets, Insurance, Payments, Retail Banking and Wealth & Asset Management domains
Deep knowledge of various financial services offerings, including Finance, Risk and Compliance, Financial Crime, Core Banking, etc.
Committed to growing our business and hiring the best talent to help us get there
Focused on maintaining our nimble, agile, and entrepreneurial culture
What we're looking for:
As part of our ongoing global expansion strategy, we are currently hiring Senior Data Scientists as Capco continues to grow its UK Data Practice in our London office. The Senior Data Scientist should be experienced in using statistical, algorithmic, mining, and/or visualisation techniques to address complex business problems.
You will be an SME on data science / ML solutions and use case development. The successful candidate will advise the Data Scientists and Engineers on technical requirements around model design, model architecture, model calibration, solution design, and solution outputs. In addition, they will be expected to leverage their expertise to train and grow Data Scientists at a range of experience levels.
Responsibilities
Develop prototype and proof-of-concept solutions making use of cutting-edge AI, machine learning, and statistical approaches to solve real-world business problems
Technically lead multiple pods of data scientists & engineers to develop solutions from a business problem into POCs, MVPs, or fully fledged solutions while collaborating closely with domain experts
Help transition solutions from development environments to production
Act as a subject matter expert in data science and machine learning and coach other data scientists
Essential skills
Analytics, modelling, or software development experience, including coding/software development skills. In particular:
Hands-on experience in building and implementing data science and machine learning solutions to tackle business problems
Comfort with rapid prototyping and disciplined software development processes
Experience with Python; ML libraries (e.g. spaCy, NumPy, SciPy, Transformers); data tools and technologies (Spark, Hadoop, Hive, Redshift, SQL); and toolkits for ML and deep learning (SparkML, TensorFlow, Keras)
Demonstrated ability to work on multi-disciplinary teams with diverse skillsets
Deploying machine learning models and systems to production (DevOps, MLOps, CI/CD)
Experience working in cloud environments (Azure, GCP, AWS)
You are also expected to have:
A higher-level degree (MSc/PhD) in a numerate discipline
Excellent leadership skills
The ability to articulate complex data science concepts to both technical and non-technical audiences
Banking and/or Financial Services experience
Bonus points:
Entries into Kaggle competitions
Creativity, resourcefulness, and a collaborative spirit
Data at Capco
Capco's global Data Practice of 800+ practitioners is an established team of data strategists, analysts, scientists, architects, and engineers who help client teams harness the power of data to drive insight, optimise performance, and commercialise data opportunities. We enable financial institutions to become data-driven by helping transform their understanding and use of data to derive value. We translate strategy into action - designing, implementing, and mobilising innovative data capabilities with a focus on efficiency and scalability, and partnering with leading vendors and industry bodies. Collaboration, enthusiasm, and encouragement are key to maintaining our culture through working in an environment where clients become colleagues. We are looking for a candidate who will empower the team, drive high standards, grow the capability, and deliver customer-focused outcomes. Our clients and peers have voted us the A-Team Best Consultancy in Data Management in consecutive years, valuing our ability to identify and develop top data talent. In addition, we have proudly won both Best Consultancy (2022) in the British Banking Awards and Best ESG Data & Technology Consultancy as part of the annual ESG Insight Awards.
WHY JOIN CAPCO?
You will work on engaging projects with some of the largest banks in the world - projects that will transform the financial services industry.
We offer:
A work culture focused on innovation and creating lasting value for our clients and employees
Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
A diverse, inclusive, meritocratic culture
Enhanced and competitive family-friendly benefits, including maternity / adoption / shared parental leave and paid leave for sickness, pregnancy loss, fertility treatment, menopause, and bereavement
Apr 30, 2024
Full time
Site Name: London The Stanley Building
Posted Date: Mar
At GSK, we want to supercharge our data capability to better understand our patients and accelerate our ability to discover vaccines and medicines. The Onyx Research Data Platform organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward:
Building a next-generation, metadata- and automation-driven data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real time
Onyx Product Management is at the heart of our mission, ensuring that everything from our infrastructure, to platforms, to end-user-facing data assets and environments is designed to maximize our impact on R&D. The Product Management team partners with R&D stakeholders and Onyx leadership to develop a strategic roadmap for all customer-facing aspects of Onyx, including data assets, ontology, Knowledge Graph / semantic search, data / computing / analysis platforms, and data-powered applications. We are seeking a highly skilled and experienced Manager for our Computing Platform Products. In this role, you will be responsible for developing the product strategy of our Computing Platform to meet customer needs.
You will partner closely with Onyx's organizations, including AI/ML and a diverse range of R&D teams using data to accelerate drug discovery (genomic sciences, computational biology, imaging, and computational chemistry, to name a few), along with the Onyx portfolio management and engineering function heads, to deliver industry-leading solutions that power R&D workloads. You will drive the product roadmap, guide product development initiatives, and ensure the successful launch and adoption of our Compute platform, including the migration of existing GSK applications to the platform. Together, you will facilitate joint planning and execution of the product roadmap, ensuring a balance between strategic development and customer-facing deliverables. You will also play a key role in devising, tracking, and publicizing metrics that measure the impact and performance of Onyx Compute Platform Products. You will be responsible for understanding the business areas using Onyx and its platform capabilities, translating customer needs into requirements aligned with standard frameworks such as ontologies and engineering pipelines, and ensuring our R&D teams receive the solutions they need to succeed.
In this role you will:
Product Strategy: Develop and execute a comprehensive product strategy for our AI/ML compute platform, aligning with Onyx's overall goals and objectives.
Roadmap Development: Define and prioritize features, enhancements, and functionalities for the platform based on user analysis, customer feedback, and business requirements.
Cross-functional Collaboration: Collaborate closely with engineering, AI/ML, and portfolio/program teams to ensure successful product development and deployment.
Stakeholder Engagement: Collaborate with customers, partners, and internal stakeholders to understand their needs, gather feedback, and incorporate it into product planning and development processes.
Product Launch: Plan and oversee product launches, ensuring effective communication, documentation, and training to drive product adoption and success.
Performance Measurement: Define key product metrics, establish monitoring systems, and regularly evaluate and report on the performance and success of the compute platform.
Product Ambassador: Serve as an ambassador for the compute platform, effectively communicating its value and benefits to GSK Research and Development leadership and identifying potential customers.
Industry Expertise: Stay up to date with the latest advancements and trends in AI, machine learning, and compute platforms, applying industry knowledge to drive innovation and competitive advantage.
Why you?
Qualifications & Skills:
We are looking for professionals with these required skills to achieve our goals:
Significant technical product management experience
Experience of cloud computing management for scientific computing, data science, and/or artificial intelligence model training with a major cloud provider (AWS, Google Cloud, Azure, etc.)
Strong relevant experience in Data Science, Scientific Computing, Machine Learning/AI, Computer Science, Platform Engineering, or a related discipline
Excellent communication, collaboration, and stakeholder management skills
Strong leadership abilities and a self-driven, proactive approach
Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively
Preferred Qualifications & Skills:
If you have the following characteristics, it would be a plus:
Technical Knowledge: Experience with and a strong understanding of on-prem and cloud computing and software development practices; familiarity with MLOps and distributed computing is highly desirable. Experience with containers and virtual machines, including Kubernetes, Slurm, or other orchestration tools. Knowledge of modern infrastructure, including Infrastructure-as-Code tools (e.g. Terraform, Ansible). Familiarity with software engineering ways of working and engagement models.
Strong proficiency in utilizing various product management tools, including Jira and Confluence
Proven track record of managing developer platforms, tools, and services
Prior product management experience with an enterprise AI/ML platform is strongly preferred
Experience with bioinformatics/genomics databases, biological datasets, or Pharma R&D is a plus, but not required
Strategic Thinker: Proven track record in developing and executing product strategies that drive business growth and customer satisfaction
Stakeholder Skills: Demonstrated ability to lead cross-functional teams, set clear objectives, and foster a collaborative and innovative work environment. Can lead without authority.
Customer Focus: A customer-centric mindset with a deep understanding of customer needs and the ability to translate them into effective product solutions
Analytical and Data-Driven: Strong analytical skills with the ability to gather and interpret data, perform market research, and make data-driven decisions
Excellent Communication: Exceptional written and verbal communication skills, with the ability to effectively present complex ideas and concepts to both technical and non-technical audiences
Adaptability: Thrives in a fast-paced, dynamic environment and can adapt quickly to changing priorities and business needs
Closing Date for Applications: Monday 8th April 2024 (COB)
Please take a copy of the Job Description, as this will not be available after closure of the advert. When applying for this role, please use the 'cover letter' of the online application or your CV to describe how you meet the competencies for this role, as outlined in the job requirements above. The information that you have provided in your cover letter and CV will be used to assess your application.
During the course of your application, you will be requested to complete voluntary information which will be used in monitoring the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application / selection process to enable you to demonstrate your ability to perform the job requirements, please contact . This will help us to understand any modifications we may need to make to support you throughout our selection process.
Why Us?
GSK is a global biopharma company with a special purpose - to unite science, technology and talent to get ahead of disease together - so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns - as an organization where people can thrive. Getting ahead means preventing disease as well as treating it, and we aim to positively impact the health of 2.5 billion people by the end of 2030. Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it's also about making GSK a place where people can thrive. We want GSK to be a workplace where everyone can feel a sense of belonging and thrive, as set out in our Equal and Inclusive Treatment of Employees policy. We're committed to being more proactive at all levels so that our workforce reflects the communities we work and hire in, and our GSK leadership reflects our GSK workforce. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles.
Apr 29, 2024
Full time
Site Name: London The Stanley Building
Posted Date: Mar
At GSK, we want to supercharge our data capability to better understand our patients and accelerate our ability to discover vaccines and medicines. The Onyx Research Data Platform organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward:
Building a next-generation, metadata- and automation-driven data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real-time
Onyx Product Management is at the heart of our mission, ensuring that everything from our infrastructure, to platforms, to end-user-facing data assets and environments is designed to maximize our impact on R&D. The Product Management team partners with R&D stakeholders and Onyx leadership to develop a strategic roadmap for all customer-facing aspects of Onyx, including data assets, ontology, Knowledge Graph / semantic search, data / computing / analysis platforms, and data-powered applications.
We are seeking a highly skilled and experienced Manager for our Computing Platforms Products. In this role, you will be responsible for developing the product strategy of our Computing Platform to meet customer needs. You will partner closely with Onyx's organizations, including AI/ML, a diversity of R&D teams utilizing data to accelerate drug discovery (genomic sciences, computational biology, imaging, and computational chemistry, to name a few), along with the Onyx portfolio management and engineering function heads, to deliver industry-leading solutions that power R&D workloads. You will drive the product roadmap, guide product development initiatives, and ensure the successful launch and adoption of our Compute platform, including the migration of existing GSK applications to the platform. Together, you will facilitate joint planning and execution of the product roadmap, ensuring a balance between strategic development and customer-facing deliverables. You will also play a key role in devising, tracking, and publicizing metrics that measure the impact and performance of Onyx Compute Platform Products. You will be responsible for understanding the business areas using Onyx and the Platform's capabilities, translating customer needs into requirements aligned with standard frameworks such as ontologies and engineering pipelines, and ensuring our R&D teams receive the solutions they need to succeed.
In this role you will:
Product Strategy: Develop and execute a comprehensive product strategy for our AI/ML compute platform, aligning with Onyx's overall goals and objectives.
Roadmap Development: Define and prioritize features, enhancements, and functionalities for the platform based on user analysis, customer feedback, and business requirements.
Cross-functional Collaboration: Collaborate closely with engineering, AI/ML, and portfolio/program teams to ensure successful product development and deployment.
Stakeholder Engagement: Collaborate with customers, partners, and internal stakeholders to understand their needs, gather feedback, and incorporate it into product planning and development processes.
Product Launch: Plan and oversee product launches, ensuring effective communication, documentation, and training to drive product adoption and success.
Performance Measurement: Define key product metrics, establish monitoring systems, and regularly evaluate and report on the performance and success of the compute platform.
Product Ambassador: Serve as an ambassador of the compute platform, effectively communicating its value and benefits to GSK Research and Development leadership and identifying potential customers.
Industry Expertise: Stay up to date with the latest advancements and trends in AI, machine learning, and compute platforms, applying industry knowledge to drive innovation and competitive advantage.
Why you?
Qualifications & Skills:
We are looking for professionals with these required skills to achieve our goals:
Significant technical product management experience
Experience of cloud computing management for scientific computing, data science, and/or artificial intelligence model training with a major cloud provider (AWS, Google Cloud, Azure, etc.)
Strong relevant experience in Data Science, Scientific Computing, Machine Learning/AI, Computer Science, Platform Engineering, or a related discipline.
Excellent communication, collaboration, and stakeholder management skills.
Strong leadership abilities and a self-driven, proactive approach.
Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively.
Preferred Qualifications & Skills:
If you have the following characteristics, it would be a plus:
Technical Knowledge: Experience with and strong understanding of on-prem and cloud computing and software development practices; familiarity with MLOps and distributed computing is highly desirable. Experience with containers and virtual machines, including Kubernetes, Slurm, or other orchestration tools. Knowledge of modern infrastructure, including Infrastructure-as-Code tools (e.g. Terraform, Ansible). Familiarity with software engineering ways of working and engagement models. Strong proficiency in utilizing various product management tools, including Jira and Confluence. Proven track record of managing developer platforms, tools, and services. Prior product management experience with an enterprise AI/ML platform is strongly preferred. Experience with bioinformatics/genomics databases, biological datasets, or Pharma R&D is a plus, but not required.
Strategic Thinker: Proven track record in developing and executing product strategies that drive business growth and customer satisfaction.
Stakeholder Skills: Demonstrated ability to lead cross-functional teams, set clear objectives, and foster a collaborative and innovative work environment. Can lead without authority.
Customer Focus: A customer-centric mindset with a deep understanding of customer needs and the ability to translate them into effective product solutions.
Analytical and Data-Driven: Strong analytical skills with the ability to gather and interpret data, perform market research, and make data-driven decisions.
Excellent Communication: Exceptional written and verbal communication skills, with the ability to effectively present complex ideas and concepts to both technical and non-technical audiences.
Adaptability: Thrives in a fast-paced, dynamic environment and can adapt quickly to changing priorities and business needs.
Closing Date for Applications: Monday 8th April 2024 (COB)
Please take a copy of the Job Description, as this will not be available after closure of the advert. When applying for this role, please use the 'cover letter' of the online application or your CV to describe how you meet the competencies for this role, as outlined in the job requirements above. The information that you have provided in your cover letter and CV will be used to assess your application.
During the course of your application, you will be requested to complete voluntary information which will be used in monitoring the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application / selection process to enable you to demonstrate your ability to perform the job requirements, please contact . This will help us to understand any modifications we may need to make to support you throughout our selection process.
Why Us?
GSK is a global biopharma company with a special purpose - to unite science, technology and talent to get ahead of disease together - so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns - as an organization where people can thrive. Getting ahead means preventing disease as well as treating it, and we aim to positively impact the health of 2.5 billion people by the end of 2030. Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it's also about making GSK a place where people can thrive. We want GSK to be a workplace where everyone can feel a sense of belonging and thrive, as set out in our Equal and Inclusive Treatment of Employees policy. We're committed to being more proactive at all levels so that our workforce reflects the communities we work and hire in, and our GSK leadership reflects our GSK workforce. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles.
Senior Machine Learning Engineer - Remote
As a Senior Machine Learning Engineer do you want to work with a leading provider of customer engagement solutions? As a Senior Machine Learning Engineer you will have the opportunity to join a high-growth organisation providing customised software solutions that accelerate and simplify the journey of digital transformation.
Your role:
As a Senior Machine Learning Engineer, your role will be pivotal in designing, developing, testing, maintaining, and supporting software components using Python. You will be leading the development of AI projects, taking ownership and driving innovation of development tasks. You'll be maintaining and enhancing CI/CD pipelines alongside architecting and managing cloud infrastructure using Terraform.
We'd love to see these skills from you:
Proficiency in Python
Strong NLP experience - NumPy, Pandas etc.
Commercial experience leveraging open-source models, fine-tuning LLMs & RAG pipelines
Expertise in learning algorithms, neural networks and ML frameworks (TensorFlow, PyTorch etc.)
MLOps experience
Nice to have:
Familiarity with Git or other Version Control Systems
Computer Vision library exposure
Understanding of Big Data technologies (Hadoop, Spark etc.)
Experience with Cloud platforms (AWS, GCP or Azure)
This is a fully remote role, but may require very occasional travel (once a month or less) to their Bedford office. Salary up to £75,000. The client is unable to provide sponsorship for this position and you must be located in the UK to apply. If you are interested, please apply. Contact Danielle Blake on OR for more information.
Understanding Recruitment is passionate about equity, diversity and inclusion. We seek individuals from the widest talent pool and encourage underrepresented talent to apply for vacancies with us. We are committed to recruitment processes that are fair for all, regardless of background and personal characteristics.
Apr 27, 2024
Full time
Python Engineer
Python Engineer - Hybrid UK - £65000
We are helping an innovative tech business scale their embedded software team. Due to continued growth and demand for their products, they now urgently need a Python Engineer to join them ASAP. This role would suit a Python Engineer who has a bias towards front end (React) and wants to heavily influence the future direction of the team. This role is fully remote within the UK.
To be successful, the ideal Python Engineer candidate will have:
Strong background in Python for Machine Learning or AI
Strong knowledge of PyTorch, Hugging Face, TensorFlow
Experience of MLOps would be a benefit
Experience of working in a small but growing team.
What is in it for you? As a talented Python Engineer you can expect:
Great salary - up to £65k base, commission and package (neg for the right person)
Flexible working
An opportunity to work with some of the brightest minds in the tech sector
If you are an ambitious Python Engineer, hit apply and we will do the rest. Please apply with your CV and we will be in touch for a confidential chat.
Noa Recruitment specialise in helping Software and Web Professionals and technical talent find great careers. If this role doesn't sound like you, but you know a great person who might be interested, then please do share these details with them.
Apr 25, 2024
Full time
Job description
Site Name: London The Stanley Building
Posted Date: Apr
At GSK, we want to supercharge our data capability to better understand our patients and accelerate our ability to discover vaccines and medicines. The Onyx Research Data Platform organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward:
Building a next-generation, metadata- and automation-driven data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real-time
Onyx Product Management is at the heart of our mission, ensuring that everything from our infrastructure, to platforms, to end-user-facing data assets and environments is designed to maximize our impact on R&D. The Product Management team partners with R&D stakeholders and Onyx leadership to develop a strategic roadmap for all customer-facing aspects of Onyx, including data assets, ontology, Knowledge Graph / semantic search, data / computing / analysis platforms, and data-powered applications.
We are seeking a highly skilled and experienced Manager for our Computing Platforms Products. In this role, you will be responsible for developing the product strategy of our Computing Platform to meet customer needs. You will partner closely with Onyx's organizations, including AI/ML, a diversity of R&D teams utilizing data to accelerate drug discovery (genomic sciences, computational biology, imaging, and computational chemistry, to name a few), along with the Onyx portfolio management and engineering function heads, to deliver industry-leading solutions that power R&D workloads. You will drive the product roadmap, guide product development initiatives, and ensure the successful launch and adoption of our Compute platform, including the migration of existing GSK applications to the platform. Together, you will facilitate joint planning and execution of the product roadmap, ensuring a balance between strategic development and customer-facing deliverables. You will also play a key role in devising, tracking, and publicizing metrics that measure the impact and performance of Onyx Compute Platform Products. You will be responsible for understanding the business areas using Onyx and the Platform's capabilities, translating customer needs into requirements aligned with standard frameworks such as ontologies and engineering pipelines, and ensuring our R&D teams receive the solutions they need to succeed.
In this role you will:
Product Strategy: Develop and execute a comprehensive product strategy for our AI/ML compute platform, aligning with Onyx's overall goals and objectives.
Roadmap Development: Define and prioritize features, enhancements, and functionalities for the platform based on user analysis, customer feedback, and business requirements.
Cross-functional Collaboration: Collaborate closely with engineering, AI/ML, and portfolio/program teams to ensure successful product development and deployment.
Stakeholder Engagement: Collaborate with customers, partners, and internal stakeholders to understand their needs, gather feedback, and incorporate it into product planning and development processes.
Product Launch: Plan and oversee product launches, ensuring effective communication, documentation, and training to drive product adoption and success.
Performance Measurement: Define key product metrics, establish monitoring systems, and regularly evaluate and report on the performance and success of the compute platform.
Product Ambassador: Serve as an ambassador of the compute platform, effectively communicating its value and benefits to GSK Research and Development leadership and identifying potential customers.
Industry Expertise: Stay up to date with the latest advancements and trends in AI, machine learning, and compute platforms, applying industry knowledge to drive innovation and competitive advantage.
Why you?
Qualifications & Skills:
We are looking for professionals with these required skills to achieve our goals:
Experience of cloud computing management for scientific computing, data science, and/or artificial intelligence model training with a major cloud provider (AWS, Google Cloud, Azure, etc.)
Strong relevant experience in Data Science, Scientific Computing, Machine Learning/AI, Computer Science, Platform Engineering, or a related discipline.
Excellent communication, collaboration, and stakeholder management skills.
Strong leadership abilities and a self-driven, proactive approach.
Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively.
Preferred Qualifications & Skills:
If you have the following characteristics, it would be a plus:
Technical Knowledge: Experience with and strong understanding of on-prem and cloud computing and software development practices; familiarity with MLOps and distributed computing is highly desirable. Experience with containers and virtual machines, including Kubernetes, Slurm, or other orchestration tools. Knowledge of modern infrastructure, including Infrastructure-as-Code tools (e.g. Terraform, Ansible). Familiarity with software engineering ways of working and engagement models. Strong proficiency in utilizing various product management tools, including Jira and Confluence. Proven track record of managing developer platforms, tools, and services. Prior product management experience with an enterprise AI/ML platform is strongly preferred. Experience with bioinformatics/genomics databases, biological datasets, or Pharma R&D is a plus, but not required.
Strategic Thinker: Proven track record in developing and executing product strategies that drive business growth and customer satisfaction.
Stakeholder Skills: Demonstrated ability to lead cross-functional teams, set clear objectives, and foster a collaborative and innovative work environment. Can lead without authority.
Customer Focus: A customer-centric mindset with a deep understanding of customer needs and the ability to translate them into effective product solutions.
Analytical and Data-Driven: Strong analytical skills with the ability to gather and interpret data, perform market research, and make data-driven decisions.
Excellent Communication: Exceptional written and verbal communication skills, with the ability to effectively present complex ideas and concepts to both technical and non-technical audiences.
Adaptability: Thrives in a fast-paced, dynamic environment and can adapt quickly to changing priorities and business needs.
Closing Date for Applications: Monday 6th May 2024 (COB)
Please take a copy of the Job Description, as this will not be available after closure of the advert. When applying for this role, please use the 'cover letter' of the online application or your CV to describe how you meet the competencies for this role, as outlined in the job requirements above. The information that you have provided in your cover letter and CV will be used to assess your application.
During the course of your application, you will be requested to complete voluntary information which will be used in monitoring the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application / selection process to enable you to demonstrate your ability to perform the job requirements, please contact . This will help us to understand any modifications we may need to make to support you throughout our selection process.
Why GSK?
Uniting science, technology and talent to get ahead of disease together. GSK is a global biopharma company with a special purpose - to unite science, technology and talent to get ahead of disease together - so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns - as an organisation where people can thrive. We prevent and treat disease with vaccines, specialty and general medicines. We focus on the science of the immune system and the use of new platform and data technologies, investing in four core therapeutic areas (infectious diseases, HIV, respiratory/immunology and oncology). Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it's also about making GSK a place where people can thrive. We want GSK to be a place where people feel inspired, encouraged and challenged to be the best they can be. A place where they can be themselves - feeling welcome, valued, and included. Where they can keep growing and look after their wellbeing. So, if you share our ambition .
Apr 25, 2024
Full time
During the course of your application, you will be requested to complete voluntary information which will be used in monitoring the effectiveness of our equality and diversity policies. Your information will be treated as confidential and will not be used in any part of the selection process. If you require a reasonable adjustment to the application / selection process to enable you to demonstrate your ability to perform the job requirements, please contact . This will help us to understand any modifications we may need to make to support you throughout our selection process. Why GSK? Uniting science, technology and talent to get ahead of disease together. GSK is a global biopharma company with a special purpose - to unite science, technology and talent to get ahead of disease together - so we can positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns - as an organisation where people can thrive. We prevent and treat disease with vaccines, specialty and general medicines. We focus on the science of the immune system and the use of new platform and data technologies, investing in four core therapeutic areas (infectious diseases, HIV, respiratory/ immunology and oncology). Our success absolutely depends on our people. While getting ahead of disease together is about our ambition for patients and shareholders, it's also about making GSK a place where people can thrive. We want GSK to be a place where people feel inspired, encouraged and challenged to be the best they can be. A place where they can be themselves - feeling welcome, valued, and included. Where they can keep growing and look after their wellbeing. So, if you share our ambition . click apply for full job details
Machine Learning Engineer - Edinburgh (remote) - £50,000-£65,000
About the Company: Our client is a visionary network intelligence company at the forefront of developing a cutting-edge deep traffic analysis platform for mobile network operators. Their innovative solutions help reduce operational costs, optimize energy consumption, and enhance user experiences. Recently recognized as an AI Visionary and one of the top 100 AI companies in 2022, this startup is driving transformative change in the telecom industry.
Role Overview: We are seeking an enthusiastic Machine Learning Engineer to join our team and work on designing, training, testing, and integrating industry-grade machine learning models for deep mobile network traffic analysis. You will expand our portfolio of neural network architectures for traffic forecasting, anomaly prediction, and decomposition, collaborating with other ML engineers and receiving guidance from senior leadership.
Key Responsibilities for a Machine Learning Engineer:
Develop and debug code in Python, utilizing deep learning frameworks like TensorFlow 2 and Keras
Perform in-depth data analysis tasks and design new neural architectures
Participate in code reviews, maintain codebase quality, and ensure feature sustainability
Investigate problems and devise solutions for large data sets
Prototype implementations of new architectures to solve specific challenges
Document results in design documents and technical reports
Requirements for a Machine Learning Engineer:
Strong mathematical/analytical skills and a technical background in mobile networks
Deep understanding of machine learning concepts, techniques, and optimization algorithms
Experience with data visualization (matplotlib or similar)
Proficiency in Git workflow and best coding practices
Familiarity with cloud computing concepts
Ability to read and summarize research papers
Experience following a research methodology and creating experiments
MSc or PhD in Artificial Intelligence, Computer Science, or equivalent
Excellent communication and interpersonal skills
Nice to Have:
Experience with other AI platforms, especially for reinforcement learning
Knowledge of machine learning lifecycle management (MLOps)
If you are a driven and creative professional passionate about developing innovative AI solutions that revolutionize the telecom industry, click apply! McGregor Boyall is an equal opportunity employer and does not discriminate on any grounds.
Apr 25, 2024
Full time
SThree is delighted to announce that we are currently accepting applications for an experienced AI Engineer. This position presents an excellent opportunity to work with a global audience of stakeholders, and it offers the potential for growth within an international company. As an AI Engineer specialising in Azure Services, you will be responsible for designing, implementing, and maintaining AI solutions within our organisation. Leveraging the Azure platform, including Azure OpenAI, Azure Vision and other Azure AI services, you will develop scalable, efficient, and effective AI models and systems to address business challenges, enhance decision-making, and drive innovation. Collaboration with cross-functional teams to integrate AI capabilities into our products and services will be key.
About us
SThree is the global STEM-specialist talent partner that connects sought-after specialists in life sciences, technology, engineering and mathematics with innovative organisations across the world. We are the number one destination for talent in the best STEM markets: recruiting highly skilled professionals and discovering life-changing jobs for the unsung heroes who will positively shape our future.
What are the day-to-day tasks?
Design and develop AI models and solutions using Azure OpenAI, Azure Machine Learning, and Azure Cognitive Services to address specific business challenges.
Implement and maintain scalable and efficient AI systems, ensuring they meet business requirements and performance benchmarks.
Collaborate with business analysts, scientists, and IT teams to integrate AI solutions into existing systems and workflows.
Stay abreast of advancements in AI, machine learning, and Azure services, incorporating new technologies and methodologies to continually improve solution offerings.
Provide expertise and guidance on AI best practices, contributing to the organisation's AI strategy and innovation efforts.
Conduct data analysis and feature engineering to prepare data for use in AI models, utilising Azure Data Lake.
Develop robust testing and validation processes to ensure the accuracy and reliability of AI models.
Ensure that operational issues are identified, recorded, monitored and resolved.
Conduct investigations of significant operational outages and provide recommendations for problem mitigation.
Provide appropriate status and other reports to specialists, users and managers.
Align all operations procedures to service expectations, security requirements and other quality standards.
Ensure that operational procedures are fit for purpose and kept up to date.
Oversee the planning, installation, maintenance and acceptance of new and updated components and services.
Define security procedures to be followed, and delegate tasks.
What skills and knowledge are we looking for?
Programming Skills: Proficiency in programming languages such as Python, C#, or Java, with a deep understanding of software development principles.
Experience with Azure AI solutions, including Azure OpenAI Service, Azure Cognitive Services, and Azure Machine Learning. Familiarity with Azure Databricks is desirable.
Solid background in machine learning algorithms, data preprocessing, feature engineering, and model evaluation. Experience with deep learning frameworks like TensorFlow or PyTorch is desirable.
Proficiency in handling large datasets; experience with Azure Data Factory, Azure SQL Database, and Cosmos DB.
Understanding of CI/CD pipelines, containerisation (Docker, Kubernetes), and implementing MLOps practices using Azure DevOps.
Azure Cloud services relevant to AI, such as Azure Kubernetes Service (AKS), Azure GPU VMs, and Azure networking and security services tailored for AI applications.
Qualifications: Degree in computer science/software engineering and/or 5+ years of equivalent work experience within a cloud environment. Cloud certifications are desirable. Qualifications such as the following would be advantageous, though not necessary: Microsoft Azure AI Engineer Fundamentals/Associate, Microsoft Azure Data Engineer, Microsoft Data Scientist Associate.
Benefits for our U.K. teams include:
Choice to work flexibly from home and the office, in line with our hybrid working principles
Bonus linked to company and personal performance
Generous 28 days holiday, plus public holidays
Annual leave purchase scheme
Five days paid Caregiver/Dependant leave per annum
Five paid days off per year for volunteering
Private healthcare, discounted dental insurance and health care cashback scheme
Opportunity to participate in the company share scheme
Access to a range of retail discounts and savings
What we stand for
We create community and deliver change that transforms the future for everyone. With this in mind, we're committed to ensuring for our colleagues, candidates and communities that all processes are equitable and everyone is treated with fairness and dignity, where everyone belongs, is valued and is connected. If you need any assistance or reasonable adjustments in submitting your application, please let us know, and we'll be happy to help.
Apr 24, 2024
Full time
Xpertise is seeking two talented Machine Learning Engineers to join our esteemed team in Birmingham. As part of our growing engineering division, you will play a pivotal role in designing, implementing, and optimizing machine learning models and data pipelines. With a strong emphasis on AWS technologies and MLOps practices, you'll have the opportunity to contribute to the development of scalable, production-grade solutions that drive business value.
Key details:
Salary: £55,000-£95,000 (Mid-Lead); experienced contractors considered at a rate of £400.00 per day (Outside IR35)
Benefits: 10-25% bonus + healthcare + 10% pension
Location: Birmingham; can be remote-based, hybrid working or office-based
Key experience desired / what you will learn:
Experience developing, deploying, and maintaining machine learning models in production environments.
Strong understanding of AWS cloud services, especially in building and managing data pipelines and machine learning workflows: S3, Redshift, Lambda, Glue, EMR, EKS (Kubernetes).
Familiarity with MLOps/DevOps concepts and practices, including version control, CI/CD, and model monitoring.
Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy).
Experience with distributed computing frameworks like Apache Spark is a plus; Airflow would be a bonus.
Role overview: If you're looking to work with a team of ambitious software engineers and talented senior leaders, all while working with the latest data, AI and cloud technologies, then this one's for you. They have big plans to disrupt the industry with this machine learning work, so it's a great time to join. Interested? Please apply with your CV and/or message Billy Hall for further details. Xpertise acts as an employment agency.
Apr 24, 2024
Full time
Xpertise Recruitment Ltd
Newcastle Upon Tyne, Tyne And Wear
Xpertise is seeking two talented Machine Learning Engineers to join our esteemed team. As part of our growing engineering division, you will play a pivotal role in designing, implementing, and optimizing machine learning models and data pipelines. With a strong emphasis on AWS technologies and MLOps practices, you'll have the opportunity to contribute to the development of scalable, production-grade solutions that drive business value.
Key details:
Salary: £55,000-£95,000 (Mid-Lead); experienced contractors considered at a rate of £400.00 per day (Outside IR35)
Benefits: 10-25% bonus + healthcare + 10% pension
Location: Newcastle or Birmingham; can be remote-based, hybrid working or office-based
Key experience desired / what you will learn:
Experience developing, deploying, and maintaining machine learning models in production environments.
Strong understanding of AWS cloud services, especially in building and managing data pipelines and machine learning workflows: S3, Redshift, Lambda, Glue, EMR, EKS (Kubernetes).
Familiarity with MLOps/DevOps concepts and practices, including version control, CI/CD, and model monitoring.
Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy).
Experience with distributed computing frameworks like Apache Spark is a plus; Airflow would be a bonus.
Role overview: If you're looking to work with a team of ambitious software engineers and talented senior leaders, all while working with the latest data, AI and cloud technologies, then this one's for you. They have big plans to disrupt the industry with this machine learning work, so it's a great time to join. Interested? Please apply with your CV and/or message Billy Hall for further details. Xpertise acts as an employment agency.
Apr 24, 2024
Full time
Senior Machine Learning Engineer - Remote UK - £75,000
We are helping an innovative tech business scale their ML software team. Due to continued growth and demand for their products, they now urgently need a Senior Python Machine Learning Engineer to join them ASAP. This role would suit a Python Machine Learning Engineer who has a strong background in Machine Learning and ideally MLOps. This role is fully remote within the UK, but if you do want to use their office they are based in Milton Keynes.
To be successful, the ideal Python Machine Learning Engineer candidate will have:
Strong background in Python for Machine Learning or AI
Strong knowledge of PyTorch, Hugging Face, TensorFlow
Experience of MLOps would be a benefit
Experience of working in a small but growing team
What is in it for you? As a talented Python Machine Learning Engineer you can expect:
Great salary - up to £75k base plus commission and package (negotiable for the right person)
Flexible working
An opportunity to work with some of the brightest minds in the tech sector
If you are an ambitious Python Machine Learning Engineer, hit apply and we will do the rest. Please apply with your CV and we will be in touch for a confidential chat. Noa Recruitment specialise in helping Software and Web Professionals and technical talent find great careers. If this role doesn't sound like you, but you know a great person who might be interested, then please do share these details with them.
Apr 23, 2024
Full time
My client is seeking a Senior Data Scientist to join the team on a permanent basis, specialising in ontology, knowledge engineering, knowledge graphs, and more. This role is not for the faint-hearted but for those energised by challenges, inspired by innovation, and driven by real-world impact. The role can pay up to £80,000 per annum for the right person, so please click apply if you feel the below suits your skill set:
Qualifications:
Education: MSc/PhD in Data Science, Computer Science, or a related field
Experience: 5-7 years of hands-on experience in the industry
Skills:
Deep knowledge of First-Order Logic, Description Logic, and the Web Ontology Language (OWL)
Proficiency in graph neural networks (GNN), network science, and semantic networks
Mastery of task-specific finetuning, including data extraction, classification, and generation
A comprehensive understanding of traditional statistics, Machine Learning (ML), and multi-objective optimisation techniques
Excellent communication and written skills
Passion for continuous learning and collaboration as a team player
Tools:
Python: structured workflows using environments, Conda, Git, GitFlow, etc.
Databases: traditional (SQL) and graph databases (openCypher, Gremlin, SPARQL)
Technologies: MLOps/LLMOps, Protege, cloud environments (Azure/AWS)
Aug 15, 2023
Full time
Job Introduction BBC R&D has recently established an Automation Applied Research Area focussed on the use of Machine Learning across the BBC. Automation works closely with other BBC R&D Applied Research Areas, BBC Product and Technology Groups and senior business stakeholders across the BBC to accelerate Machine Learning based innovation. Reporting to the Head of Automation, this role will lead a team of experts exploring the ML platforms, tools, performance, and sustainability that will underpin the BBC's approach to Machine Learning innovation. It will ensure that best practice and correct technology choices are downstreamed into R&D ML applications, as well as supporting the wider BBC in making the right strategic decisions for its future ML technology. BBC R&D has five applied research areas focussed on Audiences, Automation, Distribution, Infrastructure and Production, which are looking to solve some of the most interesting challenges in Media and Broadcasting; as well as our Commercial, Partnerships & Engagement team, who ensure we're collaborating with the right external partners and optimising commercial returns through the exploitation of our Intellectual Property and grant funding. Our work supports the BBC's current ambition as well as informing future strategy. If you're excited by the prospect of working in an innovative environment with smart and supportive colleagues, then BBC R&D is the place for you. Role Responsibility This is a hands-on role. Your key responsibilities will be: Build and lead a team of ML engineers to develop an infrastructure to manage the ML lifecycle through experimentation, deployment, and testing. Own the Automation MLOps strategy, roadmap, and backlog. Provide leadership and guidance on the delivery of ML models from prototypes to production; mentor and coach team members on ML engineering best practices; work alongside researchers to enable the BBC to benefit more rapidly from fundamental ML research.
Contribute to the design of ML systems and infrastructure to shape how ML is used across the BBC. Develop relationships with pan-BBC and external contributors and stakeholders. You will need to bring to life long-term ambitions to secure the required support and buy-in for tangible and intangible benefits and outcomes. Focus on ensuring our ML technology delivers on performance, cost and sustainability goals and is supportive of the BBC's responsible and ethical ML objectives. Work with our Technology Strategy and Governance team to identify and communicate the strategic investment decisions required to mature the BBC's ML technology in line with business needs. Are you the right candidate? Solid understanding of machine learning concepts and algorithms Experience deploying machine learning solutions Expert knowledge of Python programming and machine learning libraries (Scikit-learn, TensorFlow, Keras, PyTorch, MXNet, etc.) Experience implementing ML automation, MLOps (scalable deployment practices aimed at deploying and maintaining machine learning models in production reliably and efficiently) and related tools (e.g., MLflow, Kubeflow, Airflow, SageMaker) Experience working in accordance with DevOps principles, and with industry deployment best practices using CI/CD tools and infrastructure as code (e.g., Docker, Kubernetes, Terraform) Experience in at least one cloud platform (e.g., AWS, GCP, Azure) and associated machine learning services, e.g., Amazon SageMaker, Azure ML, Databricks. Package Description Band: E Contract type: Permanent - Full time Location: UK wide We're happy to discuss flexible working. Please indicate your choice under the flexible working question in the application. There is no obligation to raise this at the application stage, but if you wish to do so, you are welcome to. Flexible working will be part of the discussion at offer stage.
Excellent career progression - the BBC offers great opportunities for employees to seek new challenges and work in different areas of the organisation. Unrivalled training and development opportunities - our in-house Academy hosts a wide range of internal and external courses and certification. Benefits - we offer a competitive salary package, a flexible 35-hour working week for work-life balance and 26 days' annual leave (1 of which is a corporation day) with the option to buy an extra 5 days, a defined pension scheme and discounted dental, health care, gym and much more. The situation regarding the coronavirus outbreak is developing quickly and the BBC is keen to continue to ensure the safety and wellbeing of people across the BBC, while continuing to protect our services. To reduce the risk, access to BBC buildings is limited to those essential to our broadcast output. From Wednesday 18th March until further notice, all assessments and interviews will be conducted remotely. For more information go to About the BBC We don't focus simply on what we do - we also care how we do it. Our values and the way we behave are important to us. Please make sure you've read about our values and behaviours in the document attached below. Diversity matters at the BBC. We have a working environment where we value and respect every individual's unique contribution, enabling all of our employees to thrive and achieve their full potential.
We want to attract the broadest range of talented people to be part of the BBC - whether that's to contribute to our programming or our wide range of non-production roles. The more diverse our workforce, the better able we are to respond to and reflect our audiences in all their diversity. We are committed to equality of opportunity and welcome applications from individuals, regardless of age, gender, ethnicity, disability, sexual orientation, gender identity, socio-economic background, religion and/or belief. We will consider flexible working requests for all roles, unless operational requirements prevent otherwise. To find out more about Diversity and Inclusion at the BBC, please click here
Sep 23, 2022
Full time
Robert Half have an exciting opportunity with a fast-growing technology company in search of an MLOps Engineer to work alongside an established product and data team, creating robust frameworks and deploying machine learning models into production. My client is offering a great reward package, including: £60,000 to £80,000, dependent on experience; remote working (offices in London and Edinburgh); flexible working; an annual success-sharing bonus scheme; no restriction on holiday allowance - you are trusted to manage your workload and time; a rewards scheme, pension scheme, cycle-to-work scheme and much more. The role includes: SageMaker machine learning development pipelines; implementing monitoring and alerting for production model performance and accuracy; model optimisation; delivering reusable software and modelling artefacts; continuous model training and deployment; collaborating with product managers, data scientists and engineering teams to integrate machine learning capabilities within the wider product offering. The ideal candidate will have: a Bachelor's degree or equivalent practical experience; industry experience with machine learning pipelines; experience with Kubernetes (desirable); experience with one or more of Hive, Kafka, Impala or HDFS (also desirable); working knowledge of AWS (SageMaker or CloudFormation desired); proficiency with Python and experience with ML frameworks such as PyTorch and TensorFlow; the ability to forge strong relationships with clients and team members; a proven ability to deliver end-to-end solutions. If you feel this role is for you, please apply and I'll be in touch with you in due course. Robert Half Ltd acts as an employment business for temporary positions and an employment agency for permanent positions. Robert Half is committed to equal opportunity and diversity. Suitable candidates with equivalent qualifications and more or less experience can apply.
Where rates of pay or salary ranges are detailed, these are dependent upon your experience, qualifications or training. If you wish to apply for this position, please read our Privacy Notice, which details how we may use, process, store and disclose your Personal Information: roberthalf.co.uk/privacy notice.
Nov 05, 2021
Full time
Job description We currently have an opportunity for an Enterprise Data Architect to join our IT team in London. The Enterprise Data Architect will ensure A&O maintains a focus on an enterprise data governance framework for data policies, standards, processes and practices across A&O, to achieve the level of consistency and quality required to meet A&O's business needs. The role will be the custodian of data, setting the vision for A&O's use of data, and will manage A&O's data catalogue to improve the quality and value of core data assets, respond to regulatory requirements and support strategic functional requirements. The role will manage the way that people and programs engage with data, ensuring that data can be turned into valuable insights that inform business decisions. The role holder will consistently communicate the business benefit of data standards and will champion and govern those standards across the firm. The role holder will ensure internal procedures keep pace with evolving data regulation and compliance. Role and responsibilities Manage and govern a unified view of A&O data, its data lineage and provenance Govern and manage the A&O data catalogue across all data assets.
Ensure data is discoverable, well understood, and trusted Drive innovation and growth through the use of data, unlocking data for insightful business decisions Help establish best practice, rules and ownership of A&O's taxonomy and ontology Build and maintain good working relationships across A&O's Data Stewards to ensure internal stakeholders and leaders are informed and aligned Create a Data Governance and Quality function, including processes and tools to achieve A&O's data objectives Define a proactive approach to the management of data models, definitions, data governance, ethics and data processing rules to provide timely, appropriate, accurate and up-to-date information at the point of need Actively use the trends in data quality and data governance to drive positive change in people, process, technology and governance Develop and be responsible for the standards, policies and procedures for the ongoing implementation of data governance Ensure alignment between data governance best practice, Data Privacy procedures and requirements, and the IT Information Security strategy Work closely with and set direction for the Trusted Data Platform team, staying close to opportunities and challenges that exist within A&O, and provide guidance on new products, approaches and supplier relationships that could impact data collection, processing and analytics Key requirements Business Competencies Aptitude for and experience of creating, managing, motivating and developing teams Commercial acumen, including an understanding of the overall picture of how the IT service costs and adds value to the business Excellent communication, interpersonal and influencing skills, including the ability to communicate on both technical and business levels Excellent customer-facing skills with a good grasp of key drivers and requirements within the Business.
High level of personal credibility, impact and influence, with a proven ability to work effectively and persuasively at all levels of the business Knowledge & Experience Knowledge and awareness of business and technology issues related to the management of enterprise-wide data Expert-level data modelling experience, including a deep understanding of relational, taxonomical and ontological modelling approaches Strong recent experience of developing and managing an enterprise data catalogue Proven experience of developing and implementing a Data Governance Framework and a Data Quality Management service Keen eye for detail with a genuine interest in understanding the journey data takes throughout A&O Demonstrable experience in building, delivering and managing detailed data quality measurement frameworks Comfortable presenting complex data models, flows and relationships to non-data peers and colleagues An expert in information management practices, including information lifecycle management, data profiling, master data management, data audits and requirements gathering Clear knowledge and experience relating to the application of data governance, data privacy and data ethics principles to support and drive data strategy Evidence of establishing governance processes, with engaged teams, leading and implementing enterprise-wide data governance and demonstrable improvements in data quality Preferable: Knowledge and experience relating to MLOps, testing and quality management Preferable: Experienced in managing, versioning and automating data and systems Preferable: Experience of unstructured and semi-structured data and associated processing approaches; NLP and Machine Learning experience a distinct advantage.
Preferable: Experienced at using relevant frameworks such as GDPR and other data privacy regulations to drive data governance ethics Preferable: Experience using automated approaches to manage and support data privacy compliance Is a collaborative team player fostering strong working relationships, with strong cultural awareness Strong leadership and influencing skills; interacting with senior stakeholders; highly personable Strong ability to extract information by questioning, active listening and interviewing Results orientated, ensuring change and delivery project metrics are meaningful and supported with robust business data Analytical, with strong numeracy and good statistical skills Experience of working in Agile project environments Ability to work with technical (developers, data engineers, data scientists) and non-technical staff Allen & Overy LLP is committed to being an inclusive employer and we are happy to consider flexible working arrangements. Additional information - External It's Time Allen & Overy is a leading global law firm operating in over thirty countries. By turning our insight, technology and talent into ground-breaking solutions, we've earned a place at the forefront of our industry. Our lawyers are leaders in their field - and the same goes for our support teams. Ambitious, driven and open to fresh perspectives, we find innovative new ways to deliver our services and maintain our reputation for excellence in all that we do. The nature of law is changing, and that change brings unique opportunities. With our collaborative working culture, flexibility, and a commitment to your progress, we build rewarding careers. By joining our global team, you are supported by colleagues from around the world. If you're ready for a new challenge, it's time to seize the opportunity.
Sep 15, 2021
Full time