Senior Marketing Data Analytics Analyst (building MMMs with Python/R)
West London based, 4 days p/w onsite, 1 wfh. 6-month contract, start ASAP (weeks' notice). (Apply online only.) Per-day rate, inside IR35, paid via umbrella.

Senior Marketing Analyst - Senior Marketing Data Analytics Analyst (building MMMs). Looking for someone with a minimum of 4 years' experience in marketing data analyst roles, including at least 2 years' experience building Marketing Mix Models (MMMs) using regression analysis, Python/R and SQL, as well as data visualisation tools such as Looker/Tableau. Additional experience in the media subscription sector (streaming video services such as VoD/AVoD, or other subscription services) would be a bonus; experience at a consultancy providing MMM would also be of interest.

The team is specifically focused on programming strategy, marketing, customer acquisition and retention, partnerships, research, and analytics. They are looking for an enthusiastic Senior Analyst with expertise in marketing analytics to join the Analytics & Insights team. This role will be pivotal in evaluating strategic marketing initiatives to drive acquisition and improve retention for the subscription business, and will draw on excellent technical, storytelling and communication skills, as well as the ability to work cross-functionally to identify business needs and deliver marketing insights across EMEA.

What You Will Do:
- Drive the development, design and implementation of Marketing Mix Models (MMM), experimentation and geo testing to understand the incremental impact of marketing investments (a brief modelling sketch follows below).
- Collaborate closely with cross-functional teams including Marketing, Finance and Research to identify key measurement opportunities across markets and build marketing measurement roadmaps.
- Partner with internal analytics teams globally on model alignment and ongoing development of marketing measurement methodologies.
- Provide impactful recommendations and strategic guidance on media mix optimisations to influence budget allocation for future marketing campaigns.
- Liaise closely with third-party media owners and stay across past and upcoming media plans.

Required Experience, Qualifications & Skills:
- Minimum 4 years' experience in marketing data analyst roles, with at least 2 years building Marketing Mix Models (MMMs) using regression analysis, Python/R and SQL, plus data visualisation tools such as Looker/Tableau. Experience in the media subscription sector (streaming video such as VoD/AVoD, or other subscription services), or at a consultancy providing MMM, is a bonus.
- Experience as a Marketing Analyst, Data Scientist or similar.
- Proven experience and strong knowledge of regression analysis; experience building multiple MMMs over at least 2 years, plus A/B tests and other statistical modelling techniques.
- Proficiency in Python and/or R.
- Excellent proficiency in SQL and experience working with data warehousing technologies (e.g. Snowflake, Databricks).
- Ability to use data visualisation tools such as Looker/Tableau.
- Ability to convert technical outputs into succinct and compelling presentations with a clear narrative and actionable recommendations.
- Strong clarity of communication; able to present to stakeholders.
- Excellent stakeholder management, with proven experience of effectively engaging with senior stakeholders.

Everybody is welcome - Diversity and Inclusion Statement.
PCR Digital "At PCR Digital, we are committed to ensuring that diversity, equity and inclusion play a role at all stages of our recruitment - it is important to us that our own company culture and the culture of our network is as varied and supportive as possible. We love people (it's why we do what we do), so, regardless of background, we welcome you to work with us or apply to any of our jobs if you feel that they are right for you." We also aim to ensure that our entire process is accessible. Please make us aware of any adjustments you may need throughout the selection, interview and general process and we will do all we can to ensure that any barriers are removed for you.
10/07/2025
Contractor
Job Description
Join the Chief Data & Analytics Office (CDAO) at JPMorgan Chase and be part of a team that accelerates the firm's data and analytics journey. We focus on ensuring data quality and security while leveraging insights to promote decision-making and support commercial goals through AI and machine learning. As an AI/ML Lead Software Engineer within the Chief Data & Analytics Office, you will become part of a mission to modernize compliance through scalable and explainable AI. We are building a system that answers the question "Can I use this data?" not with guesswork, but with prediction/classification, logic, proof, and intelligent automation. Our work sits at the intersection of applied machine learning, AI reasoning systems, and data governance. We are designing the triage layer of an intelligent decision engine that combines ML-driven classification, LLM-assisted parsing, and formal logic-based verification. This is an opportunity to tackle complex, ambiguous problems that touch every part of the firm's data ecosystem and to build ML solutions that actually make decisions.

Job Responsibilities:
- Architect and develop scalable Python-based systems that support ML-driven risk classification, tagging, and approval triage
- Integrate ML models into microservices and APIs for use within AI Judge workflows (a brief serving sketch follows below)
- Lead engineering design reviews, establish coding standards, and ensure system robustness and security
- Build and maintain feature pipelines and model-serving infrastructure using cloud-native tools
- Work closely with ML scientists, data engineers, and product managers to align on requirements and delivery timelines
- Drive engineering quality, CI/CD integration, observability, and unit testing for AI-enabled software components
- Mentor junior engineers and uphold engineering excellence across the team

Required Qualifications, Capabilities, and Skills:
- Master's degree in Computer Science, Software Engineering, or a related field
- 6+ years of experience as a backend or AI/ML software engineer
- Proficiency in Python with deep experience in building distributed and containerized services (e.g., Flask/FastAPI, Docker, Kubernetes)
- Strong understanding of ML deployment workflows, feature engineering, and serving architectures
- Experience building and deploying APIs and ML inference services in production
- Familiarity with ML model management, versioning, and performance monitoring
- Strong engineering fundamentals: data structures, system design, testing, and performance optimization
- Excellent communication and collaboration skills across technical and non-technical teams

Preferred Qualifications, Capabilities, and Skills:
- Experience with the AWS cloud stack (S3, SageMaker, Lambda, ECS, etc.)
- Experience working with structured data, tabular models, and metadata-driven platforms
- Experience with regulated data systems, enterprise controls, or secure data processing workflows
- Contributions to open-source ML or backend tooling frameworks

About Us
J.P. Morgan is a global leader in financial services, providing strategic advice and products to the world's most prominent corporations, governments, wealthy individuals and institutional investors. Our first-class business in a first-class way approach to serving clients drives everything we do. We strive to build trusted, long-term partnerships to help our clients achieve their business objectives. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success.
We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team J.P. Morgan's Commercial & Investment Bank is a global leader across banking, markets, securities services and payments. Corporations, governments and institutions throughout the world entrust us with their business in more than 100 countries. The Commercial & Investment Bank provides strategic advice, raises capital, manages risk and extends liquidity in markets around the world.
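As a rough illustration of the ML inference microservices this posting describes, here is a minimal FastAPI sketch. The endpoint name, request fields, toy feature vector and the joblib model path are all hypothetical; a production service would add authentication, input validation, model versioning and monitoring.

```python
# Minimal sketch of an ML risk-classification inference service (illustrative only).
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="risk-triage-service")
model = joblib.load("risk_classifier.joblib")   # assumed pre-trained scikit-learn pipeline

class DatasetRequest(BaseModel):
    description: str
    contains_pii: bool
    source_system: str

@app.post("/classify")
def classify(req: DatasetRequest) -> dict:
    """Return a predicted risk tier for a dataset-usage request."""
    features = [[len(req.description), int(req.contains_pii)]]  # toy feature vector
    label = model.predict(features)[0]
    return {"risk_tier": str(label)}

# Run locally with: uvicorn service:app --reload   (module name "service.py" assumed)
```

A service like this would typically be containerised with Docker and deployed behind the triage workflow the posting mentions.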
09/07/2025
Full time
Data Scientist - £80,000 - Hybrid - London

Company Overview: My client is a trusted partner for major clients across a range of data- and AI-driven industries such as banking, insurance, healthcare, retail, and more! With a global presence spanning continents and tens of thousands of professionals, there is no better place to grow and advance your career. Due to their commitment and motivation to stay at the top of their industry, there is constant investment in research and development to stay ahead of trends by working with, and developing their own, cutting-edge technology. My client believes their success stems from the calibre of people they employ and will do everything to support you and your development.

Role Overview: As a Data Scientist you will be hands-on in developing advanced statistical and machine learning models within the financial sector. Python is obviously a pre-requisite for this role, with experience required in data processing and model development. You will develop end-to-end models and use a combination of NLP and OCR-based extraction techniques (a brief sketch follows below). In addition to the technical expertise you will bring and develop, this is a consultant role: you will be working with key stakeholders in large financial institutions, so good communication is essential.

Requirements:
- Python expertise
- Supervised and unsupervised learning
- NLP and OCR experience
- Exposure to modern cloud platforms (e.g. Databricks, AWS)
- Ability to communicate technical concepts

Nice to Have:
- Financial services experience

Interviews are ongoing - don't miss your chance to secure the future of your career! Contact (email address removed) or (phone number removed).

Keywords: Data Science, Data Scientist, AI, ML, Random Forest, Databricks, SageMaker, Regression, Gradient Boosting, NLP, Palantir, Insurance, Banking
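For illustration, a minimal sketch of the OCR plus NLP extraction step mentioned above is shown below. It assumes pytesseract (with a local Tesseract install) and spaCy's en_core_web_sm model are available; the document path and the entity labels of interest are hypothetical.

```python
# Minimal OCR + NLP extraction sketch (illustrative only).
import pytesseract
import spacy
from PIL import Image

nlp = spacy.load("en_core_web_sm")  # assumed to be installed

def extract_parties_and_amounts(scan_path: str) -> dict:
    """OCR a scanned document, then pull organisations and monetary amounts from the text."""
    text = pytesseract.image_to_string(Image.open(scan_path))
    doc = nlp(text)
    return {
        "organisations": [ent.text for ent in doc.ents if ent.label_ == "ORG"],
        "amounts": [ent.text for ent in doc.ents if ent.label_ == "MONEY"],
    }

# Example (hypothetical file): extract_parties_and_amounts("statement_page_1.png")
```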
08/07/2025
Full time
Job Title: Machine Learning Engineer
Location: London (1 day per week onsite) - Flexible
Salary: £45,000 DOE + Benefits

Our Data Analytics business continues to grow and we are now looking for an experienced and technical Machine Learning (ML) Engineer to join one of our offices, with hybrid or remote UK working. This is an exciting role and would most likely suit someone with previous experience in a similar role where they have gained knowledge and experience of designing, building, optimising, deploying and managing business-critical machine learning models using Azure ML in production environments. You must have good technical knowledge of Python, SQL and CI/CD, and familiarity with Power BI.

A FTSE 250 company, they combine expertise and insight with advanced technology and analytics to address the needs of over 1,400 schemes and their sponsoring employers on an ongoing and project basis. We undertake administration for over one million members and provide advisory services to schemes and corporate sponsors in respect of schemes of all sizes, including 88 with assets over £1bn. We also provide wider-ranging support to insurance companies in the life and bulk annuities sector.

The Team
The client is a specialist and multi-disciplinary team consisting of actuaries, data scientists and developers. Our role in this mission is to pioneer advancements in the field of pensions and beyond, leveraging state-of-the-art technology to extract valuable and timely insights from data. This enables the consultant to better advise Trustees and Corporate clients on a wide range of actuarial-related areas.

The Role
As a Machine Learning Engineer you will:
- Model development. Work collaboratively with actuarial analysts to develop machine learning and statistical models to predict outcomes related to pension schemes, such as life expectancy, default risk, or investment returns. Identify appropriate machine learning algorithms and apply them to enhance predictions, automate decision-making processes, and improve client offerings.
- Machine learning operations. Be responsible for designing, deploying, maintaining and refining statistical and machine learning models using Azure ML. Optimise model performance and computational efficiency. Ensure that applications run smoothly and handle large-scale data efficiently. Implement and maintain monitoring of model drift, data-quality alerts, and scheduled re-training pipelines (a brief drift-check sketch follows below).
- Data management and preprocessing. Collect, clean and preprocess large datasets to facilitate analysis and model training. Implement data pipelines and ETL processes to ensure data availability and quality.
- Software development. Write clean, efficient and scalable code in Python. Utilise CI/CD practices for version control, testing and code review.
- Work closely with actuarial analysts, the actuarial modelling team (AMT) and other colleagues to integrate data science findings into practical advice and strategies.
- Stay abreast of new trends and technologies in data science and pensions to identify opportunities for innovation.
- Provide training and support to other team members on using machine learning tools and understanding analytical techniques.
- Interpret and explain machine learning concepts and findings to other members of the analytics team and to non-technical stakeholders.

Your Profile - Essential Criteria
- Previous experience in designing, building, optimising, deploying and managing business-critical machine learning models using Azure ML in production environments.
- Experience in data wrangling using Python, SQL and ADF.
- Experience in CI/CD, DevOps/MLOps and version control.
- Familiarity with data visualisation and reporting tools, ideally Power BI.
- Good written and verbal communication and interpersonal skills; ability to convey technical concepts to non-technical stakeholders.
- Experience in pensions or a similar regulated financial services industry is highly desirable.
- Experience working within a multidisciplinary team would be beneficial.

We offer an attractive reward package; typical benefits can include:
- Competitive salary
- Participation in annual discretionary bonus scheme
- 25 days holiday, plus flexibility to buy or sell holiday
- Flexible bank holidays
- Pension scheme with matching contribution structure
- Healthcare cash plan
- Flexible benefits scheme to support you in and out of work, helping you look after you and your family, covering Security & Protection, Health & Wellbeing, and Lifestyle
- Life assurance cover, four times basic salary
- Rewards (High Street discounts and savings from retailers and service providers, as well as offers available via phone)
- Employee Assistance Programme for you and your household
- Access to a digital GP service
- Paid volunteering day when participating in company-organised events
- Staff referral scheme when you introduce a friend

In Technology Group Ltd is acting as an Employment Agency in relation to this vacancy.
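As an illustration of the model-drift monitoring described above, here is a minimal sketch that compares a live feature distribution against its training baseline with a two-sample Kolmogorov-Smirnov test. The feature name, the 0.05 threshold and the stand-in data are assumptions; in this role such a check would more likely run inside a scheduled Azure ML pipeline with proper alerting.

```python
# Minimal model-drift check sketch (illustrative only).
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_values: np.ndarray, live_values: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live feature distribution differs significantly from training."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Hypothetical feature: age at retirement, training baseline vs. recent scoring data
baseline = np.random.default_rng(0).normal(65, 4, size=5_000)
live = np.random.default_rng(1).normal(67, 4, size=1_000)
if detect_drift(baseline, live):
    print("Drift detected: raise data-quality alert / trigger scheduled re-training")
```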
04/07/2025
Full time
C++ Developer - Investment Tools and Signals

Newton Colmore is looking for an experienced C++ developer with a strong interest in finance and technology to join a newly established investment team, developing cutting-edge tools and algorithms. You will be working closely with quantitative researchers, traders and fellow developers to create software that drives the company's investment strategies and delivers alpha across a broad spectrum of markets and instruments.

This is a key hire for this team, and your key responsibilities will include the development and maintenance of C++ applications for alpha signal generation and equity research, mixed in with building new tools for advanced data processing and multithreaded statistical analysis. You will also have the opportunity to help shape the future of this team as it grows.

In terms of what the client is ideally looking for, they are searching for strong C++ expertise, coupled with industry knowledge and a naturally curious mindset. The ideal industry experience would be from a hedge fund, proprietary trading firm or a more general fintech background. The team like to hire people who are coachable, and the ideal experience level for this role is 2 to 4 years; there is a degree of flexibility on this for the right candidate.

The company can offer tailored compensation packages, which include bonuses and a series of other benefits, and provide the opportunity to work on cutting-edge technology alongside some of the brightest minds in the industry. Your application will be confidential, and your details will only be shared with our client after our initial call, and only with your expressed permission, if you feel you are a strong fit.

Newton Colmore is a specialist recruitment consultancy; we search for and introduce world-class engineers, scientists, and developers to impactful companies, globally. Get in touch if you would like to know more.
04/07/2025
Full time
Site Name: UK - Hertfordshire - Stevenage; USA - Connecticut - Hartford; USA - Delaware - Dover; USA - Maryland - Rockville; USA - Massachusetts - Cambridge; USA - Massachusetts - Waltham; USA - New Jersey - Trenton; USA - Pennsylvania - Upper Providence
Posted Date: Jun 6 2022

The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. Design and implementation of data flows and data products which leverage internal and external data assets and tools to drive discovery and development is a key objective for the DSDE team within GSK's Pharmaceutical R&D organization.

There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust:
- Automation of end-to-end data flows: faster and reliable ingestion of high-throughput data in genetics, genomics and multi-omics, to extract value from investments in new technology (instrument to analysis-ready data in ...).
- Enabling governance by design of external and internal data: with engineered, practical solutions for controlled use and monitoring.
- Innovative disease-specific and domain-expert-specific data products: to enable computational scientists and their research unit collaborators to get faster to key insights, leading to faster biopharmaceutical development cycles.
- Supporting e2e code traceability and data provenance: increasing assurance of data integrity through automation and integration.
- Improving engineering efficiency: extensible, reusable, scalable, updateable, maintainable, virtualized, traceable data and code, driven by data engineering innovation and better resource utilization.

We are looking for experienced Senior DevOps Engineers to join our growing Data Ops team. A Senior DevOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas:
- Deliver declarative components for common data ingestion, transformation and publishing techniques (a minimal config-driven sketch follows below)
- Define and implement data governance aligned to modern standards
- Establish scalable, automated processes for data engineering teams across GSK
- Act as a thought leader and partner with wider DSDE data engineering teams to advise on implementation and best practices
- Cloud Infrastructure-as-Code
- Define service and flow orchestration
- Data as a configurable resource (including configuration-driven access to scientific data modelling tools)
- Observability (monitoring, alerting, logging, tracing, etc.)
- Enable quality engineering through KPIs, code coverage and quality checks
- Standardise GitOps/declarative software development lifecycle
- Audit as a service

Senior DevOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation. They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services are meeting customer demand and having an impact, and iterate to deliver and improve on those metrics in an agile fashion.

Successful Senior DevOps Engineers are developing expertise with the types of data and types of tools that are leveraged in the biomedical and scientific data engineering space, and have the following skills and experience (with significant depth in one or more of these areas):
- Demonstrable experience deploying robust modularised/container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem
- Significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, etc.), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow
- Programming in Python, Scala or Go
- Embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, etc.)
- Leveraging major cloud providers, both via Kubernetes and via vendor-specific services
- Authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT)
- Common distributed data tools (e.g. Spark, Hive)

The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.

Why you?
Basic Qualifications: We are looking for professionals with these required skills to achieve our goals:
- Master's degree in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 5 years' job experience (or PhD plus 3 years' job experience)
- Experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps / etc.)
- Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.)
- Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, optimizing against self-describing formats such as ORC or Parquet, etc.)
- Experience with search/indexing systems (e.g. Elasticsearch)
- Expertise with agile development in Python, Scala, Go, and/or C++
- Experience building reusable components on top of the CNCF ecosystem, including Kubernetes
- Metrics-first mindset
- Experience mentoring junior engineers into deep technical expertise

Preferred Qualifications: If you have the following characteristics, it would be a plus:
- Experience with agile software development
- Experience with building and designing a DevOps-first way of working
- Experience with building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem)

Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect, Integrity along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
- Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour and risk
- Commitment to delivering high-quality results, overcoming challenges, focusing on what matters, execution
- Continuously looking for opportunities to learn, build skills and share learning
- Sustaining energy and wellbeing
- Building strong relationships and collaboration, honest and open conversations
- Budgeting and cost consciousness

As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to be able to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good and keep growing their careers. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only).

We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore with our hiring team what the opportunities are. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us on (number removed) or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary.
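As an illustration of the "declarative components for data ingestion" idea above, here is a minimal config-driven sketch in Python/PySpark: a small YAML document declares source, transform and sink, and a generic runner executes it. The config schema, GCS paths and column names are hypothetical; a real implementation would add schema validation, lineage capture and orchestration.

```python
# Minimal declarative, config-driven ingestion sketch (illustrative only).
import yaml
from pyspark.sql import SparkSession, functions as F

CONFIG = yaml.safe_load("""
source:
  format: parquet
  path: "gs://raw-bucket/assay_results/"
transform:
  drop_nulls_in: [sample_id]
  add_ingest_date: true
sink:
  format: parquet
  path: "gs://curated-bucket/assay_results/"
  mode: overwrite
""")

def run_ingestion(cfg: dict) -> None:
    spark = SparkSession.builder.appName("declarative-ingest").getOrCreate()
    df = spark.read.format(cfg["source"]["format"]).load(cfg["source"]["path"])
    if cfg["transform"].get("drop_nulls_in"):
        df = df.dropna(subset=cfg["transform"]["drop_nulls_in"])   # basic data-quality rule
    if cfg["transform"].get("add_ingest_date"):
        df = df.withColumn("ingest_date", F.current_date())        # simple provenance column
    df.write.mode(cfg["sink"]["mode"]).format(cfg["sink"]["format"]).save(cfg["sink"]["path"])

run_ingestion(CONFIG)
```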
23/09/2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Waltham, USA - Pennsylvania - Upper Providence, Warren NJ Posted Date: Aug The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. TheData Framework and Opsorganization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. Achieving delivery of the right data to the right people at the right time needs design and implementation of data flows and data products which leverage internal and external data assets and tools to drive discovery and development is a key objective for the Data Science and Data Engineering (DS D E) team within GSK's Pharmaceutical R&D organisation . There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust: Automation of end-to-end data flows: Faster and reliable ingestion of high throughput data in genetics, genomics and multi-omics, to extract value of investments in new technology (instrument to analysis-ready data in Enabling governance by design of external and internal data: with engineered practical solutions for controlled use and monitoring Innovative disease-specific and domain-expert specific data products : to enable computational scientists and their research unit collaborators to get faster to key insights leading to faster biopharmaceutical development cycles. Supporting e2 e code traceability and data provenance: Increasing assurance of data integrity through automation, integration Improving engineering efficiency: Extensible, reusable, scalable,updateable,maintainable, virtualized traceable data and code would b e driven by data engineering innovation and better resource utilization. We are looking for an experienced Sr. Data Ops Engineer to join our growing Data Ops team. As a Sr. Data Ops Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizingbiomedical and scientificdata engineering, with demonstrable experience across the following areas : Deliver declarative components for common data ingestion, transformation and publishing techniques Define and implement data governance aligned to modern standards Establish scalable, automated processes for data engineering team s across GSK Thought leader and partner with wider DSDE data engineering teams to advise on implementation and best practices Cloud Infrastructure-as-Code D efine Service and Flow orchestration Data as a configurable resource(including configuration-driven access to scientific data modelling tools) Ob servabilty (monitoring, alerting, logging, tracing, ...) Enable quality engineering through KPIs and c ode coverage and quality checks Standardise GitOps /declarative software development lifecycle Audit as a service Sr. 
DataOpsEngineerstake full ownership of delivering high-performing, high-impactbiomedical and scientificdataopsproducts and services, froma description of apattern thatcustomer Data Engineers are trying touseall the way through tofinal delivery (and ongoing monitoring and operations)of a templated project and all associated automation. They arestandard-bearers for software engineering and quality coding practices within theteam andareexpected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project.Theydevise useful metrics for ensuring their services are meeting customer demand and having animpact anditerate to deliver and improve on those metrics in an agile fashion. A successfulSr.DataOpsEngineeris developing expertise with the types of data and types of tools that are leveraged in the biomedical and scientific data engineering space, andhas the following skills and experience(withsignificant depth in one or more of these areas): Demonstrable experience deploying robust modularised/ container based solutions to production (ideally GCP) and leveraging the Cloud NativeComputing Foundation (CNCF) ecosystem Significant depth in DevOps principles and tools ( e.g. GitOps , Jenkins, CircleCI , Azure DevOps, ...), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow P rogramming in Python. Scala or Go Embedding agile s oftware engineering ( task/issue management, testing, documentation, software development lifecycle, source control, ) Leveraging major cloud providers, both via Kubernetes or via vendor-specific services Authentication and Authorization flows and associated technologies ( e.g. OAuth2 + JWT) Common distributed data tools ( e.g. Spark, Hive) The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one. Why you? Basic Qualifications: Bachelors degree in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc, plus 7 years job experience or Masters degree with 5 Years of experience (or PhD plus 3 years job experience) Deep experience with DevOps tools and concepts ( e.g. Jira, GitLabs / Jenkins / CircleCI / Azure DevOps / ...) Excellent with common distributed data tools in a production setting (Spark, Kafka, etc) Experience with specialized data architecture ( e.g. optimizing physical layout for access patterns, including bloom filters, optimizing against self-describing formats such as ORC or Parquet, etc) Experience with search / indexing systems ( e.g. Elasticsearch) Deep expertise with agile development in Python, Scala, Go, and/or C++ Experience building reusable components on top of the CNCF ecosystem including Kubernetes Metrics-first mindset Experience mentoring junior engineers into deep technical expertise Preferred Qualifications: If you have the following characteristics, it would be a plus: Experience with agile software development Experience building and designing a DevOps-first way of working Demonstrated experience building reusable components on top of the CNCF ecosystem including Kubernetes (or similar ecosystem ) LI-GSK Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. 
These include Patient focus, Transparency, Respect, Integrity along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities: operating at pace and agile decision making - using evidence and applying judgement to balance pace, rigour and risk; committing to delivering high-quality results, overcoming challenges, focusing on what matters and executing; continuously looking for opportunities to learn, build skills and share learning; sustaining energy and wellbeing; building strong relationships and collaboration through honest and open conversations; and budgeting and cost consciousness.
As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to be able to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, feeling good and continuing to grow their careers. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to neurodiversity, race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only). We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore with our hiring team what the opportunities are. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary.
23/09/2022
Full time
Do you want to be a part of a creative, stimulating, and informal culture at a growing, cutting-edge business operating in the alternative finance space? A full-time position is immediately available to join a seed-round start-up rating agency for digital assets. Background Agio Ratings () aims to be the rating agency for the digital asset market. The firm will use risk measurement techniques developed and tailored for the digital asset class. Our mission is to empower our clients to evaluate counterparty risk and make high-quality investment decisions. Founded by a team of seasoned risk takers and data scientists, Agio intends to provide the best counterparty risk analysis available. By applying the rigour and discipline of conventional financial systems to the digital asset class, we will bridge the gap between traditional finance and emerging crypto markets. Position We are seeking a Backend Python Developer to join our Central London-based team. This is an opportunity to be involved at all stages in the building and running of an end-to-end process to support the data needs of the business. Initially working with a small team, this role offers the chance to join at the start and grow with the business. This role will involve: working with the CTO to build and maintain a cloud-based data platform to support the firm's downstream analytics and statistical models; developing automated ETL pipelines to ingest data from a variety of sources and store it for efficient consumption by both researchers and production platforms; working with our Data Scientists to identify and onboard new, proprietary data sources to extend the firm's offering; and developing an API abstraction layer to facilitate data retrieval by researchers and production models. An ideal candidate has: an ability to self-start and work without close day-to-day supervision; 5+ years of prior relevant experience; strong programming skills in Python and its associated ecosystem, and in computer science in general; experience with containerised application deployment (Docker, Kubernetes, container registries and so on); experience working on cloud computing platforms, in particular Azure; and working knowledge of best-practice CI/CD pipeline automation and DevOps practices around the above. Technologies likely to be used, and therefore experience in which would be advantageous: Apache Airflow, MongoDB, Azure Kubernetes Service, Azure Container Registry, Helm and Helmfile for k8s manifests, minikube or MicroK8s for prototyping, Git / GitHub / GitHub Actions for source control, Poetry for Python dependency management. Benefits The position enjoys the following benefits: salary of £65k - £80k per annum plus options/equity; flexible work patterns, including work from home and to your own schedule; 33 days of paid annual leave, including public holidays, with the option to sell or buy a further 5 days.
22/09/2022
Full time
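The Agio listing above names Apache Airflow for automated ETL pipelines that ingest data from external sources and store it for downstream use. Purely as an illustrative aside (not taken from the advertiser), a minimal sketch of that kind of pipeline might look like the following; the DAG name, task names, source and the fetch/load helpers are all hypothetical.

```python
# Illustrative only: a minimal Apache Airflow DAG of the general shape described
# in the posting above (ingest from a source, hand off for storage).
# The pipeline name, task names and data are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_prices(**context):
    # Placeholder: pull raw records from a hypothetical market-data source.
    return [{"asset": "BTC", "price": 27000.0}]


def load_prices(ti, **context):
    # Placeholder: write the extracted records to a hypothetical store.
    records = ti.xcom_pull(task_ids="extract_prices")
    print(f"would load {len(records)} records")


with DAG(
    dag_id="ingest_market_prices",  # hypothetical pipeline name
    start_date=datetime(2022, 9, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_prices", python_callable=extract_prices)
    load = PythonOperator(task_id="load_prices", python_callable=load_prices)
    extract >> load
```

In practice the load step would write to whichever store the team chooses (the ad mentions MongoDB, for example), but the two-task extract-then-load structure is the essential pattern.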
Meta's mission is to give people the power to build community and bring the world closer together through our family of apps and services. We're building a different kind of company that connects billions of people around the world, gives them ways to share what matters most to them, and helps bring people closer together. Whether we're creating new products or helping a small business expand its reach, people at Meta are builders at heart. Our global teams are constantly iterating, solving problems, and working together to empower people around the world to build community and connect in meaningful ways. Together, we can help people build stronger communities - we're just getting started. We're looking for analytics leaders with a passion for social media to work on our core products and help drive informed business decisions for Meta. You will enjoy working with cutting-edge technology and the ability to see your insights turned into real products on a regular basis. The perfect candidate will have a background in a technical field, experience working with large data stores, and some experience building software. You will enjoy working with one of the richest data sets in the world, cutting-edge technology, and the ability to see your insights turned into real products. You are scrappy, focused on results, a self-starter, and have demonstrated success in using analytics to drive the understanding, progression, and user engagement of a product. You have experience leading and growing a team of data scientists. You have a passion for mentoring members of your team to achieve strong results and continued engagement. The perfect candidate will have experience building and managing world-class analytics teams, will have worked with large and complex data, and will have a track record of driving data-informed decisions. Data Science Manager - Instagram Creator Relevance Responsibilities: Help build and lead a great data science team to deliver world-class products. Apply your expertise in quantitative analysis, data mining, and the presentation of data to see beyond the numbers and understand how our users interact with our core products. Partner with Product and Engineering teams to solve problems and identify trends and opportunities. Inform, influence, support, and execute our product decisions. Manage development of data resources, gather requirements, organize sources, and support product launches while keeping the team accountable and impactful. Recruit and develop teams of experienced data scientists. Minimum Qualifications: 5+ years of experience managing high performers in a formal capacity. Experience in quantitative analysis at a technology company, consulting, investment banking, or product management, including experience with SQL or other programming languages; ML expertise strongly preferred. Experience communicating and representing results of analysis to senior leadership. Experience initiating and driving projects to completion with minimal guidance. Experience with applied statistics or experimentation (i.e. A/B testing) in an industry setting. Preferred Qualifications: Experience with large data sets and distributed computing (Hive/Hadoop). Understanding of statistical analysis, experience with packages such as R, MATLAB, SPSS, SAS, Stata, etc. Bachelor's degree in Computer Science, Math, Physics, Engineering, or a related quantitative field.
21/09/2022
Full time
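The qualifications above mention applied statistics and A/B testing in an industry setting. Purely as an illustrative aside (not part of the listing), a minimal significance check for a two-arm experiment might look like the following in Python; the conversion and exposure counts are made-up numbers.

```python
# Illustrative only: a two-proportion z-test of the kind used to read out a
# simple A/B test. All figures below are invented for the example.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 480]    # conversions in control / variant (hypothetical)
exposures = [10000, 10000]  # users exposed to each arm (hypothetical)

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```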
Python Developer - £45,000-£55,000 + bonus + benefits I am seeking a Python Software Developer to deliver advanced data analytic applications that are used across the organisation and deliver actionable insights to inform decision making. This role would suit a talented and enthusiastic Python Developer looking to join a market-leading consumer data specialist who leverage the power of Machine Learning and Data Science to solve real-world digital challenges; boasting an enviable team of engineers and scientists, they are growing through investment and evolution. Responsibilities: Actively contribute to developing the company's software products using Python. Collaborate with other team members and senior-level management as needed. Follow development best practice and relevant standards and procedures. Work to Agile best practice. Required Experience: Proven experience in a similar Python Developer role, ideally in a data-focussed field. Technical skills across Python (frameworks such as Flask and Django), SQL, JavaScript, CSS, HTML, Linux, Git. Comfortable working in cross-functional teams to set timescales. Track record of building and deploying full-stack web applications. Willing to learn new skills based on the company's varied tech stack, where needed. A degree in Computer Science or a related field is desirable. This company's offices are based in multiple locations with the opportunity of flexible working, so candidates across the UK will be considered. All candidates applying must be eligible to work in the UK and be UK based. Please apply to this ad to discuss the full details; alternatively contact me via our website. Laura Sheehan at Spectrum IT Recruitment
07/10/2021
Full time
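The listing above names Flask and Django among the frameworks used to serve analytics to the wider organisation. As an illustrative aside only (not taken from the advertiser), a minimal Flask endpoint of that general shape might look like this; the route name and payload are hypothetical.

```python
# Illustrative only: a tiny Flask app exposing an analytics-style JSON endpoint.
# The route and the figures it returns are placeholders, not real metrics.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/api/summary")
def summary():
    # Placeholder values standing in for a real analytics query.
    return jsonify({"active_users": 1234, "conversion_rate": 0.042})


if __name__ == "__main__":
    app.run(debug=True)
```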
Data Science Engineer Up to £80,000 + stock options + bonus UK Based Remote Are you an experienced machine learning engineer who has built machine learning models to optimize and improve AI products? Are you looking for fast career progression in a leading data-driven tech company? THE COMPANY: As a Data Science Engineer, you will work alongside the data science team to optimize and improve their line of successful AI products and machine learning platform. They have received several rounds of investment and funding, so have a big drive to grow out the data science team, which is essential to the business. You will have the opportunity to mentor junior data scientists and move quickly into a Lead position in the team. THE ROLE: The role of Data Science Engineer will require you to build machine learning solutions end-to-end to develop and improve the data platform and products, as well as presenting to senior stakeholders and clients. Specifically, you can expect to be involved in the following: You will be involved in building machine learning models from scratch through to deployment. You will design and implement new models to add value, with opportunity for future R&D. You will be a self-starter, bringing new ideas and techniques and thinking outside the box. You will be interacting with clients and stakeholders to explain various models and AI products. You will mentor/manage junior data scientists in the growing team, with the opportunity to progress quickly to leadership. YOUR SKILLS AND EXPERIENCE: The successful Data Science Engineer will have the following skills and experience: Educated to MSc/PhD level in a STEM degree focused on quantitative methods and statistics (Science, Technology, Engineering, Mathematics etc.). Extensive applied commercial experience coding in Python. Experience building and deploying ML models from scratch to production. Excellent communication skills and a strong "go-getter" attitude. THE BENEFITS: The successful Data Science Engineer will receive a salary of up to £80,000, dependent on experience. HOW TO APPLY: Please register your interest by sending your CV to Rosie O'Callaghan via the Apply link on this page.
30/09/2021
Full time
Job Description Where you'll fit in & what our team goals are.... As a Data Scientist, you will develop analytics using a wide range of data sources to directly help the research, portfolio construction and trading processes across a range of different investment teams, and share responsibility for the implementation and maintenance of these processes. This will be done under the guidance of senior members of the team. Responsibilities How you'll spend your time.... What you'll do: Actively participate in research projects through idea generation, data preparation, rigorous analysis, reaching sensible conclusions and making actionable recommendations. Engineer processes to gather data from a wide variety of data sources and construct and maintain data validation processes. Present results of data analysis using data visualization techniques within applications such as Shiny, Dash and Jupyter. Research advanced data science methods and techniques. Assist in enriching the team's research and production infrastructure and analytics capabilities. What you'll like about this role: An organization with steadfast commitments to diversity and inclusion. A working climate that values and supports different perspectives and cultures. Opportunity to gain in-depth exposure to the field of quantitative finance. Opportunities to collaborate with fundamental research analysts and portfolio managers across the organization to exchange ideas, communicate insights, and broaden their area of expertise. Making an impactful contribution to the success of the team and helping clients to achieve their investment goals. Structured mentoring and career progression. Work-life balance. Required Qualifications To be successful in this role you will have.... Prior relevant work experience, augmented as needed by academic or online learning in analytic methods or programming. Excellent quantitative problem-solving and analytical skills. Strong programming skills, including proficiency in a data analysis language (Python, R). Demonstrated interest in financial markets. Clear, concise, proactive communication skills. Proven intellectual curiosity. Attention to detail, accuracy, and timeliness, and a cooperative can-do attitude. About Our Company You'll find the promise we make to our clients is the same one we make to our employees: Your success is our priority. Here, you'll find growth and career opportunities across all our businesses. We're intentionally built to help you succeed. Our reach is expansive, with a global team of 2,000 people working together. Our expertise is diverse, with more than 450 investment professionals sharing global perspectives across all major asset classes and markets. Our clients have access to a broad array of investment strategies and we have the capability to create bespoke solutions matched to clients' specific requirements. Columbia Threadneedle is a people business and we recognise that our success is due to our talented people, who bring diversity of thought, complementary skills and capabilities. We are committed to providing an inclusive workplace that supports the diversity of our employees and reflects our broader communities and client base. We welcome applications from returners to the industry. We appreciate that work-life balance is an important factor for many when considering their next move, so please discuss any flexible working requirements directly with your recruiter.
15/09/2021
Full time
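The Columbia Threadneedle listing above mentions presenting analysis results in applications such as Shiny, Dash and Jupyter. Purely as an illustration (not from the employer), a minimal Dash app of that general shape might look like this; the series, labels and title are invented.

```python
# Illustrative only: a minimal Dash app that renders a single Plotly figure.
# The data points and labels are made up for the sketch.
import plotly.express as px
from dash import Dash, dcc, html

app = Dash(__name__)

fig = px.line(
    x=["2021-01", "2021-02", "2021-03"],
    y=[1.2, 1.5, 1.1],
    labels={"x": "Month", "y": "Signal"},
    title="Hypothetical research signal",
)

app.layout = html.Div([html.H2("Research dashboard (sketch)"), dcc.Graph(figure=fig)])

if __name__ == "__main__":
    app.run_server(debug=True)
```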
Senior Data Scientist, Python, Financial services Summertime usually means vacation time, but with the world changing and things only just returning to normal, maybe a new, exciting job is all you need! My client, a leading London-based Software Consultancy, are seeking a skilled Principal Data Scientist / Senior Data Scientist with strong Python skills and a background within financial services to work on some exciting client projects. The successful Principal Data Scientist / Senior Data Scientist will be fully hands-on, and will work on the analysis of large datasets as well as the creation and maintenance of NLP, AI & ML solutions with a focus on structured, semi-structured, and unstructured financial data. The Company Joining this organisation as the successful Principal Data Scientist / Senior Data Scientist is an excellent opportunity for an experienced candidate to sky-rocket their career. An established environment with a global presence, this is a chance to join a software consultancy who appreciate the importance of cutting-edge technologies. As well as joining this fast-paced environment and benefitting from exciting projects, you will benefit from: 25 days holiday + Bank holidays; Private healthcare; Memberships & discounts; Company social events. What's Required? Proven commercial experience as a Data Scientist. Sound skills with Python as well as with relevant packages such as Numpy, Sklearn, Pandas, Flask, Pytorch and Tensorflow. Prior experience within financial services, ideally within trading or investment banking. A working knowledge of relational and NoSQL databases including SQL Server, Oracle, MongoDB or CassandraDB. Further skills with C# or Java, as well as experience developing Neural Networks & AI / ML solutions, are highly beneficial. Experience of Agile Scrum. What Next? If you're an experienced Data Scientist looking to work with a market-leading organisation that really pushes the limits of tech, then please apply today to learn more! Senior Data Scientist, Principal Data Scientist, Python, Numpy, Sklearn, Pandas, Flask, Pytorch and Tensorflow, Financial services, Investment Banking, Trading, AI, ML Corriculo Ltd acts as an employment agency and an employment business.
10/09/2021
Full time
Cancer Research UK (CRUK) have awarded funding to the CRUK City of London (CoL) Centre to create a CRUK Radiation Research Unit (RRU). The unit will feature an ambitious research programme for radiation oncology and radiation biology called RadNet. This network, CRUK's largest ever investment in radiotherapy research, aims to accelerate the development of advanced radiotherapy techniques, challenging the boundaries of this mainstay treatment through world-first exploratory projects.
We are seeking a Principal Database Architect (Engineer) whose main objective is to create and manage a complex and comprehensive electronic database. The post holder will be based at UCL Medical Physics and Biomedical Engineering and will be expected to regularly visit the partner institutions (Barts, King’s and the Crick, and their respective clinical sites).
The database will be created by collating multi-parametric electronic data from specific cohorts of patients with cancer who have undergone radiotherapy, from several UK hospitals. Data will be accessed by multi-disciplinary researchers within the RadNet consortium to answer clinical research questions using developed analytical tools. The post holder will liaise with the RadNet community to manage an efficient and effective platform. This is an ambitious undertaking that requires a highly skilled individual with expertise, drive and vision to bring to fruition.
This position is funded until 31st October 2024 in the first instance. Candidates must have a Bachelor's degree in a computational, mathematical, physical sciences or engineering subject, or equivalent experience. A Master's in Data Science, Statistics, Bioinformatics, Health Informatics or a Computing-related field is desirable.
Candidates should also have advanced experience with data processing, management and statistical packages such as SQL, R, Stata, Python and Bash, the ability to elicit and validate requirements, experience of scripting, debugging and improving scripts, and experience of Mac / PC / Linux system administration. The Job Description & Person Specification can be accessed through the 'How to Apply' link.
For informal enquiries about the role please contact Prof Gary Royle (g.royle@ucl.ac.uk)
For enquiries regarding the application process, please contact the City of London Radnet project manager Michelle Craft at m.tu@ucl.ac.uk
28/01/2021
Full time
Cloudwise Solutions
Data Scientist - London £35,000 - £45,000 A private equity firm with investments in the online gaming, fintech and online marketing space is currently looking to add a data scientist with experience in data analysis and statistical modelling to the team. Responsibilities You will have the opportunity to analyse large data sets across multiple industries and businesses and to apply your skills (both technical and behavioural) in order to generate value. In order to succeed you will have a strong set of "soft" skills in addition to your technical expertise. Your key responsibilities will be to analyse raw and messy data; visualise results and generate actionable insights and recommendations; and collaborate with stakeholders across various businesses and industries. Qualifications Proven experience in applying data analysis and machine learning to real-world problems. Understanding of predictive modelling, machine learning, clustering and classification techniques. Fluency in R or Python. Knowledge of SQL. Familiarity with visualization tools (e.g. Tableau, Power BI).
15/02/2019
Edinburgh, Midlothian
Lloyds Banking Group
End Date: 24 February 2019. Salary Range: £48,636 - £54,040. We support agile working. Agile Working Options: Other Agile Working Arrangements / Open to Discussion. Job Description Summary: A key role in our Group CIO Engineering Data Science team, driving data science model development and engineering capabilities. Job Description: Lloyds Banking Group is the UK's biggest Retail, Digital and Mobile bank, with over 30 million active online customers across our three main brands. Our products touch millions of customers, so we have a big responsibility to help Britain Prosper. We've placed digital banking at the heart of our strategy to become the best bank for customers, backed with an investment of £3bn in Digital Transformation into areas such as Data Science, AI and Robotics. So why join our CIO Engineering Team? You'd help us play a pivotal role in engineering new capabilities for the Group and build the bank of the future. You'd be working with a group of AI, RPA and Automation trailblazers at the hub of a ground-breaking agenda. You'll use data science techniques to rapidly transform everything from how we deliver key engineering projects to optimising new Cloud platforms and various strategic transformations. You'll work on well-invested programmes at scale, with over 70,000 users, thousands of apps, platforms and devices, and 30m+ customers relying on your work. Interested? Here are some real examples of the work you could be doing: You'll be harvesting and wrangling data from various sources (Cloud data stores, NoSQL databases etc.) to engineer automated solutions for projects such as optimisation of applications or of the load placed between various Cloud platforms. You'll develop new data models, algorithms and visualisations to help our engineers fix our technology "stack" and predict the downstream impacts and effects of system outages. And you'll help overhaul our massive patch management operations, using streamed and batch-processed data to deliver better systems and devices. Together we'll make it possible. So who are we looking for? We're keen to attract established Data Engineers/Data Scientists with skills spanning the following: a strong knowledge of Python, including libraries such as pandas, NumPy and Matplotlib; skills in model development and an understanding of different data sources and structures, including NoSQL databases (Cassandra, Splunk, Log Analytics); and a good understanding of machine learning principles (additional skills in ML technologies like scikit-learn and an understanding of common ML algorithms are very helpful). Any wider knowledge of software development practices or Cloud data engineering would also be very useful. And how could we challenge you? You'll get exposure to a host of wider technologies and careers by broadening your data science and engineering horizons. We'll give you the stretch you desire in terms of systems data science, with opportunities to widen your skills in engineering data-led solutions to complex technology stacks. What we'd give you in return: offering you both funding and profile, we'll provide you with a diverse, energising and informal environment that focuses on equal opportunity and real career progression. We'll also give you a broad remuneration package which includes: a car/allowance; a Performance Share bonus of c.15%; a generous pension contribution; a flex cash pot you can adjust to suit your lifestyle;
private health cover; and 30 days' holiday plus bank holidays. So if your expertise lies in engineering data, Python and NoSQL databases, and joining a growing, state-of-the-art team within a leading FinTech appeals, then get in touch - we'd like to hear from you. At Lloyds Banking Group, we're driven by a clear purpose: to help Britain prosper. Across the Group, our colleagues are focused on making a difference to customers, businesses and communities. With us you'll have a key role to play in shaping the financial services of the future, whilst the scale and reach of our Group means you'll have many opportunities to learn, grow and develop. We're focused on creating a values-led culture and are committed to building a workforce which reflects the diversity of the customers and communities we serve. Together we're building a truly inclusive workplace where all of our colleagues have the opportunity to make a real difference.
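Purely as an illustration of the wrangling-plus-modelling work the role describes (for example, predicting the downstream impact of system events), a toy sketch with invented field names and synthetic data might look like this:

```python
# Hedged, hypothetical example: fit a simple classifier that relates system
# metrics to outage risk. Every column name and value here is fabricated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000

events = pd.DataFrame({
    "cpu_load": rng.uniform(0, 100, n),
    "error_rate": rng.exponential(0.05, n),
    "patch_age_days": rng.integers(0, 365, n),
})
# Synthetic label: outages become more likely with load, errors and stale patches.
logit = 0.03 * events["cpu_load"] + 8 * events["error_rate"] + 0.004 * events["patch_age_days"] - 3
events["outage"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    events.drop(columns="outage"), events["outage"], test_size=0.25, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Hold-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```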
15/02/2019
Glocomms
Job Title: Senior Data Scientist. Location: London, United Kingdom. Salary: Competitive. Department: Risk Division. Our client is a leading Investment Bank based in the heart of the City of London, well recognised for its focus on cutting-edge technology and innovation. Throughout 2019 the bank is continuing to invest heavily in its data science department, and is therefore looking to expand the team. Responsibilities: * Be an innovator of new cutting-edge practices, presenting results across the Risk division. * Provide technical thought leadership, sharing knowledge and R&D results to raise technical standards within the Innovation Team (peer-to-peer) and the wider Risk community. * Oversee the development, testing and implementation aspects of new work, using innovative approaches and exploring the use of new technical/visual/statistical methods. * Strive for the highest standards of design, technical competency, accuracy and coding. Requirements: * Minimum Bachelor's degree in a numerate or computer science subject, with knowledge of advanced analytical techniques. * Good experience of, and passion for, the development and implementation of predictive analytics, automation, cognitive computing (e.g. text analytics, image recognition), visualisation or similar applied data science areas. * Proficiency in Python, R, Matlab, C, SAS, Java, Spark, VBA etc.; data extraction, cleansing and manipulation both in-database (SQL) and using in-memory/dynamic visual approaches; IBM Bluemix, MS Azure, AWS, web development or similar; cognitive computing; and analytical and data science infrastructure and ecosystems. Perks: Opportunity to work alongside and learn from the top tech professionals in the industry. Get 'hands-on' experience with machine learning algorithms, cognitive computing and data visualisation. 'My Discounts' for all employees, with great discounts, cash-back savings and offers across hundreds of brilliant high street brands, travel, tickets, gym memberships and much more. Pension and healthcare package. Childcare vouchers. And an amazing view of London :)
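As a small, assumption-laden illustration of the text-analytics side of the cognitive computing mentioned above, the sketch below classifies a handful of made-up risk notes; none of the documents or labels come from the bank.

```python
# Hypothetical example only: TF-IDF features plus a linear classifier to flag
# risk-relevant text. The training documents and labels are fabricated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "counterparty exposure breached the approved limit",
    "routine settlement completed without issue",
    "late margin call raised operational risk alert",
    "trade confirmed and reconciled successfully",
]
labels = [1, 0, 1, 0]  # 1 = risk-relevant, 0 = routine

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)

print(clf.predict(["margin breach flagged on counterparty account"]))
```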
15/02/2019