Remote contract Data Architect, Inside IR35: strong expertise in Databricks and Spark within a Medallion Architecture. 9-12 months, remote with occasional trips to London. Day rate £600-£675.

Inside IR35 contract opportunity with an energy and utilities company.
- Start date: May
- Contract duration: 9, possibly 12 months
- Day rate: £600-£675.00 on an Inside IR35 contract

In this temporary, fully remote role with occasional trips to London, you'll have the opportunity to lead the Databricks platform strategy and architecture, owning key decisions across the Lakehouse from ingestion to serving. You'll be part of a team dedicated to translating business needs into architectural direction and delivery plans.

Responsibilities:
- Lead the Databricks platform strategy and architecture
- Design scalable, high-performance Lakehouse and streaming data pipelines
- Collaborate closely with senior stakeholders to drive best-practice adoption

Preferred Requirements:
- Strong expertise in Databricks and Spark
- Proven experience designing enterprise-scale data/platform architectures
- Cloud knowledge (ideally Azure), including ADLS Gen2, Event Hubs, Key Vault, and IaC tools such as Terraform
- Deep understanding of modern data architecture, DataOps, medallion design, streaming/batch patterns, and CI/CD for data pipelines
- Leadership experience mentoring engineers and driving best-practice adoption

Preferred Qualifications:
- Expertise in data architecture, Databricks, Medallion Architecture, PySpark, Azure, and ADLS Gen2
- Previous experience in Energy Trading data architecture projects
- Confident stakeholder engagement, able to communicate technical decisions effectively across technical and business audiences

Interview slots after the Easter Bank Holiday, confirmed across 8th-10th April. Must be based in the UK and available for the occasional meet-up and workshop in central London.
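For context on the medallion design the role centres on: in practice this is usually built with PySpark and Delta Lake on Databricks, but the bronze/silver/gold idea can be sketched in plain Python. This is an illustrative sketch only (the record fields and function names are invented for the example, not taken from the client's platform):

```python
from collections import defaultdict

# Bronze: raw records land as-is, including malformed rows.
bronze = [
    {"meter_id": "M1", "reading": "42.5", "ts": "2026-05-01T00:00"},
    {"meter_id": "M1", "reading": "bad", "ts": "2026-05-01T01:00"},
    {"meter_id": "M2", "reading": "10.0", "ts": "2026-05-01T00:00"},
]

def to_silver(rows):
    """Silver: validate and type-cast, dropping rows that fail checks."""
    out = []
    for row in rows:
        try:
            out.append({**row, "reading": float(row["reading"])})
        except ValueError:
            pass  # a real pipeline would quarantine bad rows, not drop them
    return out

def to_gold(rows):
    """Gold: business-level aggregate, e.g. total reading per meter."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["meter_id"]] += row["reading"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'M1': 42.5, 'M2': 10.0}
```

Each layer only ever reads from the one before it, which is what makes the architecture's "ingestion to serving" ownership mentioned above tractable.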
01/04/2026
Contractor
Company Overview: We are working with an innovative organisation that recognises the increasing complexity of project delivery. Since 2013, our client has been helping companies of all sizes improve the way projects are delivered. Their mission is to become the number one provider of innovative project solutions, driven by a community of experienced, caring, and passionate professionals who are committed to improving project delivery.

Why Join Our Client? Our client is currently in an exciting phase of growth, making this an excellent time to join their journey. They are building something special: scaling the business while maintaining a strong people-first approach. Investment in their teams is a key priority, creating an environment where development is encouraged and individuals are supported to grow with the organisation. Their culture sets them apart from other consulting practices, and they are looking to build a team that is equally ambitious.

Position Overview: Our client is seeking a Senior Data Engineer who thrives on building scalable, cloud-first data systems. In this role, you will design and manage data pipelines that support analytics, AI, and automation across complex infrastructure programmes. Your work will play a key part in enabling data-driven transformation across critical UK industries.
Core Responsibilities:
- Design, build, and optimise data pipelines using Azure Data Factory, Synapse, and Databricks
- Develop and maintain ETL/ELT workflows to ensure high data quality and reliability
- Collaborate with analysts and AI engineers to deliver robust and reusable data products
- Manage data lakes and warehouses using formats such as Delta Lake and Parquet
- Implement best practices for data governance, performance, and security
- Continuously evaluate and adopt new technologies to evolve the organisation's data platform
- Provide technical guidance to junior engineers and contribute to team capability building

Technical Stack
- Core: Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage Gen2, SQL Server, Databricks
- Enhancements: Python (PySpark, Pandas), CI/CD (Azure DevOps), Infrastructure as Code (Terraform, Bicep), REST APIs, GitHub Actions
- Desirable: Microsoft Fabric, Delta Live Tables, Power BI dataset automation, DataOps practices

What You'll Bring:
- Professional experience in data engineering or cloud data development
- Strong understanding of data architecture, APIs, and modern data pipeline design
- Hands-on experience within Microsoft's Azure ecosystem, with an interest in emerging technologies such as Fabric, AI-enhanced ETL, and real-time data streaming
- Proven ability to lead technical workstreams and mentor junior team members
- A strong alignment with the organisation's IDEAL values: Integrity, Drive, Empathy, Adaptability, and Loyalty

Ready to Apply? This is a fantastic opportunity to join a forward-thinking organisation at a key stage of growth, working on impactful projects across critical industries. If you're looking to take the next step in your career within a collaborative and innovative environment, we'd love to hear from you.
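On the "ETL/ELT workflows to ensure high data quality" responsibility: frameworks such as Delta Live Tables express this as declarative expectations on a pipeline. A minimal stand-in for the idea in plain Python (the `quality_gate` function, its parameters, and the 10% threshold are all invented for illustration):

```python
def quality_gate(rows, required, max_null_frac=0.1):
    """Fail an ETL batch if a required column is too sparse.

    A simplified stand-in for expectation frameworks (e.g. Delta Live
    Tables expectations); the threshold here is purely illustrative.
    """
    if not rows:
        raise ValueError("empty batch")
    for col in required:
        nulls = sum(1 for r in rows if r.get(col) is None)
        if nulls / len(rows) > max_null_frac:
            raise ValueError(f"column {col!r} exceeds null threshold")
    return rows

batch = [{"id": 1, "value": 3.2}, {"id": 2, "value": None}, {"id": 3, "value": 1.1}]
quality_gate(batch, required=["id"])       # passes: no nulls in "id"
# quality_gate(batch, required=["value"])  # would raise: 1/3 nulls > 10%
```

The point is that the check runs as a gate between pipeline stages, so bad batches fail loudly rather than flowing downstream into reports.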
31/03/2026
Full time
Site Name: USA - Pennsylvania - Upper Providence, UK - Hertfordshire - Stevenage, UK - London - Brentford, USA - Pennsylvania - Philadelphia
Posted Date: Oct

The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost.

We are looking for a skilled Data Framework Engineer II to join our growing team. The Data Framework team builds and manages (in partnership with Tech) reusable components and architectures designed to make it both fast and easy to build robust, scalable, production-grade data products and services in the challenging biomedical data space. A Data Framework Engineer II knows the metrics desired for their tools and services and iterates to deliver and improve on those metrics in an agile fashion.
A Data Framework Engineer II is a highly technical individual contributor, building modern, cloud-native systems for standardizing and templatizing data engineering, such as:
- Standardized physical storage and search/indexing systems
- Schema management (data + metadata + versioning + provenance + governance)
- API semantics and ontology management
- Standard API architectures
- Kafka + standard streaming semantics
- Standard components for publishing data to file-based, relational, and other sorts of data stores
- Metadata systems
- Tooling for QA/evaluation

Additional responsibilities also include:
- Given a well-specified data framework problem, implement end-to-end solutions using appropriate programming languages (e.g. Python, Scala, or Go), open-source tools (e.g. Spark, Elasticsearch, ...), and cloud vendor-provided tools (e.g. Amazon S3)
- Leverage tools provided by Tech (e.g. infrastructure as code, CloudOps, DevOps, logging/alerting, ...) in delivery of solutions
- Write proper documentation in code as well as in wikis/other documentation systems
- Write fantastic code along with the proper unit, functional, and integration tests for code and services to ensure quality
- Stay up to date with developments in the open-source community around data engineering, data science, and similar tooling

The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.

Why you? Basic Qualifications: We are looking for professionals with these required skills to achieve our goals:
- PhD in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, or Software Engineering, OR a Masters and 2+ years of experience
- Experience with common distributed data tools (Spark, Kafka, etc.)
- Experience with the basics of data architecture (e.g. optimizing physical layout for access patterns)
- Experience with the basics of search engines/indexing (e.g. Elasticsearch, Lucene)
- Demonstrated experience in writing Python, Scala, Go, and/or C++

Preferred Qualifications: If you have the following characteristics, it would be a plus:
- Experience with agile software development
- Experience building and designing a DevOps-first way of working
- Demonstrated experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem)
- Experience with schema tools and schema management (Avro, Protobuf)

Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect, and Integrity, along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
- Operating at pace and agile decision-making: using evidence and applying judgement to balance pace, rigour and risk
- Committed to delivering high-quality results, overcoming challenges, focusing on what matters, execution
- Continuously looking for opportunities to learn, build skills and share learning
- Sustaining energy and wellbeing
- Building strong relationships and collaboration, honest and open conversations
- Budgeting and cost consciousness

If you require an accommodation or other assistance to apply for a job at GSK, please contact the GSK Service Centre at 1- (US Toll Free) or +1 (outside US). GSK is an Equal Opportunity Employer and, in the US, we adhere to Affirmative Action principles.
This ensures that all qualified applicants will receive equal consideration for employment without regard to race, color, national origin, religion, sex, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class.

At GSK, the health and safety of our employees are of paramount importance. As a science-led healthcare company on a mission to get ahead of disease together, we believe that supporting vaccination against COVID-19 is the single best thing we can do in the US to ensure the health and safety of our employees, complementary workers, workplaces, customers, consumers, communities, and the patients we serve. GSK has made the decision to require all US employees to be fully vaccinated against COVID-19, where allowed by state or local law and where vaccine supply is readily available. The only exceptions to this requirement are employees who are approved for an accommodation for religious, medical or disability-related reasons.

Important notice to employment businesses/agencies: GSK does not accept referrals from employment businesses and/or employment agencies in respect of the vacancies posted on this site. All employment businesses/agencies are required to contact GSK's commercial and general procurement/human resources department to obtain prior written authorization before referring any candidates to GSK. The obtaining of prior written authorization is a condition precedent to any agreement (verbal or written) between the employment business/agency and GSK. In the absence of such written authorization being obtained, any actions undertaken by the employment business/agency shall be deemed to have been performed without the consent or contractual agreement of GSK.
GSK shall therefore not be liable for any fees arising from such actions or any fees arising from any referrals by employment businesses/agencies in respect of the vacancies posted on this site.

Please note that if you are a US Licensed Healthcare Professional or Healthcare Professional as defined by the laws of the state issuing your license, GSK may be required to capture and report expenses GSK incurs, on your behalf, in the event you are afforded an interview for employment. This capture of applicable transfers of value is necessary to ensure GSK's compliance with all federal and state US transparency requirements. For more information, please visit GSK's Transparency Reporting For the Record site.
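The preferred qualifications above mention schema tools and schema management (Avro, Protobuf). One core mechanic behind such tools is compatibility checking between schema versions; the sketch below shows a drastically simplified version of the Avro-style backward-compatibility rule (a new reader schema can decode old data only if every added field has a default). The field names and the `backward_compatible` helper are invented for illustration, not part of any GSK tooling:

```python
def backward_compatible(old_fields, new_fields):
    """Simplified Avro-style check: a new schema can read data written
    with the old schema only if every field it adds has a default."""
    old_names = {f["name"] for f in old_fields}
    for f in new_fields:
        if f["name"] not in old_names and "default" not in f:
            return False
    return True

v1 = [{"name": "sample_id", "type": "string"}]
v2_ok = v1 + [{"name": "assay", "type": "string", "default": "unknown"}]
v2_bad = v1 + [{"name": "assay", "type": "string"}]

print(backward_compatible(v1, v2_ok))   # True
print(backward_compatible(v1, v2_bad))  # False
```

Real schema registries enforce several compatibility modes (backward, forward, full); this only illustrates why defaults matter when evolving a schema that governs data already written.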
24/09/2022
Full time
Snowflake Data Engineer - up to £55,000 London and up to £46,000 National

Do you like working with the latest technology, and are you interested in enhancing your tech abilities? We have an exciting opportunity for a highly skilled Data Engineer with significant experience of Snowflake. As well as being an expert in the Snowflake cloud platform, you'll have a strong background in data ingestion and integration, designing and implementing ETL pipelines on various technologies, data modelling, and a rounded understanding of data warehousing. Aviva believes strongly in experimentation leading to industrialisation, and we are searching for passionate, energetic data engineers who are focussed on using their skills to drive out real business value for our customers.

A bit about the job: Aviva Zero is a greenfield Personal Lines insurer headquartered in Hoxton (London), set with the ambition to be the best in the UK market. It will combine the pace, focus, and test-and-learn mentality of a start-up with the expertise and financial backing of Aviva. Data is the lifeblood of any modern organisation, and Aviva is no different. Our Data Engineering team sits within Aviva Quantum, our global Data Science Practice (covering areas including Machine Learning, Analytics, Data Engineering, AI and many more). You will form a vital part of our business and contribute to our first-class end-to-end solutions. You will play an active role in defining our practices, standards and ways of working, and apply them to your role. Be open to working across organisation and team boundaries to ensure we bring the best to our customers.

Skills and experience we're looking for:
- Experience of delivering end-to-end solutions with different database technologies, focusing on Snowflake but also Dynamo, Oracle, SQL Server, and Postgres
- Experience of managing data using the Data Vault architecture and managing it through dbt
- Strong understanding of data manipulation/wrangling techniques in SQL, along with at least one of the following: Python, Scala, Snowpark or PySpark
- Experience in designing structures to support reporting solutions optimised for use from tools like Qlik, Tableau etc.
- Good understanding of modern code development practices, including DevOps/DataOps but also Agile
- Strong interpersonal skills, with the ability to work with customers to establish requirements and then design and deliver the solution, taking the customer on the end-to-end journey with you

What you'll get for this role:
- Salary up to £55,000 London and up to £46,000 National (depending on location, skills, experience, and qualifications)
- Generous pension (at the starting level Aviva contributes 8% when you contribute 2%)
- Part of the Sales Bonus Scheme
- Family-friendly parental and carer's leave
- 29 days holiday per year plus bank holidays, and the option to buy/sell up to 5 additional days
- Up to 40% discount on Aviva products
- Brilliant flexible benefits
- Aviva Matching Share Plan and Save As You Earn scheme
- 21 volunteering hours per year

Aviva is for everyone: We are inclusive - we want applications from people with diverse backgrounds and experiences. Excited but not sure you tick every box? Research tells us that women, particularly, feel this way. So, regardless of gender, why not apply? And if you're in a job share, just apply as a pair. We flex locations, hours and working patterns to suit our customers, business, and you. Most of our people are smart working - spending around 60% of their time in our offices and 40% at home. To find out more about working at Aviva, take a look here.

We interview every disabled applicant who meets the minimum criteria for the job. Once you've applied, please send us an email stating that you have a disclosed disability, and we'll interview you. We'd love it if you could submit your application online.
If you require an alternative method of applying, please give Abigail Aitken a call on or send an email to
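On the Data Vault experience this role asks for: a central mechanic of Data Vault modelling (whether hand-rolled or generated through dbt packages) is the deterministic hash key that identifies hubs and links. A minimal sketch, assuming the common convention of hashing normalised, delimiter-joined business keys (the `hash_key` helper, the `||` delimiter, and the example keys are illustrative, not Aviva's implementation):

```python
import hashlib

def hash_key(*business_keys, delimiter="||"):
    """Deterministic surrogate key for a Data Vault hub or link:
    normalise each business key, join, and hash the result."""
    normalised = delimiter.join(str(k).strip().upper() for k in business_keys)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

# A hub key comes from one business key; a link key from the keys
# of every hub the link connects.
customer_hk = hash_key("CUST-001")
policy_hk = hash_key("POL-9")
link_hk = hash_key("CUST-001", "POL-9")

# Normalisation makes the key stable across source-system whitespace/case quirks.
assert customer_hk == hash_key(" cust-001 ")
```

Because the key is computed, not looked up, loads into hubs, links, and satellites can run independently and in parallel, which is much of Data Vault's appeal for pipelines managed through dbt.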
24/09/2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Cambridge, USA - Massachusetts - Waltham, USA - New Jersey - Trenton, USA - Pennsylvania - Upper Providence Posted Date: Jun 6 2022 The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. The design and implementation of data flows and data products which leverage internal and external data assets and tools to drive discovery and development is a key objective for the DSDE team within GSK's Pharmaceutical R&D organization. There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust: Automation of end-to-end data flows: faster and more reliable ingestion of high-throughput data in genetics, genomics and multi-omics, to extract value from investments in new technology (instrument to analysis-ready data). Enabling governance by design of external and internal data: with engineered, practical solutions for controlled use and monitoring. Innovative disease-specific and domain-expert-specific data products: to enable computational scientists and their research unit collaborators to get to key insights faster, leading to faster biopharmaceutical development cycles. 
Supporting end-to-end code traceability and data provenance: increasing assurance of data integrity through automation and integration. Improving engineering efficiency: extensible, reusable, scalable, updateable, maintainable, virtualized, traceable data and code, driven by data engineering innovation and better resource utilization. We are looking for experienced Senior DevOps Engineers to join our growing DataOps team. A Senior DevOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas: Deliver declarative components for common data ingestion, transformation and publishing techniques. Define and implement data governance aligned to modern standards. Establish scalable, automated processes for data engineering teams across GSK. Act as a thought leader and partner with wider DSDE data engineering teams to advise on implementation and best practices. Cloud Infrastructure-as-Code. Define service and flow orchestration. Data as a configurable resource (including configuration-driven access to scientific data modelling tools). Observability (monitoring, alerting, logging, tracing, etc.). Enable quality engineering through KPIs, code coverage and quality checks. Standardise a GitOps/declarative software development lifecycle. Audit as a service. Senior DevOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation. 
They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services are meeting customer demand and having an impact, and iterate to deliver and improve on those metrics in an agile fashion. Successful Senior DevOps Engineers are developing expertise with the types of data and types of tools that are leveraged in the biomedical and scientific data engineering space, and have the following skills and experience (with significant depth in one or more of these areas): Demonstrable experience deploying robust modularised/container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem. Significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, etc.), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow. Programming in Python, Scala or Go. Embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, etc.). Leveraging major cloud providers, both via Kubernetes and via vendor-specific services. Authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT). Common distributed data tools (e.g. Spark, Hive). The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one. Why you? 
Basic Qualifications: We are looking for professionals with these required skills to achieve our goals: A Master's degree in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 5 years of job experience (or a PhD plus 3 years of job experience). Experience with DevOps tools and concepts (e.g. Jira, GitLab, Jenkins, CircleCI, Azure DevOps, etc.). Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.). Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters; optimizing against self-describing formats such as ORC or Parquet; etc.). Experience with search/indexing systems (e.g. Elasticsearch). Expertise with agile development in Python, Scala, Go, and/or C++. Experience building reusable components on top of the CNCF ecosystem, including Kubernetes. A metrics-first mindset. Experience mentoring junior engineers into deep technical expertise. Preferred Qualifications: If you have the following characteristics, it would be a plus: Experience with agile software development. Experience with building and designing a DevOps-first way of working. Experience with building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem). Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities: Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour and risk. Committed to delivering high-quality results, overcoming challenges, focusing on what matters, and execution. 
Continuously looking for opportunities to learn, build skills and share learning. Sustaining energy and wellbeing. Building strong relationships and collaboration, with honest and open conversations. Budgeting and cost consciousness. As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good and keep growing their careers. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only). We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore the opportunities with our hiring team. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary. Click apply for full job details.
23/09/2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Waltham, USA - Pennsylvania - Upper Providence, Warren NJ Posted Date: Aug The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. The design and implementation of data flows and data products which leverage internal and external data assets and tools to drive discovery and development is a key objective for the DSDE team within GSK's Pharmaceutical R&D organisation. There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust: Automation of end-to-end data flows: faster and more reliable ingestion of high-throughput data in genetics, genomics and multi-omics, to extract value from investments in new technology (instrument to analysis-ready data). Enabling governance by design of external and internal data: with engineered, practical solutions for controlled use and monitoring. Innovative disease-specific and domain-expert-specific data products: to enable computational scientists and their research unit collaborators to get to key insights faster, leading to faster biopharmaceutical development cycles. 
Supporting end-to-end code traceability and data provenance: increasing assurance of data integrity through automation and integration. Improving engineering efficiency: extensible, reusable, scalable, updateable, maintainable, virtualized, traceable data and code, driven by data engineering innovation and better resource utilization. We are looking for an experienced Sr. DataOps Engineer to join our growing DataOps team. A Sr. DataOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas: Deliver declarative components for common data ingestion, transformation and publishing techniques. Define and implement data governance aligned to modern standards. Establish scalable, automated processes for data engineering teams across GSK. Act as a thought leader and partner with wider DSDE data engineering teams to advise on implementation and best practices. Cloud Infrastructure-as-Code. Define service and flow orchestration. Data as a configurable resource (including configuration-driven access to scientific data modelling tools). Observability (monitoring, alerting, logging, tracing, etc.). Enable quality engineering through KPIs, code coverage and quality checks. Standardise a GitOps/declarative software development lifecycle. Audit as a service. Sr. DataOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation. 
They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services are meeting customer demand and having an impact, and iterate to deliver and improve on those metrics in an agile fashion. A successful Sr. DataOps Engineer is developing expertise with the types of data and types of tools that are leveraged in the biomedical and scientific data engineering space, and has the following skills and experience (with significant depth in one or more of these areas): Demonstrable experience deploying robust modularised/container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem. Significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, etc.), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow. Programming in Python, Scala or Go. Embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, etc.). Leveraging major cloud providers, both via Kubernetes and via vendor-specific services. Authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT). Common distributed data tools (e.g. Spark, Hive). The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one. Why you? 
Basic Qualifications: A Bachelor's degree in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 7 years of job experience, or a Master's degree with 5 years of experience (or a PhD plus 3 years of job experience). Deep experience with DevOps tools and concepts (e.g. Jira, GitLab, Jenkins, CircleCI, Azure DevOps, etc.). Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.). Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters; optimizing against self-describing formats such as ORC or Parquet; etc.). Experience with search/indexing systems (e.g. Elasticsearch). Deep expertise with agile development in Python, Scala, Go, and/or C++. Experience building reusable components on top of the CNCF ecosystem, including Kubernetes. A metrics-first mindset. Experience mentoring junior engineers into deep technical expertise. Preferred Qualifications: If you have the following characteristics, it would be a plus: Experience with agile software development. Experience building and designing a DevOps-first way of working. Demonstrated experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem). Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities: Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour and risk. Committed to delivering high-quality results, overcoming challenges, focusing on what matters, and execution. 
Continuously looking for opportunities to learn, build skills and share learning. Sustaining energy and wellbeing. Building strong relationships and collaboration, with honest and open conversations. Budgeting and cost consciousness. As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good and keep growing their careers. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to neurodiversity, race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only). We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore the opportunities with our hiring team. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary. Click apply for full job details.
23/09/2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Waltham, USA - Pennsylvania - Upper Providence, Warren NJ Posted Date: Aug The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. TheData Framework and Opsorganization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. Achieving delivery of the right data to the right people at the right time needs design and implementation of data flows and data products which leverage internal and external data assets and tools to drive discovery and development is a key objective for the Data Science and Data Engineering (DS D E) team within GSK's Pharmaceutical R&D organisation . There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust: Automation of end-to-end data flows: Faster and reliable ingestion of high throughput data in genetics, genomics and multi-omics, to extract value of investments in new technology (instrument to analysis-ready data in Enabling governance by design of external and internal data: with engineered practical solutions for controlled use and monitoring Innovative disease-specific and domain-expert specific data products : to enable computational scientists and their research unit collaborators to get faster to key insights leading to faster biopharmaceutical development cycles. 
Supporting e2 e code traceability and data provenance: Increasing assurance of data integrity through automation, integration Improving engineering efficiency: Extensible, reusable, scalable,updateable,maintainable, virtualized traceable data and code would b e driven by data engineering innovation and better resource utilization. We are looking for an experienced Sr. Data Ops Engineer to join our growing Data Ops team. As a Sr. Data Ops Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizingbiomedical and scientificdata engineering, with demonstrable experience across the following areas : Deliver declarative components for common data ingestion, transformation and publishing techniques Define and implement data governance aligned to modern standards Establish scalable, automated processes for data engineering team s across GSK Thought leader and partner with wider DSDE data engineering teams to advise on implementation and best practices Cloud Infrastructure-as-Code D efine Service and Flow orchestration Data as a configurable resource(including configuration-driven access to scientific data modelling tools) Ob servabilty (monitoring, alerting, logging, tracing, ...) Enable quality engineering through KPIs and c ode coverage and quality checks Standardise GitOps /declarative software development lifecycle Audit as a service Sr. DataOpsEngineerstake full ownership of delivering high-performing, high-impactbiomedical and scientificdataopsproducts and services, froma description of apattern thatcustomer Data Engineers are trying touseall the way through tofinal delivery (and ongoing monitoring and operations)of a templated project and all associated automation. 
They arestandard-bearers for software engineering and quality coding practices within theteam andareexpected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project.Theydevise useful metrics for ensuring their services are meeting customer demand and having animpact anditerate to deliver and improve on those metrics in an agile fashion. A successfulSr.DataOpsEngineeris developing expertise with the types of data and types of tools that are leveraged in the biomedical and scientific data engineering space, andhas the following skills and experience(withsignificant depth in one or more of these areas): Demonstrable experience deploying robust modularised/ container based solutions to production (ideally GCP) and leveraging the Cloud NativeComputing Foundation (CNCF) ecosystem Significant depth in DevOps principles and tools ( e.g. GitOps , Jenkins, CircleCI , Azure DevOps, ...), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow P rogramming in Python. Scala or Go Embedding agile s oftware engineering ( task/issue management, testing, documentation, software development lifecycle, source control, ) Leveraging major cloud providers, both via Kubernetes or via vendor-specific services Authentication and Authorization flows and associated technologies ( e.g. OAuth2 + JWT) Common distributed data tools ( e.g. Spark, Hive) The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one. Why you? 
Basic Qualifications:
- Bachelor's degree in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 7 years' job experience; or a Master's degree plus 5 years; or a PhD plus 3 years
- Deep experience with DevOps tools and concepts (e.g. Jira, GitLab/Jenkins/CircleCI/Azure DevOps/...)
- Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.)
- Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, and optimizing against self-describing formats such as ORC or Parquet)
- Experience with search/indexing systems (e.g. Elasticsearch)
- Deep expertise with agile development in Python, Scala, Go, and/or C++
- Experience building reusable components on top of the CNCF ecosystem, including Kubernetes
- A metrics-first mindset
- Experience mentoring junior engineers into deep technical expertise

Preferred Qualifications: If you have the following characteristics, it would be a plus:
- Experience with agile software development
- Experience building and designing a DevOps-first way of working
- Demonstrated experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem)

Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
- Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour and risk
- Commitment to delivering high-quality results, overcoming challenges, focusing on what matters, and execution
- Continuously looking for opportunities to learn, build skills and share learning
- Sustaining energy and wellbeing
- Building strong relationships and collaboration, and holding honest and open conversations
- Budgeting and cost consciousness

As a company driven by our values of Patient focus, Transparency, Respect and Integrity, we know inclusion and diversity are essential for us to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good, and keeping their careers growing. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to neurodiversity, race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class (US only). We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore the opportunities with our hiring team. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us on , or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary.
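As an aside on the "specialized data architecture" qualification above: a Bloom filter is a compact probabilistic set that answers "definitely absent" or "possibly present", which is what lets query engines skip Parquet/ORC row groups that cannot contain a queried key. A minimal, purely illustrative sketch (not any particular engine's implementation):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a bit array plus k hash positions per key.
    No false negatives; rare false positives. Used in practice to cheaply
    rule out data files/row groups before reading them."""

    def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [False] * size_bits

    def _positions(self, key: str):
        # Derive k positions by salting a single hash function.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key: str) -> None:
        for pos in self._positions(key):
            self.bits[pos] = True

    def might_contain(self, key: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(key))

bf = BloomFilter()
bf.add("patient-123")
print(bf.might_contain("patient-123"))  # True (no false negatives)
```

A query for a key the filter reports absent can skip the underlying data entirely; that is the access-pattern optimization the posting refers to.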
Site Name: USA - Pennsylvania - Upper Providence, UK - Hertfordshire - Stevenage, UK - London - Brentford, USA - Pennsylvania - Philadelphia Posted Date: Oct

The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. We are looking for a skilled Data Ops Engineer II to join our growing team. The Data Ops team accelerates biomedical and scientific data product development and ensures consistent, professional-grade operations for the Data Science and Engineering organization by building templated projects (code repository plus DevOps pipelines) for various Data Science/Data Engineering architecture patterns in the challenging biomedical data space. A Data Ops Engineer II knows the metrics desired for their tools and services and iterates to deliver and improve on those metrics in an agile fashion.
A Data Ops Engineer II is a highly technical individual contributor, building modern, cloud-native systems for standardizing and templatizing data engineering, such as:
- Standardized physical storage and search/indexing systems
- Schema management (data + metadata + versioning + provenance + governance)
- API semantics and ontology management
- Standard API architectures
- Kafka + standard streaming semantics
- Standard components for publishing data to file-based, relational, and other sorts of data stores
- Metadata systems
- Tooling for QA/evaluation
- Audit as a Service

Additional responsibilities include:
- Given a well-specified data framework problem, implement end-to-end solutions using appropriate programming languages (e.g. Python, Scala, or Go), open-source tools (e.g. Spark, Elasticsearch, ...), and cloud vendor-provided tools (e.g. Amazon S3)
- Leverage tools provided by Tech (e.g. infrastructure as code, CloudOps, DevOps, logging/alerting, ...) in delivery of solutions
- Write proper documentation in code as well as in wikis/other documentation systems
- Write fantastic code along with the proper unit, functional, and integration tests for code and services to ensure quality
- Stay up to date with developments in the open-source community around data engineering, data science, and similar tooling

The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.

Why you? Basic Qualifications: We are looking for professionals with these required skills to achieve our goals:
- Master's in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering and 2+ years of experience, OR a PhD in Computer Science
- Demonstrated experience with software engineering (testing, documentation, software development lifecycle, source control, ...)
- Experience with DevOps tools and concepts (e.g. Jira, GitLab/Jenkins/CircleCI/Azure DevOps/...)
- Experience with common distributed data tools in a production setting (Spark, Kafka, etc.)
- Experience with the basics of search engines/indexing (e.g. Elasticsearch, Lucene)
- Demonstrated experience in writing Python, Scala, Go, and/or C++

Preferred Qualifications: If you have the following characteristics, it would be a plus:
- Comfort with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, and optimizing against self-describing formats such as ORC or Parquet)
- Experience with the CNCF ecosystem/Kubernetes
- Comfort with search/indexing systems (e.g. Elasticsearch)
- Experience with schema tools/schema management (Avro, Protobuf)

Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
- Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour and risk
- Commitment to delivering high-quality results, overcoming challenges, focusing on what matters, and execution
- Continuously looking for opportunities to learn, build skills and share learning
- Sustaining energy and wellbeing
- Building strong relationships and collaboration, and holding honest and open conversations
- Budgeting and cost consciousness

If you require an accommodation or other assistance to apply for a job at GSK, please contact the GSK Service Centre at 1- (US Toll Free) or +1 (outside US). GSK is an Equal Opportunity Employer and, in the US, we adhere to Affirmative Action principles.
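The "schema management (data + metadata + versioning)" work and the Avro/Protobuf qualification above revolve around schema evolution: a new reader schema stays backward compatible with old data if every field it adds carries a default. A simplified sketch of that rule in plain Python (the schema shapes are illustrative stand-ins, not Avro's actual wire format or API):

```python
# Old writer schema and a new reader schema that adds one defaulted field.
OLD_SCHEMA = {"name": "Sample", "fields": [
    {"name": "sample_id", "type": "string"},
]}

NEW_SCHEMA = {"name": "Sample", "fields": [
    {"name": "sample_id", "type": "string"},
    {"name": "assay", "type": "string", "default": "unknown"},
]}

def backward_compatible(old: dict, new: dict) -> bool:
    """Can data written with `old` be read using `new`?
    True if every field `new` adds has a default value."""
    old_names = {f["name"] for f in old["fields"]}
    return all(f["name"] in old_names or "default" in f
               for f in new["fields"])

def read_record(record: dict, reader_schema: dict) -> dict:
    """Project an old record through the reader schema, filling defaults."""
    return {f["name"]: record.get(f["name"], f.get("default"))
            for f in reader_schema["fields"]}

print(backward_compatible(OLD_SCHEMA, NEW_SCHEMA))    # True
print(read_record({"sample_id": "S-1"}, NEW_SCHEMA))  # assay filled with "unknown"
```

Checks like this are what a schema registry enforces on publish, so producers cannot break existing consumers.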
This ensures that all qualified applicants will receive equal consideration for employment without regard to race, color, national origin, religion, sex, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class. At GSK, the health and safety of our employees are of paramount importance. As a science-led healthcare company on a mission to get ahead of disease together, we believe that supporting vaccination against COVID-19 is the single best thing we can do in the US to ensure the health and safety of our employees, complementary workers, workplaces, customers, consumers, communities, and the patients we serve. GSK has made the decision to require all US employees to be fully vaccinated against COVID-19, where allowed by state or local law and where vaccine supply is readily available. The only exceptions to this requirement are employees who are approved for an accommodation for religious, medical or disability-related reasons. Important notice to Employment businesses/ Agencies GSK does not accept referrals from employment businesses and/or employment agencies in respect of the vacancies posted on this site. All employment businesses/agencies are required to contact GSK's commercial and general procurement/human resources department to obtain prior written authorization before referring any candidates to GSK. The obtaining of prior written authorization is a condition precedent to any agreement (verbal or written) between the employment business/ agency and GSK. In the absence of such written authorization being obtained any actions undertaken by the employment business/agency shall be deemed to have been performed without the consent or contractual agreement of GSK. 
GSK shall therefore not be liable for any fees arising from such actions or any fees arising from any referrals by employment businesses/agencies in respect of the vacancies posted on this site. Please note that if you are a US Licensed Healthcare Professional or Healthcare Professional as defined by the laws of the state issuing your license, GSK may be required to capture and report expenses GSK incurs, on your behalf, in the event you are afforded an interview for employment. This capture of applicable transfers of value is necessary to ensure GSK's compliance to all federal and state US Transparency requirements. For more information, please visit GSK's Transparency Reporting For the Record site.
21/09/2022
Full time
Team Overview

Data is central to Ofcom's work. We use data from a wide range of sources to understand the dynamics of the sectors we regulate and to provide key insights to inform policy decisions. In addition to existing regulatory responsibilities, there will be exciting opportunities to contribute to Ofcom's new duties in relation to the regulation of Online Harms. This new area of work is expected to expand the scope of data-driven activities at Ofcom in terms of the variety and volume of data, as well as the range of analysis we do. To ensure that Ofcom has the appropriate data capabilities to undertake this analysis effectively, ICT has created a new role for a Data Solutions Architect.

Purpose of the Role

The Data Solutions Architect works closely with ICT and the Data Innovation Hub. You will architect, design, implement and oversee the operations of data solutions that empower data professionals to deliver their work efficiently and effectively. Candidates will exhibit critical thinking skills and the ability to synthesize complex problems, and will have relevant skills and experience for enabling the transformation of data to create solutions that add value to a myriad of business requirements. You must have a deep understanding of the full data lifecycle and the role that high-quality data plays across applications, machine learning, business analytics, and reporting. You will lead the design and development of solution architectures in response to business requirements. This includes identifying and evaluating alternative architectures and their trade-offs in cost, performance and scalability, and ensuring that the relevant technical strategies, policies, standards and practices (including security) are applied correctly. The end-to-end solution will be fit for purpose - i.e. it will meet the needs of the business and the agreed requirements, and support the strategic direction of Ofcom.
Work with IT teams, business analysts and data analytics teams to understand data consumers' needs and develop solutions. By maintaining your knowledge of emerging trends in data usage, tools and analysis techniques, you will support our ongoing development activities and continually promote data innovation as a means to achieve business outcomes for specific Groups and for Ofcom. You will need to be self-motivated, an effective communicator, and have a collaborative delivery approach. You will work in a collaborative cross-functional environment and interact with the full spectrum of colleagues (data engineers, data analysts, data scientists, operational support and policy makers), and you will need to inform and influence senior managers.

Requirements of the Role
- Build strong relationships with colleagues across the business, understanding the motivations behind their projects, and own the technical activities that translate business requirements (both functional and non-functional) into a solution, ensuring the required business value is delivered
- Foster a customer-centric approach to ensure delivery of business value, and an iterative approach that responds to feedback and changing needs
- Perform deep dives into technical areas to solve specific solution or design challenges, using trials or POCs to prove or discount an approach and to critique your own design
- Ensure that the solutions you help deliver form an integral part of the ICT estate and align with the wider reference architecture and domain roadmaps
- Manage stakeholder expectations and be flexible, working on many different projects and topics at the same time; manage proactive and reactive communication
- Facilitate difficult discussions within the team or with diverse senior stakeholders and external/3rd parties as necessary
- Provide documentation of solutions detailing the business, data, application and technology layers
- Work with Data Engineers to define data pipelines and data lakes, covering ingestion, ETL or ELT, and the cataloguing of data
- Take overall responsibility for planning effective data storage, cost, security, quality, sharing, availability, retention and publishing within the organisation
- Develop analytics policy, standards and guidelines
- Ensure successful transition of solutions into production, ensuring production support has the necessary knowledge and documentation to support the service

Skills, knowledge and experience
- Robust data and technical/solutions architecture skills - sets direction for, and possesses a deep understanding of, architecture and strategies which integrate with industry trends (e.g. TOGAF)
- Hands-on experience with analytical tools and languages such as Python, R, SQL, Azure Data Factory (and SSIS), Databricks, Power BI, Git, etc.
- Experience of infrastructure: cloud-based technologies used for storage, data lakes, data warehouses, data streaming, databases, and ETL/ELT/transformation
- Experience of DevOps/DataOps methods in the development of data solutions, ensuring pipelines and processes can be automated and controlled
- Experience with cloud-based data and analytical initiatives utilising Azure, AWS, Google Cloud or similar cloud services
- Experience of working closely with data professionals (e.g. Data Scientists and Data Analysts) to understand their needs
- Experience of implementing statistical, Artificial Intelligence, Machine Learning and Deep Learning applications
- Experience with integrations (e.g. via APIs) with external vendors to share data between organizations
- Experience of working with external technology suppliers and service providers to deliver business solutions

SFIA Skills
- Enterprise and business architecture (STPL) - Level 5
- Solution architecture (ARCH) - Level 5
- Requirements definition and management (REQM) - Level 5
- Database design (DBDS) - Level 5
- Analytics (INAN) - Level 4
- Emerging technology monitoring (EMRG) - Level 4
- Relationship management (RLMT) - Level 5
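The pipeline work described above (ingestion, ETL or ELT, and the cataloguing of data) can be sketched in outline as follows. All names and the in-memory "catalogue" are illustrative assumptions, not Ofcom's or any vendor's actual API:

```python
from datetime import datetime, timezone

# Hypothetical minimal ETL step with cataloguing: extract raw records,
# transform them into typed rows, load them to a target, and register the
# dataset in a catalogue so consumers can discover it.

def extract() -> list[dict]:
    # Stand-in for reading from a source system (API, file drop, database).
    return [{"provider": "ISP-A", "download_mbps": "71.4"},
            {"provider": "ISP-B", "download_mbps": "34.9"}]

def transform(rows: list[dict]) -> list[dict]:
    # Cast strings to proper types so the data is analysis-ready.
    return [{"provider": r["provider"],
             "download_mbps": float(r["download_mbps"])}
            for r in rows]

def load(rows: list[dict], catalogue: dict, dataset: str) -> None:
    # "Load" to an in-memory target and record metadata in a catalogue entry.
    catalogue[dataset] = {
        "rows": rows,
        "row_count": len(rows),
        "loaded_at": datetime.now(timezone.utc).isoformat(),
    }

catalogue: dict = {}
load(transform(extract()), catalogue, "broadband_speeds")
print(catalogue["broadband_speeds"]["row_count"])  # 2
```

In practice each stage would map onto the tools listed above (e.g. Azure Data Factory for orchestration, Databricks for transformation, a data catalogue service for discovery); the structure, not the storage, is the point.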
04/11/2021
Full time
Team Overview Data is central to Ofcom's work. We use data from a wide range of sources to understand the dynamics of the sectors we regulate and to provide key insights to inform policy decisions. In addition to existing regulatory responsibilities, there will be exciting opportunities to contribute to Ofcom's new duties in relation to the regulation of Online Harms. It is expected that this new area of work will expand the scope of data-driven activities at Ofcom in terms of the variety and volume of data, as well as the range of analysis we do. To ensure that Ofcom has the appropriate data capabilities to undertake the analysis effectively, ICT has created a new role for an Data Solutions Architect Purpose of the Role The Data Solutions Architect works closely with ICT and the Data Innovation Hub. You will architect, design, implement and oversee the operations of data solutions that empower data professionals to efficiently and effectively deliver their work. Candidates will exhibit critical thinking skills, the ability to synthesize complex problems, and have relevant skills and experience for enabling the transformation of data to create solutions that add value to a myriad of business requirements. You must have a deep understanding of the full data lifecycle and the role that high-quality data plays across applications, machine learning, business analytics, and reporting. You will lead the design and development of solution architectures in response to business requirements. This includes identifying and evaluating alternative architectures, their trade-offs in cost, performance and scalability. And ensuring that the relevant technical strategies, policies, standards and practices (including security) are applied correctly. The end to end solution will be fit for purpose - i.e. meet the needs of business, the agreed requirements, and support the strategic direction of Ofcom. 
Work with IT teams, business analysts and data analytics teams to understand data consumers' needs and develop solutions By maintaining your knowledge of emerging trends in data usage, tools and analysis techniques you will support our on-going development activities and continually promote data innovation as a means to achieve the business outcomes for specific Groups, and Ofcom. You will need to be self-motivated, an effective communicator and have a collaborative delivery approach.You will work in a collaborative cross-functional environment and interact with the full spectrum of colleagues (data engineers, data analysts, data scientists, operational support and policy makers), and you will need to inform and influence senior managers. Requirements of the Role Build strong relationships with colleagues across the business, understanding their motivations behind projects and own technical activities to translate business requirements (both functional and non-functional) into a solution. Ensuring the required business value is delivered. Fostering a customer centric approach to ensure delivery of business value and an iterative approach that responds to feedback and changing needs. Perform deep dives into technical areas to solve a specific solution or design challenges. Using trials or POCs to prove or discount an approach, to critic your own design. You will be responsible for ensuring that the solutions you help deliver form an integral part of the ICT estate and align with the wider reference architecture and domain roadmaps. Manage stakeholder expectations and be flexible, working on many different projects and topics at the same time period. Manage proactive and reactive communication. Facilitate difficult discussions within the team or with diverse senior stakeholders and external / 3rd parties as necessary. Provide documentation of solutions detailing the business, data, application and technology layers. 
Work with Data Engineers to define data pipelines and data lakes, covering the ingression, ETL or ELT, and the cataloguing of data. Takes overall responsibility for planning effective data storage, cost, security, quality, sharing, availability, retention and publishing within the organisation. Develops analytics policy, standards and guidelines Ensuring successful transitions for solutions into production ensuring production support have the necessary knowledge and documentation to support the service. Skills, knowledge and experience Robust Data and Technical/Solutions Architecture skills - sets direction for and possesses a deep understanding of architecture and strategies which integrate with industry trends e.g. TOGAF Hands-on experience with analytical tools and languages such as: Python, R, SQL, Azure Data Factory (and SSIS), Databricks, Power BI, Git etc. Experience of infrastructure: cloud-based technologies used for storage, data lakes, data warehouses, data streaming, Databases, ETL / ELT / Transformation. Experience of DevOps/ DataOps methods in development of data solutions to ensure pipelines and processes can be automated and controlled. Experience with Cloud based data and analytical initiatives utilising Azure, AWS, Google Cloud or similar cloud services Experience of working closely with Data Professionals (E.g. Data Scientists and Data Analysts) to understanding their needs. Experience of implementating of statistical, Artificial Intelligence, Machine Learning and Deep learning applications. Experience with integrations (e.g. 
via APIs) with external vendors to share data between organizations Experience of working with external technology supplier and service providers to deliver business solutions SFIA Skill Enterprise and business architecture STPL - Level 5 Solution architecture ARCH - Level 5 Requirements definition and management REQM- Level 5 Database design DBDS- Level 5 Analytics INAN- Level 4 Emerging Technology Monitoring (EMRG)- Level 4 Relationship Management RLMT- Level 5
04/11/2021
Full time
About Us We're an innovative tech consultancy - a team of problem solvers. Since 1993 we've been finding better ways to solve complex technology problems for some of the world's leading organisations and delivered solutions that millions of people use every day. We bring together experts from diverse backgrounds and experiences in a collaborative and open culture to deliver outstanding outcomes for our clients, and a stimulating and rewarding environment for our people. We are DataOps advocates and use software engineering best practices to build scalable and re-usable data solutions to help clients use their data to gain insights, drive decisions and deliver business value. Clients don't engage BJSS to do the straightforward things; they ask us to help on their biggest challenges, which means we get to work with a wide range of tools and technologies and there are always new things to learn. About the Role BJSS data engineers are specialist software engineers who build, optimise and maintain data applications, systems and services. This role combines the discipline of software engineering with the knowledge and experience of building data solutions in order to deliver business value. As a BJSS data engineer you'll help our clients deploy data pipelines and processes in a production-safe manner, using the latest technologies and with a DataOps culture. You'll work in a fast-moving, agile environment, within multi-disciplinary teams of highly skilled consultants, delivering modern data platforms into large organisations. You can expect to get involved in a variety of projects in the cloud (AWS, Azure, GCP), learning about and using data services such as Databricks, Data Factory, Synapse, Kafka, Redshift, Glue, Athena, BigQuery, S3, Cloud Data Fusion etc. About You You're an engineer at heart and enjoy the challenge of building reliable, efficient data applications, systems, services and platforms. 
You have a good understanding of coding best practices and design patterns, and experience with code and data versioning, dependency management, code quality and optimisation, error handling, logging, monitoring, validation and alerting. You have experience in writing well-tested, object-oriented Python. You have experience with using CI/CD tooling to analyse, build, test and deploy your code. You have a good understanding of design choices for data storage and data processing, with a particular focus on cloud data services. You have experience in using parallel computing to process large datasets and to optimise computationally intensive tasks. You have experience in programmatically deploying, scheduling and monitoring components in a workflow. You have experience in writing complex queries against relational and non-relational data stores. Some of the Perks A collaborative and inspiring environment working alongside some of the best tech people in the industry Hybrid working - you can vary your working location to allow you to collaborate better, feed your creativity, and take the time and space to focus when you need it Training opportunities and incentives - we support professional certifications across engineering and non-engineering roles Flexible benefits allowance - you can spend on additional pension contributions, healthcare, dental and more… We partner with Lifeworks to offer wellbeing support to our employees Life Assurance (4 x annual salary) Giving back - the ability to get involved nationally and regionally with partnerships to get people from different backgrounds into tech 25 days annual leave plus bank holidays Discounts - we have preferred rates from dozens of retail, lifestyle and utility brands An industry-leading referral scheme BJSS is committed to equal opportunities and diversity, so we want to ensure that our recruitment and selection processes are fair to all who wish to apply.
04/11/2021
Full time
Are you a skilled data engineer who has helped enterprises deploy production-ready data platforms? Are you keen to implement cutting-edge cloud data services, with a focus on how consumers use the platform? Are you interested in building on your existing data and cloud experience? About Us We're an innovative tech consultancy - a team of problem solvers. Since 1993 we've been finding better ways to solve complex technology problems for some of the world's leading organisations and delivered solutions that millions of people use every day. We bring together experts from diverse backgrounds and experiences in a collaborative and open culture to deliver outstanding outcomes for our clients, and a stimulating and rewarding environment for our people. We're looking for data specialists with experience in Data Development, ETL, Data Warehousing and dealing with large sets of structured, semi-structured and unstructured data. About the Role As a BJSS data engineer you'll help our clients deploy data pipelines and processes in a production-safe manner, using the latest technologies and with a DataOps culture. You'll work in a fast-moving, agile environment, within multi-disciplinary teams, delivering modern data platforms into large organisations. You'll get to work with some of the brightest and best in the industry on some of the most exciting digital programmes around. About You You'll have the expertise and confidence to work collaboratively with engineers, architects and business analysts in multi-disciplinary teams on client site, and have experience in several of these areas: Python; AWS, Azure or GCP data services (e.g. Data Factory, Synapse, Redshift, Glue, Athena, BigQuery, Cloud Data Fusion etc.); at least one distributed NoSQL database (e.g. HBase, Cassandra); stream processing technologies such as Kafka, Kinesis etc.; Hadoop ecosystem exposure. 
Some of the Perks A collaborative and inspiring environment working alongside some of the best tech people in the industry Hybrid working - you can vary your working location to allow you to collaborate better, feed your creativity, and take the time and space to focus when you need it Training opportunities and incentives - we support professional certifications across engineering and non-engineering roles Flexible benefits allowance - you can spend on additional pension contributions, healthcare, dental and more… We partner with Lifeworks to offer wellbeing support to our employees Life Assurance (4 x annual salary) Giving back - the ability to get involved nationally and regionally with partnerships to get people from different backgrounds into tech 25 days annual leave plus bank holidays Discounts - we have preferred rates from dozens of retail, lifestyle and utility brands An industry-leading referral scheme BJSS is committed to equal opportunities and diversity so we want to ensure that our recruitment and selection processes are fair to all who wish to apply.
04/11/2021
Full time
Are you a skilled Python Engineer with a passion for data-driven solutions? Are you keen to implement cutting-edge cloud data services, with a focus on how consumers use the platform? Are you interested in building on your existing data and cloud experience?

About Us
We're an innovative tech consultancy - a team of problem solvers. Since 1993 we've been finding better ways to solve complex technology problems for some of the world's leading organisations and delivered solutions that millions of people use every day. We bring together experts from diverse backgrounds and experiences in a collaborative and open culture to deliver outstanding outcomes for our clients, and a stimulating and rewarding environment for our people. We're looking for data specialists with experience in Data Development, ETL, Data Warehousing and dealing with large sets of structured, semi-structured and unstructured data.

About the Role
As a BJSS Data Engineer you'll help our clients deploy data pipelines and processes in a production-safe manner, using the latest technologies and driven by a DataOps culture. You'll work in a fast-moving, agile environment, within multi-disciplinary teams, delivering modern data platforms into some of the UK's most significant organisations. You'll get to work with and learn from some of the brightest and best in the industry on some of the most exciting digital programmes around.

About You
You'll have the expertise and confidence to work collaboratively with engineers, architects, and business analysts in multi-disciplinary teams on client site. Experience in all of these areas is not necessary - we just need great Python Engineers - and you will have the opportunity to learn the following:
AWS, Azure and/or GCP data services (e.g. Data Factory, Synapse, Redshift, Glue, Athena, BigQuery, Cloud Data Fusion etc.)
At least one distributed NoSQL database (e.g. HBase, Cassandra)
Stream processing technologies such as Kafka, Kinesis etc.
Hadoop ecosystem exposure

Some of the Perks
A collaborative and inspiring environment working alongside some of the best tech people in the industry
Hybrid working - you can vary your working location to allow you to collaborate better, feed your creativity, and take the time and space to focus when you need it
Training opportunities and incentives - we support professional certifications across engineering and non-engineering roles
Flexible benefits allowance - you can spend on additional pension contributions, healthcare, dental and more
We partner with Lifeworks to offer wellbeing support to our employees
Life Assurance (4 x annual salary)
Giving back - the ability to get involved nationally and regionally with partnerships to get people from different backgrounds into tech
25 days annual leave plus bank holidays
Discounts - we have preferred rates from dozens of retail, lifestyle and utility brands
An industry-leading referral scheme
BJSS is committed to equal opportunities and diversity so we want to ensure that our recruitment and selection processes are fair to all who wish to apply.
04/11/2021
Full time
Who we are: Nutmeg is Europe's leading Digital Wealth Manager, but we don't want to stop there. We're continuing to build our platform to help us achieve our mission of being the most trusted Digital Wealth Manager in the world. Since being founded in 2011 we've:
Grown to 160+ employees
Raised over £100M in funding
Launched 4 amazing products including JISA and Lifetime ISA
Won multiple awards including Best Online Stocks & Shares ISA Provider for the fifth year in a row!
We hit the 130,000 investor milestone in early 2021 and now manage over £3 billion AUM. *We offer flexible working*

Job in a nutshell: We run a pure AWS-based cloud environment and deliver features using a continuous delivery approach. Our Data platform comprises a mix of services and open-source products running fully in Kubernetes and utilising AWS native Data solutions. Nutmeg's Data solution is a mix of batch and streaming processes leveraging Airflow, Apache Kafka and AWS Data tools. Our key characteristic is enabling a self-service experience for all Data stakeholders. Nutmeg products are served by a polyglot mix of microservices designed following Domain-Driven Design principles and composing an Event-Driven Architecture powered by Apache Kafka. As a Senior Data Engineer, you will collaborate closely with technical and non-technical teams to deliver Data solutions supporting Nutmeg's Data strategy. We are looking for someone with previous experience as a senior engineer and a strong passion for Data challenges.
Requirements Your skills:
Following Data engineering industry best practice
Full ownership of end-to-end Data pipelines
Designing, implementing, and maintaining Data models
Writing automated tests around Data models
Understanding of CI/CD principles
Experience with cloud platforms for Data (ideally AWS)
Experience in converting business requirements into technical deliverables
Previous experience with two or more of the following: Airflow, dbt, Kafka Connect, Looker, Python, and Redshift

You might also have:
DataOps best practice
Experience in collaborating with BI and Data Science teams
Use of agile/lean methodologies for continuous delivery and improvement
Knowledge of monitoring, metrics or Site Reliability Engineering
Understanding of Data governance and security standards

Benefits
25 days' holiday
Birthday day off
2 days' paid community leave
Competitive salary
Private healthcare with Vitality from day 1
Access to a digital GP and other healthcare resources
Season ticket and bike loans
Access to a wellbeing platform & regular knowledge sharing
Regular homeworking perks and rewards
Cycle storage and showers onsite
Discounted Nutmeg account for you and your family and friends
Part of an inclusive Nutmeg team
15/09/2021
Full time
Lead Data Engineer | Remote working | Gloucester | £65,000 - £80,000

Jonothan Bosworth Recruitment Specialists are currently seeking a Lead Data Engineer to join a well-established company at the forefront of a new growth plan and underway with an ambitious programme of work. You will join a new data team as part of the emerging data strategy. This opportunity is for an experienced Data Engineer looking to progress their career into a lead role, innovating with the latest technologies to design and lead technical teams in building internal as well as client-facing solutions using Databricks, Azure Stack, and Power BI. As Lead Data Engineer you will help build high-performance data platforms from the ground up and establish and manage the Data Engineering team along the way, ensuring they develop, maintain, and optimise data pipelines using best practice within a DataOps methodology.

THE BASICS: You will design and implement numerous complex data flows to connect operational systems, data for analytics and business intelligence (BI) systems. Specifically:
Design / Implement data storage and processing solutions.
Data security and compliance.
Monitor and optimise data solutions.
Build Data Engineering capacity through technical support and personal development of Data Engineers.
Inspire best practice for data products and services, and work with senior team members to identify, plan, develop and deliver data services.

KEY SKILLS:
You will have experience leading a team, along with Cloud architecture and distributed systems.
Having worked on Big Data projects, you will have experience using Big Data frameworks to create data pipelines with the latest stream processing systems (e.g., Kafka, Storm, Spark Streaming, etc.).
Advanced Programming / Scripting (Java, Python, R etc.)
Data Strategy, Architectures and Governance, Data Management and Security
Data Integrations using Azure Data Factory, Databricks and APIs
Data Repositories in SQL Server and Analysis Services
Data Modelling, SQL and Azure Data Warehouse and Reporting solutions
Able to work well under pressure, flexible, positive & focused during times of change.
Travel to Gloucester twice a week.

For more information, please contact Claire at Jonothan Bosworth Recruitment Specialists. NC_20_LDE_CE

We are an equal opportunities employer, committed to diversity and inclusion. We are active anti-slavery advocates and prohibit discrimination and harassment of any kind based on race, colour, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other protected characteristic.
14/09/2021
Full time
Who we are: Nutmeg is Europe's leading Digital Wealth Manager, but we don't want to stop there. We're continuing to build our platform to help us achieve our mission of being the most trusted Digital Wealth Manager in the world. Since being founded in 2011 we've:
Grown to 160+ employees
Raised over £100M in funding
Launched 4 amazing products including JISA and Lifetime ISA
Won multiple awards including Best Online Stocks & Shares ISA Provider for the fifth year in a row!
We hit the 130,000 investor milestone in early 2021 and now manage over £3 billion AUM. *We offer flexible working*

Job in a nutshell: We run a pure AWS-based cloud environment and deliver features using a continuous delivery approach. Our Data platform comprises a mix of services and open-source products running fully in Kubernetes and utilising AWS native Data solutions. Nutmeg's Data solution is a mix of batch and streaming processes leveraging Airflow, Apache Kafka and AWS Data tools. Our key characteristic is enabling a self-service experience for all Data stakeholders. Nutmeg products are served by a polyglot mix of microservices designed following Domain-Driven Design principles and composing an Event-Driven Architecture powered by Apache Kafka. As a Senior Data Engineer, you will collaborate closely with technical and non-technical teams to deliver Data solutions supporting Nutmeg's Data strategy. We are looking for someone with previous experience as a senior engineer and a strong passion for Data challenges.
Requirements Your skills:
Following Data engineering industry best practice
Full ownership of end-to-end Data pipelines
Designing, implementing, and maintaining Data models
Writing automated tests around Data models
Understanding of CI/CD principles
Experience with cloud platforms for Data (ideally AWS)
Experience in converting business requirements into technical deliverables
Previous experience with two or more of the following: Airflow, dbt, Kafka Connect, Looker, Python, and Redshift

You might also have:
DataOps best practice
Experience in collaborating with BI and Data Science teams
Use of agile/lean methodologies for continuous delivery and improvement
Knowledge of monitoring, metrics or Site Reliability Engineering
Understanding of Data governance and security standards

Benefits
25 days' holiday
Birthday day off
2 days' paid community leave
Competitive salary
Private healthcare with Vitality from day 1
Access to a digital GP and other healthcare resources
Season ticket and bike loans
Access to a wellbeing platform & regular knowledge sharing
Regular homeworking perks and rewards
Cycle storage and showers onsite
Discounted Nutmeg account for you and your family and friends
Part of an inclusive Nutmeg team
14/09/2021
Full time