Kensington Mortgage Company
Marlow, Buckinghamshire
We're Hiring: Data Services Manager

Location: Remote - you must be flexible for ad hoc visits to Marlow/London
Department: Data Services
Team size: currently 6 direct reports

Are you a seasoned data professional ready to lead a high-performing team and shape the future of data at Kensington? We're looking for a Data Services Manager to join our dynamic Data Services team and play a pivotal role in delivering scalable, high-quality data solutions that drive strategic decision-making across the business.

About the Role
As the Data Services Manager, you'll report to and deputise for the Senior Data Services Manager, leading a talented team of Data Engineers. You'll be instrumental in delivering Kensington's Data Strategy, developing innovative data products, and championing data governance and quality across the organisation. This is a hands-on leadership role in which you'll guide technical delivery, mentor team members, and collaborate with stakeholders across Mortgage Originations, Servicing, Securitisations, and Finance.

Key Responsibilities
- Lead and mentor a high-performing Data Engineering and BI team
- Drive the design and delivery of complex data and analytics solutions
- Champion data governance, quality, and compliance
- Collaborate with cross-functional teams to align data solutions with business goals
- Support Agile delivery and continuous improvement
- Act as a technical authority and contribute to Kensington's Technical Design Authority

What You'll Bring
- 10-15 years of hands-on experience in Data Engineering, BI, or Analytics
- 2+ years in a leadership or management role
- Deep expertise in the Microsoft Data Platform (SQL Server, Azure, Power BI, Databricks, etc.)
- Strong understanding of data governance, privacy, and compliance
- Proven ability to lead hybrid cloud/on-premise data solutions
- Experience with CI/CD, DevOps/DataOps, and infrastructure as code (IaC)
- Essential: experience working in a regulated financial services environment, with a solid understanding of data protection, risk management, and regulatory compliance
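The role calls for hands-on guidance of technical delivery on the Microsoft data platform, including Databricks. As a purely illustrative sketch (all paths, column names, and thresholds are hypothetical, not Kensington's), here is the kind of small PySpark data-quality gate such a team might review before publishing a curated table:

```python
# Illustrative sketch only: a minimal PySpark data-quality gate of the kind a
# Data Services team might run before publishing a curated table. All paths,
# column names, and thresholds are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("loan-dq-gate").getOrCreate()

# Hypothetical raw extract landed from an upstream servicing system.
loans = spark.read.parquet("/landing/servicing/loans/")

total = loans.count()
# Two simple governance checks: completeness of the key and a domain rule.
null_keys = loans.filter(F.col("loan_id").isNull()).count()
bad_balances = loans.filter(F.col("current_balance") < 0).count()

if null_keys > 0 or bad_balances > 0:
    raise ValueError(
        f"DQ gate failed: {null_keys} null keys, {bad_balances} negative "
        f"balances out of {total} rows"
    )

# Only publish to the curated zone once the checks pass.
loans.write.mode("overwrite").parquet("/curated/servicing/loans/")
```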
08/09/2025
Full time
Site Name: USA - Pennsylvania - Upper Providence, UK - Hertfordshire - Stevenage, UK - London - Brentford, USA - Pennsylvania - Philadelphia
Posted Date: Oct

The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost.

We are looking for a skilled Data Framework Engineer II to join our growing team. The Data Framework team builds and manages (in partnership with Tech) reusable components and architectures designed to make it both fast and easy to build robust, scalable, production-grade data products and services in the challenging biomedical data space. A Data Framework Engineer II knows the metrics desired for their tools and services and iterates to deliver and improve on those metrics in an agile fashion.

A Data Framework Engineer II is a highly technical individual contributor, building modern, cloud-native systems for standardizing and templatizing data engineering, such as:
- Standardized physical storage and search/indexing systems
- Schema management (data + metadata + versioning + provenance + governance)
- API semantics and ontology management
- Standard API architectures
- Kafka and standard streaming semantics
- Standard components for publishing data to file-based, relational, and other sorts of data stores
- Metadata systems
- Tooling for QA/evaluation

Additional responsibilities also include:
- Given a well-specified data framework problem, implement end-to-end solutions using appropriate programming languages (e.g. Python, Scala, or Go), open-source tools (e.g. Spark, Elasticsearch, ...), and cloud vendor-provided tools (e.g. Amazon S3)
- Leverage tools provided by Tech (e.g. infrastructure as code, CloudOps, DevOps, logging/alerting, ...) in delivery of solutions
- Write proper documentation in code as well as in wikis and other documentation systems
- Write fantastic code along with the proper unit, functional, and integration tests for code and services to ensure quality
- Stay up to date with developments in the open-source community around data engineering, data science, and similar tooling

The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.

Why you?

Basic Qualifications:
We are looking for professionals with these required skills to achieve our goals:
- PhD in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, or Software Engineering, OR a Master's degree and 2+ years of experience
- Experience with common distributed data tools (Spark, Kafka, etc.)
- Experience with the basics of data architecture (e.g. optimizing physical layout for access patterns)
- Experience with the basics of search engines/indexing (e.g. Elasticsearch, Lucene)
- Demonstrated experience writing Python, Scala, Go, and/or C++

Preferred Qualifications:
If you have the following characteristics, it would be a plus:
- Experience with agile software development
- Experience building and designing a DevOps-first way of working
- Demonstrated experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem)
- Experience with schema tools and schema management (Avro, Protobuf)
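Schema management with evolution-safe formats such as Avro appears in both the responsibilities and the preferred qualifications above. As a minimal, illustrative sketch (the schema and records are hypothetical, and fastavro is just one common library choice, not necessarily GSK's), writing and reading records against a versioned Avro schema might look like this:

```python
# Minimal, illustrative Avro schema-management sketch using the fastavro
# library; the schema and records are hypothetical.
from io import BytesIO
from fastavro import parse_schema, writer, reader

# Version 2 of a hypothetical sample schema; "assay" was added with a
# default so that readers of v1 data remain compatible.
schema_v2 = parse_schema({
    "namespace": "example.biomed",
    "name": "Sample",
    "type": "record",
    "fields": [
        {"name": "sample_id", "type": "string"},
        {"name": "subject_id", "type": "string"},
        {"name": "assay", "type": "string", "default": "unknown"},
    ],
})

records = [
    {"sample_id": "S-001", "subject_id": "P-42", "assay": "rnaseq"},
    {"sample_id": "S-002", "subject_id": "P-43", "assay": "wgs"},
]

buf = BytesIO()
writer(buf, schema_v2, records)  # the schema travels with the data

buf.seek(0)
for rec in reader(buf):  # the embedded schema drives decoding
    print(rec["sample_id"], rec["assay"])
```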
Why GSK?
Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect, and Integrity, along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
- Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour, and risk
- Commitment to delivering high-quality results, overcoming challenges, focusing on what matters, and execution
- Continuously looking for opportunities to learn, build skills, and share learning
- Sustaining energy and wellbeing
- Building strong relationships and collaboration, with honest and open conversations
- Budgeting and cost consciousness

If you require an accommodation or other assistance to apply for a job at GSK, please contact the GSK Service Centre at 1- (US Toll Free) or +1 (outside US).

GSK is an Equal Opportunity Employer and, in the US, we adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to race, color, national origin, religion, sex, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status, or any other federal, state or local protected class.

At GSK, the health and safety of our employees are of paramount importance. As a science-led healthcare company on a mission to get ahead of disease together, we believe that supporting vaccination against COVID-19 is the single best thing we can do in the US to ensure the health and safety of our employees, complementary workers, workplaces, customers, consumers, communities, and the patients we serve. GSK has made the decision to require all US employees to be fully vaccinated against COVID-19, where allowed by state or local law and where vaccine supply is readily available. The only exceptions to this requirement are employees who are approved for an accommodation for religious, medical, or disability-related reasons.

Important notice to employment businesses/agencies:
GSK does not accept referrals from employment businesses and/or employment agencies in respect of the vacancies posted on this site. All employment businesses/agencies are required to contact GSK's commercial and general procurement/human resources department to obtain prior written authorization before referring any candidates to GSK. The obtaining of prior written authorization is a condition precedent to any agreement (verbal or written) between the employment business/agency and GSK. In the absence of such written authorization, any actions undertaken by the employment business/agency shall be deemed to have been performed without the consent or contractual agreement of GSK. GSK shall therefore not be liable for any fees arising from such actions or any fees arising from any referrals by employment businesses/agencies in respect of the vacancies posted on this site.

Please note that if you are a US Licensed Healthcare Professional, or a Healthcare Professional as defined by the laws of the state issuing your license, GSK may be required to capture and report expenses GSK incurs on your behalf in the event you are afforded an interview for employment. This capture of applicable transfers of value is necessary to ensure GSK's compliance with all federal and state US transparency requirements. For more information, please visit GSK's Transparency Reporting For the Record site.
24/09/2022
Full time
Snowflake Data Engineer - up to £55,000 (London) and up to £46,000 (National)

Do you like working with the latest technology and want to enhance your tech abilities? We have an exciting opportunity for a highly skilled Data Engineer with significant experience of Snowflake. As well as being an expert in the Snowflake cloud platform, you'll have a strong background in data ingestion and integration, designing and implementing ETL pipelines on various technologies, data modelling, and a rounded understanding of data warehousing. Aviva believes strongly in experimentation leading to industrialisation, and we are searching for passionate, energetic data engineers who are focused on using their skills to drive out real business value for our customers.

A bit about the job:
Aviva Zero is a greenfield Personal Lines insurer headquartered in Hoxton (London), set up with the ambition to be the best in the UK market. It combines the pace, focus, and test-and-learn mentality of a start-up with the expertise and financial backing of Aviva. Data is the lifeblood of any modern organisation, and Aviva is no different. Our Data Engineering team sits within Aviva Quantum, our global Data Science practice (covering areas including Machine Learning, Analytics, Data Engineering, AI, and many more). You will form a vital part of our business and contribute to our first-class end-to-end solutions. You will play an active role in defining our practices, standards, and ways of working, and apply them to your role. Be open to working across organisation and team boundaries to ensure we bring the best to our customers.

Skills and experience we're looking for:
- Experience of delivering end-to-end solutions with different database technologies, focusing on Snowflake but also Dynamo, Oracle, SQL Server, and Postgres
- Experience of managing data using the Data Vault architecture and managing it through dbt
- Strong understanding of data manipulation/wrangling techniques in SQL, along with at least one of Python, Scala, Snowpark, or PySpark
- Experience in designing structures to support reporting solutions optimised for use from tools like Qlik, Tableau, etc.
- Good understanding of modern code development practices, including DevOps/DataOps as well as Agile
- Strong interpersonal skills, with the ability to work with customers to establish requirements and then design and deliver the solution, taking the customer on the end-to-end journey with you
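As a purely illustrative sketch of the Snowpark-style work listed above (connection parameters, table names, and columns are hypothetical placeholders, not Aviva's), a small Snowpark (Python) transformation might look like this:

```python
# Minimal, illustrative Snowpark (Python) sketch of a small transformation.
# Connection parameters and table/column names are hypothetical placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

session = Session.builder.configs(connection_parameters).create()

# Aggregate raw policy premiums into a small reporting table.
premiums = (
    session.table("RAW_POLICIES")
    .filter(col("STATUS") == "ACTIVE")
    .group_by("PRODUCT")
    .agg(sum_(col("ANNUAL_PREMIUM")).alias("TOTAL_PREMIUM"))
)

# Materialise the result for downstream BI tools (e.g. Qlik or Tableau).
premiums.write.mode("overwrite").save_as_table("RPT_PREMIUM_BY_PRODUCT")
```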
What you'll get for this role:
- Salary of up to £55,000 (London) and up to £46,000 (National), depending on location, skills, experience, and qualifications
- Generous pension (at the starting level, Aviva contributes 8% when you contribute 2%)
- Participation in the Sales Bonus Scheme
- Family-friendly parental and carer's leave
- 29 days' holiday per year plus bank holidays, and the option to buy/sell up to 5 additional days
- Up to 40% discount on Aviva products
- Brilliant flexible benefits
- Aviva Matching Share Plan and Save As You Earn scheme
- 21 volunteering hours per year

Aviva is for everyone:
We are inclusive - we want applications from people with diverse backgrounds and experiences. Excited but not sure you tick every box? Research tells us that women, particularly, feel this way. So, regardless of gender, why not apply? And if you're in a job share, just apply as a pair. We flex locations, hours, and working patterns to suit our customers, our business, and you. Most of our people are smart working - spending around 60% of their time in our offices and 40% at home.

To find out more about working at Aviva, take a look here. We interview every disabled applicant who meets the minimum criteria for the job. Once you've applied, please send us an email stating that you have a disclosed disability, and we'll interview you. We'd love it if you could submit your application online. If you require an alternative method of applying, please give Abigail Aitken a call on or send an email to
24/09/2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Cambridge, USA - Massachusetts - Waltham, USA - New Jersey - Trenton, USA - Pennsylvania - Upper Providence
Posted Date: Jun 6 2022

The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost.

Designing and implementing data flows and data products that leverage internal and external data assets and tools to drive discovery and development is a key objective for the DSDE team within GSK's Pharmaceutical R&D organization. There are five key drivers for this approach, closely aligned with GSK's corporate priorities of Innovation, Performance, and Trust:
- Automation of end-to-end data flows: faster and more reliable ingestion of high-throughput data in genetics, genomics, and multi-omics, to extract value from investments in new technology (instrument to analysis-ready data in ...)
- Enabling governance by design of external and internal data: with engineered, practical solutions for controlled use and monitoring
- Innovative disease-specific and domain-expert-specific data products: to enable computational scientists and their research unit collaborators to get to key insights faster, leading to faster biopharmaceutical development cycles
- Supporting end-to-end code traceability and data provenance: increasing assurance of data integrity through automation and integration
- Improving engineering efficiency: extensible, reusable, scalable, updateable, maintainable, virtualized, traceable data and code, driven by data engineering innovation and better resource utilization

We are looking for experienced Senior DevOps Engineers to join our growing Data Ops team. A Senior DevOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas:
- Delivering declarative components for common data ingestion, transformation, and publishing techniques
- Defining and implementing data governance aligned to modern standards
- Establishing scalable, automated processes for data engineering teams across GSK
- Acting as a thought leader and partner to wider DSDE data engineering teams, advising on implementation and best practices
- Cloud infrastructure-as-code
- Defining service and flow orchestration
- Data as a configurable resource (including configuration-driven access to scientific data modelling tools)
- Observability (monitoring, alerting, logging, tracing, etc.)
- Enabling quality engineering through KPIs, code coverage, and quality checks
- Standardising a GitOps/declarative software development lifecycle
- Audit as a service

Senior DevOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation. They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services are meeting customer demand and having an impact, and they iterate to deliver and improve on those metrics in an agile fashion.

Successful Senior DevOps Engineers are developing expertise with the types of data and tools leveraged in the biomedical and scientific data engineering space, and have the following skills and experience (with significant depth in one or more of these areas):
- Demonstrable experience deploying robust, modularised, container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem
- Significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, etc.), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow
- Programming in Python, Scala, or Go
- Embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, etc.)
- Leveraging major cloud providers, both via Kubernetes and via vendor-specific services
- Authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT)
- Common distributed data tools (e.g. Spark, Hive)

The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.

Why you?

Basic Qualifications:
We are looking for professionals with these required skills to achieve our goals:
- Master's in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 5 years of job experience (or PhD plus 3 years of job experience)
- Experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps / etc.)
- Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.)
- Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, and optimizing against self-describing formats such as ORC or Parquet)
- Experience with search/indexing systems (e.g. Elasticsearch)
- Expertise with agile development in Python, Scala, Go, and/or C++
- Experience building reusable components on top of the CNCF ecosystem, including Kubernetes
- A metrics-first mindset
- Experience mentoring junior engineers into deep technical expertise

Preferred Qualifications:
If you have the following characteristics, it would be a plus:
- Experience with agile software development
- Experience building and designing a DevOps-first way of working
- Experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem)
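The skills list above names OAuth2 + JWT flows. As a minimal, illustrative sketch (the identity-provider URL, credentials, audience, and key are hypothetical placeholders), the client-credentials pattern looks roughly like this in Python, using the requests and PyJWT libraries:

```python
# Illustrative sketch of the OAuth2 client-credentials + JWT pattern; the
# identity-provider URL, client credentials, audience, and public key are
# hypothetical placeholders. Uses the requests and PyJWT libraries.
import requests
import jwt  # PyJWT

TOKEN_URL = "https://idp.example.com/oauth2/token"  # hypothetical IdP
AUDIENCE = "https://data-api.example.com"           # hypothetical API

# 1. A service obtains an access token using the client-credentials grant.
resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "my-service",
        "client_secret": "s3cr3t",
        "audience": AUDIENCE,
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# 2. The receiving API validates the JWT's signature and claims before
#    trusting it; the public key would come from the IdP's JWKS endpoint.
public_key = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"
claims = jwt.decode(
    access_token,
    public_key,
    algorithms=["RS256"],
    audience=AUDIENCE,
)
print("authenticated client:", claims.get("sub"))
```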
Why GSK?
Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect, and Integrity, along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
- Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour, and risk
- Commitment to delivering high-quality results, overcoming challenges, focusing on what matters, and execution
- Continuously looking for opportunities to learn, build skills, and share learning
- Sustaining energy and wellbeing
- Building strong relationships and collaboration, with honest and open conversations
- Budgeting and cost consciousness

As a company driven by our values of Patient focus, Transparency, Respect, and Integrity, we know inclusion and diversity are essential for us to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good, and keeping their careers growing. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status, or any other federal, state or local protected class (US only).

We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore with our hiring team what the opportunities are. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us on , or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary.
23/09/2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Waltham, USA - Pennsylvania - Upper Providence, Warren NJ
Posted Date: Aug

The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost.

Designing and implementing data flows and data products that leverage internal and external data assets and tools to drive discovery and development is a key objective for the DSDE team within GSK's Pharmaceutical R&D organisation. There are five key drivers for this approach, closely aligned with GSK's corporate priorities of Innovation, Performance, and Trust:
- Automation of end-to-end data flows: faster and more reliable ingestion of high-throughput data in genetics, genomics, and multi-omics, to extract value from investments in new technology (instrument to analysis-ready data in ...)
- Enabling governance by design of external and internal data: with engineered, practical solutions for controlled use and monitoring
- Innovative disease-specific and domain-expert-specific data products: to enable computational scientists and their research unit collaborators to get to key insights faster, leading to faster biopharmaceutical development cycles
- Supporting end-to-end code traceability and data provenance: increasing assurance of data integrity through automation and integration
- Improving engineering efficiency: extensible, reusable, scalable, updateable, maintainable, virtualized, traceable data and code, driven by data engineering innovation and better resource utilization

We are looking for an experienced Sr. DataOps Engineer to join our growing Data Ops team. A Sr. DataOps Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizing biomedical and scientific data engineering, with demonstrable experience across the following areas:
- Delivering declarative components for common data ingestion, transformation, and publishing techniques
- Defining and implementing data governance aligned to modern standards
- Establishing scalable, automated processes for data engineering teams across GSK
- Acting as a thought leader and partner to wider DSDE data engineering teams, advising on implementation and best practices
- Cloud infrastructure-as-code
- Defining service and flow orchestration
- Data as a configurable resource (including configuration-driven access to scientific data modelling tools)
- Observability (monitoring, alerting, logging, tracing, ...)
- Enabling quality engineering through KPIs, code coverage, and quality checks
- Standardising a GitOps/declarative software development lifecycle
- Audit as a service

Sr. DataOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation. They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services are meeting customer demand and having an impact, and they iterate to deliver and improve on those metrics in an agile fashion.

A successful Sr. DataOps Engineer is developing expertise with the types of data and tools leveraged in the biomedical and scientific data engineering space, and has the following skills and experience (with significant depth in one or more of these areas):
- Demonstrable experience deploying robust, modularised, container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem
- Significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, ...), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow
- Programming in Python, Scala, or Go
- Embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, ...)
- Leveraging major cloud providers, both via Kubernetes and via vendor-specific services
- Authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT)
- Common distributed data tools (e.g. Spark, Hive)

The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one.

Why you?

Basic Qualifications:
- Bachelor's degree in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 7 years of job experience, or a Master's degree with 5 years of experience (or PhD plus 3 years of job experience)
- Deep experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps / ...)
- Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.)
- Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, and optimizing against self-describing formats such as ORC or Parquet)
- Experience with search/indexing systems (e.g. Elasticsearch)
- Deep expertise with agile development in Python, Scala, Go, and/or C++
- Experience building reusable components on top of the CNCF ecosystem, including Kubernetes
- A metrics-first mindset
- Experience mentoring junior engineers into deep technical expertise

Preferred Qualifications:
If you have the following characteristics, it would be a plus:
- Experience with agile software development
- Experience building and designing a DevOps-first way of working
- Demonstrated experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem)
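The qualifications above mention optimizing physical layout for access patterns against self-describing formats such as Parquet. As a minimal, illustrative sketch (column names and paths are hypothetical), Hive-style partitioning with PyArrow shows the idea: queries that filter on the partition column can skip whole directories.

```python
# Illustrative sketch: partitioning a Parquet dataset with PyArrow so that
# common filters prune entire directories. Columns and paths are hypothetical.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "sample_id": ["S-001", "S-002", "S-003"],
    "assay": ["rnaseq", "wgs", "rnaseq"],
    "value": [0.42, 1.7, 0.88],
})

# Hive-style partitioning on the column most queries filter by.
pq.write_to_dataset(table, root_path="/tmp/samples", partition_cols=["assay"])

# A reader that filters on the partition column only touches matching files.
rnaseq = pq.read_table("/tmp/samples", filters=[("assay", "=", "rnaseq")])
print(rnaseq.num_rows)
```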
Why GSK?
Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect, and Integrity, along with Courage, Accountability, Development, and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities:
- Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour, and risk
- Commitment to delivering high-quality results, overcoming challenges, focusing on what matters, and execution
- Continuously looking for opportunities to learn, build skills, and share learning
- Sustaining energy and wellbeing
- Building strong relationships and collaboration, with honest and open conversations
- Budgeting and cost consciousness

As a company driven by our values of Patient focus, Transparency, Respect, and Integrity, we know inclusion and diversity are essential for us to succeed. We want all our colleagues to thrive at GSK, bringing their unique experiences, ensuring they feel good, and keeping their careers growing. As a candidate for a role, we want you to feel the same way. As an Equal Opportunity Employer, we are open to all talent. In the US, we also adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to neurodiversity, race/ethnicity, colour, national origin, religion, gender, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status, or any other federal, state or local protected class (US only).

We believe in an agile working culture for all our roles. If flexibility is important to you, we encourage you to explore with our hiring team what the opportunities are. Please don't hesitate to contact us if you'd like to discuss any adjustments to our process which might help you demonstrate your strengths and capabilities. You can either call us on , or send an email. As you apply, we will ask you to share some personal information, which is entirely voluntary.
23/09/2022
Full time
Site Name: UK - Hertfordshire - Stevenage, USA - Connecticut - Hartford, USA - Delaware - Dover, USA - Maryland - Rockville, USA - Massachusetts - Waltham, USA - Pennsylvania - Upper Providence, Warren NJ Posted Date: Aug The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. TheData Framework and Opsorganization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. Achieving delivery of the right data to the right people at the right time needs design and implementation of data flows and data products which leverage internal and external data assets and tools to drive discovery and development is a key objective for the Data Science and Data Engineering (DS D E) team within GSK's Pharmaceutical R&D organisation . There are five key drivers for this approach, which are closely aligned with GSK's corporate priorities of Innovation, Performance and Trust: Automation of end-to-end data flows: Faster and reliable ingestion of high throughput data in genetics, genomics and multi-omics, to extract value of investments in new technology (instrument to analysis-ready data in Enabling governance by design of external and internal data: with engineered practical solutions for controlled use and monitoring Innovative disease-specific and domain-expert specific data products : to enable computational scientists and their research unit collaborators to get faster to key insights leading to faster biopharmaceutical development cycles. Supporting e2 e code traceability and data provenance: Increasing assurance of data integrity through automation, integration Improving engineering efficiency: Extensible, reusable, scalable,updateable,maintainable, virtualized traceable data and code would b e driven by data engineering innovation and better resource utilization. We are looking for an experienced Sr. Data Ops Engineer to join our growing Data Ops team. As a Sr. Data Ops Engineer is a highly technical individual contributor, building modern, cloud-native, DevOps-first systems for standardizing and templatizingbiomedical and scientificdata engineering, with demonstrable experience across the following areas : Deliver declarative components for common data ingestion, transformation and publishing techniques Define and implement data governance aligned to modern standards Establish scalable, automated processes for data engineering team s across GSK Thought leader and partner with wider DSDE data engineering teams to advise on implementation and best practices Cloud Infrastructure-as-Code D efine Service and Flow orchestration Data as a configurable resource(including configuration-driven access to scientific data modelling tools) Ob servabilty (monitoring, alerting, logging, tracing, ...) Enable quality engineering through KPIs and c ode coverage and quality checks Standardise GitOps /declarative software development lifecycle Audit as a service Sr. 
Sr. DataOps Engineers take full ownership of delivering high-performing, high-impact biomedical and scientific DataOps products and services, from a description of a pattern that customer Data Engineers are trying to use all the way through to final delivery (and ongoing monitoring and operations) of a templated project and all associated automation. They are standard-bearers for software engineering and quality coding practices within the team and are expected to mentor more junior engineers; they may even coordinate the work of more junior engineers on a large project. They devise useful metrics for ensuring their services are meeting customer demand and having an impact, and iterate to deliver and improve on those metrics in an agile fashion. A successful Sr. DataOps Engineer is developing expertise with the types of data and tools leveraged in the biomedical and scientific data engineering space, and has the following skills and experience (with significant depth in one or more of these areas): Demonstrable experience deploying robust, modularised/container-based solutions to production (ideally GCP) and leveraging the Cloud Native Computing Foundation (CNCF) ecosystem. Significant depth in DevOps principles and tools (e.g. GitOps, Jenkins, CircleCI, Azure DevOps, ...), and how to integrate these tools with other productivity tools (e.g. Jira, Slack, Microsoft Teams) to build a comprehensive workflow. Programming in Python, Scala or Go. Embedding agile software engineering (task/issue management, testing, documentation, software development lifecycle, source control, ...). Leveraging major cloud providers, both via Kubernetes and via vendor-specific services. Authentication and authorization flows and associated technologies (e.g. OAuth2 + JWT). Common distributed data tools (e.g. Spark, Hive). The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one. Why you? Basic Qualifications: Bachelor's degree in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, Software Engineering, etc., plus 7 years of job experience; or a Master's degree with 5 years of experience; or a PhD plus 3 years of job experience. Deep experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps / ...). Excellent with common distributed data tools in a production setting (Spark, Kafka, etc.). Experience with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, optimizing against self-describing formats such as ORC or Parquet, etc.). Experience with search/indexing systems (e.g. Elasticsearch). Deep expertise with agile development in Python, Scala, Go, and/or C++. Experience building reusable components on top of the CNCF ecosystem, including Kubernetes. A metrics-first mindset. Experience mentoring junior engineers into deep technical expertise. Preferred Qualifications: If you have the following characteristics, it would be a plus: Experience with agile software development. Experience building and designing a DevOps-first way of working. Demonstrated experience building reusable components on top of the CNCF ecosystem, including Kubernetes (or a similar ecosystem). Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture.
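The posting above leans heavily on declarative, configuration-driven data engineering ("data as a configurable resource", templated ingestion/transformation/publishing components). As a purely illustrative sketch of that idea - none of these step names, config keys, or the run_pipeline helper come from GSK; they are invented for demonstration - a templated component might map a declarative spec onto registered steps:

```python
# Illustrative only: a minimal, config-driven pipeline runner in the spirit
# of "data as a configurable resource". Step names and the config schema
# are hypothetical, not GSK's.
from typing import Callable, Dict

def ingest(ctx: dict) -> dict:
    # Stand-in for pulling raw files from object storage into staging.
    ctx["records"] = [{"id": 1, "value": "raw"}]
    return ctx

def transform(ctx: dict) -> dict:
    # Stand-in for schema validation / standardisation.
    ctx["records"] = [{**r, "value": r["value"].upper()} for r in ctx["records"]]
    return ctx

def publish(ctx: dict) -> dict:
    # Stand-in for writing to a governed store and emitting audit metadata.
    print(f"{ctx['pipeline']}: published {len(ctx['records'])} records")
    return ctx

# Declarative components are registered once, then composed via config.
REGISTRY: Dict[str, Callable[[dict], dict]] = {
    "ingest": ingest,
    "transform": transform,
    "publish": publish,
}

def run_pipeline(config: dict) -> dict:
    """Execute the steps named in a declarative config, in order."""
    ctx: dict = {"pipeline": config["name"]}
    for step in config["steps"]:
        ctx = REGISTRY[step](ctx)
    return ctx

if __name__ == "__main__":
    run_pipeline({"name": "demo-flow", "steps": ["ingest", "transform", "publish"]})
```

The design point is that new flows are described, not coded: teams compose vetted, governed steps via configuration, which is what makes the "templated project" model scale across many data engineering teams.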
Yolk Recruitment are recruiting for a Junior - Mid level Data Engineer to join an exciting new project with a long-established and successful financial company. You will be working with a small team of skilled data engineers to build a brand-new cloud-based platform using Azure and Snowflake. This is an exciting opportunity for a data engineer to learn new skills in Snowflake and Azure. You will be part of a new greenfield platform where your knowledge and ideas will influence the development of the project. Responsibilities: Use a range of modelling techniques and technical skills to solve data engineering tasks that arise in the brand-new cloud platform. Support the translation of data into valuable insights that inform decisions, alongside resources from the architecture, governance, analytics and business product owner communities. Work with the Platform Lead and Lead Data Engineer to develop the technical vision for the data platform, and support the development of an effective and fully functioning DataOps capability. Idea generation - you will have the chance to influence the direction of this brand-new cloud platform (an excellent learning opportunity). Core Skills: Strong SQL experience. Strong Python experience. Experience working with ETL pipelines. Experience working with R. Some experience with data reporting tools (Power BI, Tableau or Looker). Desirable Skills: Any previous experience working within a cloud-based environment (Azure, Snowflake, AWS, GCP or IBM).
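For a flavour of the SQL/Python/ETL core skills listed above, here is a minimal, illustrative ETL step. Everything here is invented for demonstration - the file name, table name and cleaning rule are hypothetical, and SQLite stands in for the client's actual Snowflake target:

```python
# Illustrative ETL step: extract a CSV, clean it, load it into a SQL table.
# Names are hypothetical; a real pipeline here would target Snowflake.
import sqlite3
import pandas as pd

def etl(csv_path: str, conn: sqlite3.Connection) -> int:
    df = pd.read_csv(csv_path)                       # extract
    df = df.dropna(subset=["customer_id"])           # transform: drop bad rows
    df["loaded_at"] = pd.Timestamp.now(tz="UTC")     # add load metadata
    df.to_sql("customers_clean", conn, if_exists="append", index=False)  # load
    return len(df)

if __name__ == "__main__":
    # A tiny fabricated input, purely for demonstration.
    pd.DataFrame({"customer_id": [1, None], "spend": [10.0, 5.0]}).to_csv(
        "demo.csv", index=False
    )
    with sqlite3.connect(":memory:") as conn:
        print(etl("demo.csv", conn), "rows loaded")
```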
22/09/2022
Full time
At NTT DATA we are exceptional together, and we believe in growing by helping others grow - clients, partners, and employees. In order to achieve our goals, our diverse and talented team leads by example. As a Leader, you will uphold the essence of the company, be a focal point for your team, colleagues, and clients, and be involved in decisions to ensure the long-term sustainability of the organisation. We support each other to be who we want to be and to work how we work best. This is how we bring innovation, and how we build a better future for our people, our business, and our society. We provide a safe environment in which all of us can be ourselves and reach our full potential. Our success comes from our people, regardless of ethnicity, cultural background, gender, nationality, sexual orientation, or anything else that can be used to differentiate people, because we are exceptional together. As part of our continuous growth, we are looking for an Azure Data Architect to join our team in London. • Defining the data architecture as part of a data solution (e.g. ETL, data integration and data migration) or a wider programme, such as a digital transformation. • Setting the direction and acting as technical lead for a technical team delivering Azure platform solutions and, when required, on-prem data solutions. • Building conceptual and physical data models, optimised for operational or analytical use cases. • Estimating and providing Rough-Order-Of-Magnitude (ROM) sizings for change. • Documentation of solutions (e.g. data models, configurations, and setup). • Providing quality assurance and reviewing the development. • Providing expertise and coaching to less experienced members of the Data Practice in Microsoft Azure Data Services. • Contributing to methods and standards, and keeping abreast of technical advances in data engineering for the Data Practice. • Demonstrating how Azure Data Services can be exploited in the success of cloud migration and digital transformation opportunities. • Architecting data platforms on the Microsoft Azure platform. • Designing conceptual and physical data architectures. • Producing high-level blueprints and reference architectures. • Defining outline data models and data entity relationships. Solid skills in: • Designing modern cloud-based Data Platform solutions, with a sound understanding of architectural concepts (e.g. Platform-as-a-Service cloud computing and distributed parallel architecture). • Conceptual, logical and physical data modelling, and a clear understanding of defining data architecture frameworks, standards and principles. • Mapping data from source to target and establishing current and future states based on business data requirements. • Data Warehousing, ETL / Data Processing, Data Migration. • Working knowledge of data modelling tools (Erwin Data Modeler, etc.). • Master Data Management (MDM) systems. Awareness of / nice to have: • Developing data platforms using a wide range of Azure services, including Azure Cosmos DB, Azure SQL Database, Azure Data Lake Storage, Azure Data Factory, Azure Blob Storage, HDInsight, and Machine Learning Studio. • Data visualisation techniques. • The Microsoft BI stack and more advanced warehousing options (e.g. Hadoop, etc.). • Designing analytics and machine learning solutions using the Azure Databricks platform. • Data cleansing, data quality and data governance tools and methods. • DevOps/DataOps methods in the development of data solutions.
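To make the conceptual-to-physical modelling bullet concrete, here is a hedged sketch of expressing a simple "Customer places Order" conceptual relationship as a physical model. The entities, columns and star-schema table names are invented for illustration, and an in-memory SQLite engine stands in for the Azure SQL Database a real engagement would target:

```python
# Illustrative: a conceptual "Customer places Order" relationship expressed
# as a physical model (dimension + fact). Entities/columns are invented.
from sqlalchemy import ForeignKey, Numeric, String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class Customer(Base):
    __tablename__ = "dim_customer"
    customer_id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(100))

class Order(Base):
    __tablename__ = "fact_order"
    order_id: Mapped[int] = mapped_column(primary_key=True)
    customer_id: Mapped[int] = mapped_column(ForeignKey("dim_customer.customer_id"))
    amount: Mapped[float] = mapped_column(Numeric(10, 2))

if __name__ == "__main__":
    engine = create_engine("sqlite://")  # stand-in for Azure SQL Database
    Base.metadata.create_all(engine)     # emits the physical DDL
```

The same logical model could equally be physicalised differently for an analytical store (e.g. denormalised for columnar scans), which is the trade-off the role is asked to weigh.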
Our employees' safety is the priority, so currently you will be working from the comfort of your own home most of the time. In the future, when we can spend more time in our offices in Oxford Circus, Bank or Epworth House, you will be able to enjoy London's vibrant dynamics! .... And another thing: you will be joining a close-knit team that is supportive and approachable. This means that as a new joiner, you will always have someone available to offer help and guidance. How to apply: We appreciate that you may not have an up-to-date CV, so please just send what you have and let's organise a chat!
22/09/2022
Full time
Site Name: USA - Pennsylvania - Upper Providence, UK - Hertfordshire - Stevenage, UK - London - Brentford, USA - Pennsylvania - Philadelphia Posted Date: Oct The mission of the Data Science and Data Engineering (DSDE) organization within GSK Pharmaceuticals R&D is to get the right data, to the right people, at the right time. The Data Framework and Ops organization ensures we can do this efficiently, reliably, transparently, and at scale through the creation of a leading-edge, cloud-native data services framework. We focus heavily on developer experience, on strong, semantic abstractions for the data ecosystem, on professional operations and aggressive automation, and on transparency of operations and cost. We are looking for a skilled Data Ops Engineer II to join our growing team. The Data Ops team accelerates biomedical and scientific data product development and ensures consistent, professional-grade operations for the Data Science and Engineering organization by building templated projects (code repository plus DevOps pipelines) for various Data Science/Data Engineering architecture patterns in the challenging biomedical data space. A Data Ops Engineer II knows the metrics desired for their tools and services and iterates to deliver and improve on those metrics in an agile fashion. A Data Ops Engineer II is a highly technical individual contributor, building modern, cloud-native systems for standardizing and templatizing data engineering, such as: Standardized physical storage and search/indexing systems. Schema management (data + metadata + versioning + provenance + governance). API semantics and ontology management. Standard API architectures. Kafka + standard streaming semantics. Standard components for publishing data to file-based, relational, and other sorts of data stores. Metadata systems. Tooling for QA/evaluation. Audit as a Service. Additional responsibilities also include: Given a well-specified data framework problem, implement end-to-end solutions using appropriate programming languages (e.g. Python, Scala, or Go), open-source tools (e.g. Spark, Elasticsearch, ...), and cloud vendor-provided tools (e.g. Amazon S3). Leverage tools provided by Tech (e.g. infrastructure as code, CloudOps, DevOps, logging/alerting, ...) in the delivery of solutions. Write proper documentation in code as well as in wikis/other documentation systems. Write fantastic code along with the proper unit, functional, and integration tests for code and services to ensure quality. Stay up to date with developments in the open-source community around data engineering, data science, and similar tooling. The DSDE team is built on the principles of ownership, accountability, continuous development, and collaboration. We hire for the long term, and we're motivated to make this a great place to work. Our leaders will be committed to your career and development from day one. Why you? Basic Qualifications: We are looking for professionals with these required skills to achieve our goals: Master's in Computer Science with a focus in Data Engineering, DataOps, DevOps, MLOps, or Software Engineering and 2+ years of experience, OR a PhD in Computer Science. Demonstrated experience with software engineering (testing, documentation, software development lifecycle, source control, ...). Experience with DevOps tools and concepts (e.g. Jira, GitLab / Jenkins / CircleCI / Azure DevOps / ...). Experience with common distributed data tools in a production setting (Spark, Kafka, etc.). Experience with the basics of search engines/indexing (e.g. Elasticsearch, Lucene). Demonstrated experience in writing Python, Scala, Go, and/or C++. Preferred Qualifications: If you have the following characteristics, it would be a plus: Comfort with specialized data architecture (e.g. optimizing physical layout for access patterns, including bloom filters, optimizing against self-describing formats such as ORC or Parquet, etc.). Experience with the CNCF ecosystem/Kubernetes. Comfort with search/indexing systems (e.g. Elasticsearch). Experience with schema tools/schema management (Avro, Protobuf). Why GSK? Our values and expectations are at the heart of everything we do and form an important part of our culture. These include Patient focus, Transparency, Respect and Integrity, along with Courage, Accountability, Development and Teamwork. As GSK focuses on our values and expectations and a culture of innovation, performance, and trust, the successful candidate will demonstrate the following capabilities: Operating at pace and agile decision-making - using evidence and applying judgement to balance pace, rigour and risk. Committed to delivering high-quality results, overcoming challenges, focusing on what matters, and executing. Continuously looking for opportunities to learn, build skills and share learning. Sustaining energy and wellbeing. Building strong relationships and collaboration, and holding honest and open conversations. Budgeting and cost consciousness. If you require an accommodation or other assistance to apply for a job at GSK, please contact the GSK Service Centre. GSK is an Equal Opportunity Employer and, in the US, we adhere to Affirmative Action principles. This ensures that all qualified applicants will receive equal consideration for employment without regard to race, color, national origin, religion, sex, pregnancy, marital status, sexual orientation, gender identity/expression, age, disability, genetic information, military service, covered/protected veteran status or any other federal, state or local protected class. At GSK, the health and safety of our employees are of paramount importance. As a science-led healthcare company on a mission to get ahead of disease together, we believe that supporting vaccination against COVID-19 is the single best thing we can do in the US to ensure the health and safety of our employees, complementary workers, workplaces, customers, consumers, communities, and the patients we serve. GSK has made the decision to require all US employees to be fully vaccinated against COVID-19, where allowed by state or local law and where vaccine supply is readily available. The only exceptions to this requirement are employees who are approved for an accommodation for religious, medical or disability-related reasons. Important notice to employment businesses/agencies: GSK does not accept referrals from employment businesses and/or employment agencies in respect of the vacancies posted on this site. All employment businesses/agencies are required to contact GSK's commercial and general procurement/human resources department to obtain prior written authorization before referring any candidates to GSK. The obtaining of prior written authorization is a condition precedent to any agreement (verbal or written) between the employment business/agency and GSK. In the absence of such written authorization being obtained, any actions undertaken by the employment business/agency shall be deemed to have been performed without the consent or contractual agreement of GSK.
GSK shall therefore not be liable for any fees arising from such actions, or any fees arising from any referrals by employment businesses/agencies in respect of the vacancies posted on this site. Please note that if you are a US Licensed Healthcare Professional, or a Healthcare Professional as defined by the laws of the state issuing your license, GSK may be required to capture and report expenses GSK incurs on your behalf in the event you are afforded an interview for employment. This capture of applicable transfers of value is necessary to ensure GSK's compliance with all federal and state US transparency requirements. For more information, please visit GSK's Transparency Reporting For the Record site.
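The preferred qualifications in the posting above mention "self-describing formats such as ORC or Parquet" and optimizing physical layout for access patterns. As a minimal, hedged illustration of what "self-describing" means in practice - assuming the pyarrow library; the file and column names are invented - the schema and per-column statistics travel with the file, which is what lets query engines skip data without scanning it:

```python
# Illustrative: Parquet files carry their own schema and row-group column
# statistics, which engines use for predicate pushdown / data skipping.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"sample_id": [1, 2, 3], "assay": ["a", "b", "a"]})
pq.write_table(table, "demo.parquet")

# The schema is readable without touching the data pages themselves.
print(pq.read_schema("demo.parquet"))

# Per-row-group min/max statistics support skipping whole chunks.
meta = pq.ParquetFile("demo.parquet").metadata
print(meta.row_group(0).column(0).statistics)
```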
21/09/2022
Full time
Team Overview Data is central to Ofcom's work. We use data from a wide range of sources to understand the dynamics of the sectors we regulate and to provide key insights to inform policy decisions. In addition to existing regulatory responsibilities, there will be exciting opportunities to contribute to Ofcom's new duties in relation to the regulation of Online Harms. It is expected that this new area of work will expand the scope of data-driven activities at Ofcom in terms of the variety and volume of data, as well as the range of analysis we do. To ensure that Ofcom has the appropriate data capabilities to undertake this analysis effectively, ICT has created a new role for a Data Solutions Architect. Purpose of the Role The Data Solutions Architect works closely with ICT and the Data Innovation Hub. You will architect, design, implement and oversee the operations of data solutions that empower data professionals to deliver their work efficiently and effectively. Candidates will exhibit critical thinking skills and the ability to synthesize complex problems, and will have relevant skills and experience for enabling the transformation of data to create solutions that add value to a myriad of business requirements. You must have a deep understanding of the full data lifecycle and the role that high-quality data plays across applications, machine learning, business analytics, and reporting. You will lead the design and development of solution architectures in response to business requirements. This includes identifying and evaluating alternative architectures and their trade-offs in cost, performance and scalability, and ensuring that the relevant technical strategies, policies, standards and practices (including security) are applied correctly. The end-to-end solution will be fit for purpose - i.e. it will meet the needs of the business and the agreed requirements, and support the strategic direction of Ofcom. You will work with IT teams, business analysts and data analytics teams to understand data consumers' needs and develop solutions. By maintaining your knowledge of emerging trends in data usage, tools and analysis techniques, you will support our ongoing development activities and continually promote data innovation as a means to achieve the business outcomes for specific Groups and for Ofcom. You will need to be self-motivated, an effective communicator, and have a collaborative delivery approach. You will work in a collaborative cross-functional environment and interact with the full spectrum of colleagues (data engineers, data analysts, data scientists, operational support and policy makers), and you will need to inform and influence senior managers. Requirements of the Role Build strong relationships with colleagues across the business, understanding their motivations behind projects, and own technical activities to translate business requirements (both functional and non-functional) into a solution, ensuring the required business value is delivered. Foster a customer-centric approach to ensure delivery of business value, and an iterative approach that responds to feedback and changing needs. Perform deep dives into technical areas to solve specific solution or design challenges, using trials or POCs to prove or discount an approach and to critique your own design. You will be responsible for ensuring that the solutions you help deliver form an integral part of the ICT estate and align with the wider reference architecture and domain roadmaps.
Manage stakeholder expectations and be flexible, working on many different projects and topics at the same time. Manage proactive and reactive communication. Facilitate difficult discussions within the team, or with diverse senior stakeholders and external/third parties as necessary. Provide documentation of solutions detailing the business, data, application and technology layers. Work with Data Engineers to define data pipelines and data lakes, covering ingestion, ETL or ELT, and the cataloguing of data. Take overall responsibility for planning effective data storage, cost, security, quality, sharing, availability, retention and publishing within the organisation. Develop analytics policy, standards and guidelines. Ensure successful transitions of solutions into production, ensuring production support have the necessary knowledge and documentation to support the service. Skills, knowledge and experience Robust data and technical/solutions architecture skills - sets direction for, and possesses a deep understanding of, architecture and strategies which integrate with industry trends (e.g. TOGAF). Hands-on experience with analytical tools and languages such as Python, R, SQL, Azure Data Factory (and SSIS), Databricks, Power BI, Git, etc. Experience of infrastructure: cloud-based technologies used for storage, data lakes, data warehouses, data streaming, databases, and ETL/ELT/transformation. Experience of DevOps/DataOps methods in the development of data solutions, to ensure pipelines and processes can be automated and controlled. Experience with cloud-based data and analytical initiatives utilising Azure, AWS, Google Cloud or similar cloud services. Experience of working closely with data professionals (e.g. Data Scientists and Data Analysts) to understand their needs. Experience of implementing statistical, artificial intelligence, machine learning and deep learning applications. Experience with integrations (e.g. via APIs) with external vendors to share data between organisations. Experience of working with external technology suppliers and service providers to deliver business solutions. SFIA Skills: Enterprise and business architecture (STPL) - Level 5; Solution architecture (ARCH) - Level 5; Requirements definition and management (REQM) - Level 5; Database design (DBDS) - Level 5; Analytics (INAN) - Level 4; Emerging technology monitoring (EMRG) - Level 4; Relationship management (RLMT) - Level 5.
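The role above pairs pipeline definition with the cataloguing of data so that datasets stay discoverable and governed. As a purely illustrative sketch of that pairing - the catalogue structure, field names and dataset are all invented, and a real build would use managed tooling such as Azure Data Factory/Databricks with a governed catalogue rather than an in-process list:

```python
# Illustrative: record a catalogue entry alongside each pipeline load so
# every dataset has an owner, a location and load metadata. Names invented.
import json
from datetime import datetime, timezone

CATALOGUE: list = []

def register_dataset(name: str, path: str, owner: str, row_count: int) -> dict:
    """Append a minimal governance record for a freshly loaded dataset."""
    entry = {
        "name": name,
        "path": path,
        "owner": owner,
        "row_count": row_count,
        "loaded_at": datetime.now(timezone.utc).isoformat(),
    }
    CATALOGUE.append(entry)
    return entry

if __name__ == "__main__":
    register_dataset("spectrum_usage", "lake/raw/spectrum/2021/11/", "data-hub", 120_000)
    print(json.dumps(CATALOGUE, indent=2))
```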
04/11/2021
Full time
Team Overview Data is central to Ofcom's work. We use data from a wide range of sources to understand the dynamics of the sectors we regulate and to provide key insights to inform policy decisions. In addition to existing regulatory responsibilities, there will be exciting opportunities to contribute to Ofcom's new duties in relation to the regulation of Online Harms. It is expected that this new area of work will expand the scope of data-driven activities at Ofcom in terms of the variety and volume of data, as well as the range of analysis we do. To ensure that Ofcom has the appropriate data capabilities to undertake the analysis effectively, ICT has created a new role for an Data Solutions Architect Purpose of the Role The Data Solutions Architect works closely with ICT and the Data Innovation Hub. You will architect, design, implement and oversee the operations of data solutions that empower data professionals to efficiently and effectively deliver their work. Candidates will exhibit critical thinking skills, the ability to synthesize complex problems, and have relevant skills and experience for enabling the transformation of data to create solutions that add value to a myriad of business requirements. You must have a deep understanding of the full data lifecycle and the role that high-quality data plays across applications, machine learning, business analytics, and reporting. You will lead the design and development of solution architectures in response to business requirements. This includes identifying and evaluating alternative architectures, their trade-offs in cost, performance and scalability. And ensuring that the relevant technical strategies, policies, standards and practices (including security) are applied correctly. The end to end solution will be fit for purpose - i.e. meet the needs of business, the agreed requirements, and support the strategic direction of Ofcom. Work with IT teams, business analysts and data analytics teams to understand data consumers' needs and develop solutions By maintaining your knowledge of emerging trends in data usage, tools and analysis techniques you will support our on-going development activities and continually promote data innovation as a means to achieve the business outcomes for specific Groups, and Ofcom. You will need to be self-motivated, an effective communicator and have a collaborative delivery approach.You will work in a collaborative cross-functional environment and interact with the full spectrum of colleagues (data engineers, data analysts, data scientists, operational support and policy makers), and you will need to inform and influence senior managers. Requirements of the Role Build strong relationships with colleagues across the business, understanding their motivations behind projects and own technical activities to translate business requirements (both functional and non-functional) into a solution. Ensuring the required business value is delivered. Fostering a customer centric approach to ensure delivery of business value and an iterative approach that responds to feedback and changing needs. Perform deep dives into technical areas to solve a specific solution or design challenges. Using trials or POCs to prove or discount an approach, to critic your own design. You will be responsible for ensuring that the solutions you help deliver form an integral part of the ICT estate and align with the wider reference architecture and domain roadmaps. 
Manage stakeholder expectations and be flexible, working on many different projects and topics at the same time period. Manage proactive and reactive communication. Facilitate difficult discussions within the team or with diverse senior stakeholders and external / 3rd parties as necessary. Provide documentation of solutions detailing the business, data, application and technology layers. Work with Data Engineers to define data pipelines and data lakes, covering the ingression, ETL or ELT, and the cataloguing of data. Takes overall responsibility for planning effective data storage, cost, security, quality, sharing, availability, retention and publishing within the organisation. Develops analytics policy, standards and guidelines Ensuring successful transitions for solutions into production ensuring production support have the necessary knowledge and documentation to support the service. Skills, knowledge and experience Robust Data and Technical/Solutions Architecture skills - sets direction for and possesses a deep understanding of architecture and strategies which integrate with industry trends e.g. TOGAF Hands-on experience with analytical tools and languages such as: Python, R, SQL, Azure Data Factory (and SSIS), Databricks, Power BI, Git etc. Experience of infrastructure: cloud-based technologies used for storage, data lakes, data warehouses, data streaming, Databases, ETL / ELT / Transformation. Experience of DevOps/ DataOps methods in development of data solutions to ensure pipelines and processes can be automated and controlled. Experience with Cloud based data and analytical initiatives utilising Azure, AWS, Google Cloud or similar cloud services Experience of working closely with Data Professionals (E.g. Data Scientists and Data Analysts) to understanding their needs. Experience of implementating of statistical, Artificial Intelligence, Machine Learning and Deep learning applications. Experience with integrations (e.g. via APIs) with external vendors to share data between organizations Experience of working with external technology supplier and service providers to deliver business solutions SFIA Skill Enterprise and business architecture STPL - Level 5 Solution architecture ARCH - Level 5 Requirements definition and management REQM- Level 5 Database design DBDS- Level 5 Analytics INAN- Level 4 Emerging Technology Monitoring (EMRG)- Level 4 Relationship Management RLMT- Level 5
Team Overview Data is central to Ofcom's work. We use data from a wide range of sources to understand the dynamics of the sectors we regulate and to provide key insights to inform policy decisions. In addition to existing regulatory responsibilities, there will be exciting opportunities to contribute to Ofcom's new duties in relation to the regulation of Online Harms. It is expected that this new area of work will expand the scope of data-driven activities at Ofcom in terms of the variety and volume of data, as well as the range of analysis we do. To ensure that Ofcom has the appropriate data capabilities to undertake the analysis effectively, ICT has created a new role for an Data Solutions Architect Purpose of the Role The Data Solutions Architect works closely with ICT and the Data Innovation Hub. You will architect, design, implement and oversee the operations of data solutions that empower data professionals to efficiently and effectively deliver their work. Candidates will exhibit critical thinking skills, the ability to synthesize complex problems, and have relevant skills and experience for enabling the transformation of data to create solutions that add value to a myriad of business requirements. You must have a deep understanding of the full data lifecycle and the role that high-quality data plays across applications, machine learning, business analytics, and reporting. You will lead the design and development of solution architectures in response to business requirements. This includes identifying and evaluating alternative architectures, their trade-offs in cost, performance and scalability. And ensuring that the relevant technical strategies, policies, standards and practices (including security) are applied correctly. The end to end solution will be fit for purpose - i.e. meet the needs of business, the agreed requirements, and support the strategic direction of Ofcom. Work with IT teams, business analysts and data analytics teams to understand data consumers' needs and develop solutions By maintaining your knowledge of emerging trends in data usage, tools and analysis techniques you will support our on-going development activities and continually promote data innovation as a means to achieve the business outcomes for specific Groups, and Ofcom. You will need to be self-motivated, an effective communicator and have a collaborative delivery approach.You will work in a collaborative cross-functional environment and interact with the full spectrum of colleagues (data engineers, data analysts, data scientists, operational support and policy makers), and you will need to inform and influence senior managers. Requirements of the Role Build strong relationships with colleagues across the business, understanding their motivations behind projects and own technical activities to translate business requirements (both functional and non-functional) into a solution. Ensuring the required business value is delivered. Fostering a customer centric approach to ensure delivery of business value and an iterative approach that responds to feedback and changing needs. Perform deep dives into technical areas to solve a specific solution or design challenges. Using trials or POCs to prove or discount an approach, to critic your own design. You will be responsible for ensuring that the solutions you help deliver form an integral part of the ICT estate and align with the wider reference architecture and domain roadmaps. 
Manage stakeholder expectations and be flexible, working on many different projects and topics at the same time period. Manage proactive and reactive communication. Facilitate difficult discussions within the team or with diverse senior stakeholders and external / 3rd parties as necessary. Provide documentation of solutions detailing the business, data, application and technology layers. Work with Data Engineers to define data pipelines and data lakes, covering the ingression, ETL or ELT, and the cataloguing of data. Takes overall responsibility for planning effective data storage, cost, security, quality, sharing, availability, retention and publishing within the organisation. Develops analytics policy, standards and guidelines Ensuring successful transitions for solutions into production ensuring production support have the necessary knowledge and documentation to support the service. Skills, knowledge and experience Robust Data and Technical/Solutions Architecture skills - sets direction for and possesses a deep understanding of architecture and strategies which integrate with industry trends e.g. TOGAF Hands-on experience with analytical tools and languages such as: Python, R, SQL, Azure Data Factory (and SSIS), Databricks, Power BI, Git etc. Experience of infrastructure: cloud-based technologies used for storage, data lakes, data warehouses, data streaming, Databases, ETL / ELT / Transformation. Experience of DevOps/ DataOps methods in development of data solutions to ensure pipelines and processes can be automated and controlled. Experience with Cloud based data and analytical initiatives utilising Azure, AWS, Google Cloud or similar cloud services Experience of working closely with Data Professionals (E.g. Data Scientists and Data Analysts) to understanding their needs. Experience of implementating of statistical, Artificial Intelligence, Machine Learning and Deep learning applications. Experience with integrations (e.g. via APIs) with external vendors to share data between organizations Experience of working with external technology supplier and service providers to deliver business solutions SFIA Skill Enterprise and business architecture STPL - Level 5 Solution architecture ARCH - Level 5 Requirements definition and management REQM- Level 5 Database design DBDS- Level 5 Analytics INAN- Level 4 Emerging Technology Monitoring (EMRG)- Level 4 Relationship Management RLMT- Level 5
04/11/2021
Full time
Team Overview Data is central to Ofcom's work. We use data from a wide range of sources to understand the dynamics of the sectors we regulate and to provide key insights to inform policy decisions. In addition to existing regulatory responsibilities, there will be exciting opportunities to contribute to Ofcom's new duties in relation to the regulation of Online Harms. It is expected that this new area of work will expand the scope of data-driven activities at Ofcom in terms of the variety and volume of data, as well as the range of analysis we do. To ensure that Ofcom has the appropriate data capabilities to undertake the analysis effectively, ICT has created a new role for an Data Solutions Architect Purpose of the Role The Data Solutions Architect works closely with ICT and the Data Innovation Hub. You will architect, design, implement and oversee the operations of data solutions that empower data professionals to efficiently and effectively deliver their work. Candidates will exhibit critical thinking skills, the ability to synthesize complex problems, and have relevant skills and experience for enabling the transformation of data to create solutions that add value to a myriad of business requirements. You must have a deep understanding of the full data lifecycle and the role that high-quality data plays across applications, machine learning, business analytics, and reporting. You will lead the design and development of solution architectures in response to business requirements. This includes identifying and evaluating alternative architectures, their trade-offs in cost, performance and scalability. And ensuring that the relevant technical strategies, policies, standards and practices (including security) are applied correctly. The end to end solution will be fit for purpose - i.e. meet the needs of business, the agreed requirements, and support the strategic direction of Ofcom. Work with IT teams, business analysts and data analytics teams to understand data consumers' needs and develop solutions By maintaining your knowledge of emerging trends in data usage, tools and analysis techniques you will support our on-going development activities and continually promote data innovation as a means to achieve the business outcomes for specific Groups, and Ofcom. You will need to be self-motivated, an effective communicator and have a collaborative delivery approach.You will work in a collaborative cross-functional environment and interact with the full spectrum of colleagues (data engineers, data analysts, data scientists, operational support and policy makers), and you will need to inform and influence senior managers. Requirements of the Role Build strong relationships with colleagues across the business, understanding their motivations behind projects and own technical activities to translate business requirements (both functional and non-functional) into a solution. Ensuring the required business value is delivered. Fostering a customer centric approach to ensure delivery of business value and an iterative approach that responds to feedback and changing needs. Perform deep dives into technical areas to solve a specific solution or design challenges. Using trials or POCs to prove or discount an approach, to critic your own design. You will be responsible for ensuring that the solutions you help deliver form an integral part of the ICT estate and align with the wider reference architecture and domain roadmaps. 
Manage stakeholder expectations and be flexible, working on many different projects and topics at the same time period. Manage proactive and reactive communication. Facilitate difficult discussions within the team or with diverse senior stakeholders and external / 3rd parties as necessary. Provide documentation of solutions detailing the business, data, application and technology layers. Work with Data Engineers to define data pipelines and data lakes, covering the ingression, ETL or ELT, and the cataloguing of data. Takes overall responsibility for planning effective data storage, cost, security, quality, sharing, availability, retention and publishing within the organisation. Develops analytics policy, standards and guidelines Ensuring successful transitions for solutions into production ensuring production support have the necessary knowledge and documentation to support the service. Skills, knowledge and experience Robust Data and Technical/Solutions Architecture skills - sets direction for and possesses a deep understanding of architecture and strategies which integrate with industry trends e.g. TOGAF Hands-on experience with analytical tools and languages such as: Python, R, SQL, Azure Data Factory (and SSIS), Databricks, Power BI, Git etc. Experience of infrastructure: cloud-based technologies used for storage, data lakes, data warehouses, data streaming, Databases, ETL / ELT / Transformation. Experience of DevOps/ DataOps methods in development of data solutions to ensure pipelines and processes can be automated and controlled. Experience with Cloud based data and analytical initiatives utilising Azure, AWS, Google Cloud or similar cloud services Experience of working closely with Data Professionals (E.g. Data Scientists and Data Analysts) to understanding their needs. Experience of implementating of statistical, Artificial Intelligence, Machine Learning and Deep learning applications. Experience with integrations (e.g. via APIs) with external vendors to share data between organizations Experience of working with external technology supplier and service providers to deliver business solutions SFIA Skill Enterprise and business architecture STPL - Level 5 Solution architecture ARCH - Level 5 Requirements definition and management REQM- Level 5 Database design DBDS- Level 5 Analytics INAN- Level 4 Emerging Technology Monitoring (EMRG)- Level 4 Relationship Management RLMT- Level 5
Team Overview Data is central to Ofcom's work. We use data from a wide range of sources to understand the dynamics of the sectors we regulate and to provide key insights to inform policy decisions. In addition to existing regulatory responsibilities, there will be exciting opportunities to contribute to Ofcom's new duties in relation to the regulation of Online Harms. It is expected that this new area of work will expand the scope of data-driven activities at Ofcom in terms of the variety and volume of data, as well as the range of analysis we do. To ensure that Ofcom has the appropriate data capabilities to undertake the analysis effectively, ICT has created a new role for an Data Solutions Architect Purpose of the Role The Data Solutions Architect works closely with ICT and the Data Innovation Hub. You will architect, design, implement and oversee the operations of data solutions that empower data professionals to efficiently and effectively deliver their work. Candidates will exhibit critical thinking skills, the ability to synthesize complex problems, and have relevant skills and experience for enabling the transformation of data to create solutions that add value to a myriad of business requirements. You must have a deep understanding of the full data lifecycle and the role that high-quality data plays across applications, machine learning, business analytics, and reporting. You will lead the design and development of solution architectures in response to business requirements. This includes identifying and evaluating alternative architectures, their trade-offs in cost, performance and scalability. And ensuring that the relevant technical strategies, policies, standards and practices (including security) are applied correctly. The end to end solution will be fit for purpose - i.e. meet the needs of business, the agreed requirements, and support the strategic direction of Ofcom. Work with IT teams, business analysts and data analytics teams to understand data consumers' needs and develop solutions By maintaining your knowledge of emerging trends in data usage, tools and analysis techniques you will support our on-going development activities and continually promote data innovation as a means to achieve the business outcomes for specific Groups, and Ofcom. You will need to be self-motivated, an effective communicator and have a collaborative delivery approach.You will work in a collaborative cross-functional environment and interact with the full spectrum of colleagues (data engineers, data analysts, data scientists, operational support and policy makers), and you will need to inform and influence senior managers. Requirements of the Role Build strong relationships with colleagues across the business, understanding their motivations behind projects and own technical activities to translate business requirements (both functional and non-functional) into a solution. Ensuring the required business value is delivered. Fostering a customer centric approach to ensure delivery of business value and an iterative approach that responds to feedback and changing needs. Perform deep dives into technical areas to solve a specific solution or design challenges. Using trials or POCs to prove or discount an approach, to critic your own design. You will be responsible for ensuring that the solutions you help deliver form an integral part of the ICT estate and align with the wider reference architecture and domain roadmaps. 
Manage stakeholder expectations and be flexible, working on many different projects and topics at the same time. Manage proactive and reactive communication. Facilitate difficult discussions within the team, or with diverse senior stakeholders and external third parties, as necessary. Provide documentation of solutions detailing the business, data, application and technology layers. Work with Data Engineers to define data pipelines and data lakes, covering the ingestion, ETL or ELT, and cataloguing of data. Take overall responsibility for planning effective data storage, cost, security, quality, sharing, availability, retention and publishing within the organisation. Develop analytics policy, standards and guidelines. Ensure successful transition of solutions into production, making sure production support have the necessary knowledge and documentation to support the service.

Skills, knowledge and experience

- Robust data and technical/solutions architecture skills: sets direction for, and possesses a deep understanding of, architectures and strategies that align with industry frameworks such as TOGAF
- Hands-on experience with analytical tools and languages such as Python, R, SQL, Azure Data Factory (and SSIS), Databricks, Power BI and Git
- Experience of infrastructure: cloud-based technologies used for storage, data lakes, data warehouses, data streaming, databases, and ETL/ELT/transformation
- Experience of DevOps/DataOps methods in the development of data solutions, ensuring pipelines and processes can be automated and controlled
- Experience with cloud-based data and analytics initiatives using Azure, AWS, Google Cloud or similar services
- Experience of working closely with data professionals (e.g. data scientists and data analysts) to understand their needs
- Experience of implementing statistical, artificial intelligence, machine learning and deep learning applications
- Experience with integrations (e.g. via APIs) with external vendors to share data between organisations
- Experience of working with external technology suppliers and service providers to deliver business solutions

SFIA skills:

- Enterprise and business architecture (STPL) - Level 5
- Solution architecture (ARCH) - Level 5
- Requirements definition and management (REQM) - Level 5
- Database design (DBDS) - Level 5
- Analytics (INAN) - Level 4
- Emerging technology monitoring (EMRG) - Level 4
- Relationship management (RLMT) - Level 5
04/11/2021
Full time
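As a rough illustration of the pipeline and cataloguing responsibilities described in the Ofcom listing above, a minimal batch ETL step might look like the sketch below. This is not Ofcom's implementation: it assumes pandas and pyarrow are available, and the file paths, column names and catalogue format are all hypothetical.

    import json
    from datetime import datetime, timezone

    import pandas as pd  # assumes pandas + pyarrow are installed

    RAW_PATH = "raw/complaints.csv"               # hypothetical input
    CURATED_PATH = "curated/complaints.parquet"   # hypothetical output
    CATALOGUE_PATH = "catalogue/complaints.json"  # hypothetical catalogue entry


    def run_etl() -> None:
        # Extract: ingest the raw file.
        df = pd.read_csv(RAW_PATH)

        # Transform: basic cleansing and typing.
        df = df.dropna(subset=["case_id"])
        df["received_date"] = pd.to_datetime(df["received_date"])

        # Load: write to a columnar format for analytics.
        df.to_parquet(CURATED_PATH, index=False)

        # Catalogue: record schema and lineage so the dataset is discoverable.
        entry = {
            "dataset": "complaints",
            "source": RAW_PATH,
            "location": CURATED_PATH,
            "columns": {c: str(t) for c, t in df.dtypes.items()},
            "row_count": int(len(df)),
            "loaded_at": datetime.now(timezone.utc).isoformat(),
        }
        with open(CATALOGUE_PATH, "w") as f:
            json.dump(entry, f, indent=2)


    if __name__ == "__main__":
        run_etl()

In practice the catalogue entry would land in a proper metadata service rather than a JSON file, but the shape of the work - ingest, cleanse, persist, catalogue - is the same.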
About Us

We're an innovative tech consultancy - a team of problem solvers. Since 1993 we've been finding better ways to solve complex technology problems for some of the world's leading organisations, and we've delivered solutions that millions of people use every day. We bring together experts from diverse backgrounds and experiences in a collaborative and open culture to deliver outstanding outcomes for our clients, and a stimulating and rewarding environment for our people. We are DataOps advocates and use software engineering best practices to build scalable and reusable data solutions that help clients use their data to gain insights, drive decisions and deliver business value. Clients don't engage BJSS to do the straightforward things; they ask us to help with their biggest challenges, which means we get to work with a wide range of tools and technologies, and there are always new things to learn.

About the Role

BJSS data engineers are specialist software engineers who build, optimise and maintain data applications, systems and services. This role combines the discipline of software engineering with the knowledge and experience of building data solutions in order to deliver business value. As a BJSS data engineer you'll help our clients deploy data pipelines and processes in a production-safe manner, using the latest technologies and with a DataOps culture. You'll work in a fast-moving, agile environment, within multi-disciplinary teams of highly skilled consultants, delivering modern data platforms into large organisations. You can expect to get involved in a variety of projects in the cloud (AWS, Azure, GCP), learning about and using data services such as Databricks, Data Factory, Synapse, Kafka, Redshift, Glue, Athena, BigQuery, S3 and Cloud Data Fusion.

About You

You're an engineer at heart and enjoy the challenge of building reliable, efficient data applications, systems, services and platforms. You have a good understanding of coding best practices and design patterns, and experience with code and data versioning, dependency management, code quality and optimisation, error handling, logging, monitoring, validation and alerting. You have experience in writing well-tested, object-oriented Python. You have experience of using CI/CD tooling to analyse, build, test and deploy your code. You have a good understanding of design choices for data storage and data processing, with a particular focus on cloud data services. You have experience of using parallel computing to process large datasets and to optimise computationally intensive tasks. You have experience of programmatically deploying, scheduling and monitoring components in a workflow, and of writing complex queries against relational and non-relational data stores.
Some of the Perks

- A collaborative and inspiring environment working alongside some of the best tech people in the industry
- Hybrid working - you can vary your working location to allow you to collaborate better, feed your creativity, and take the time and space to focus when you need it
- Training opportunities and incentives - we support professional certifications across engineering and non-engineering roles
- Flexible benefits allowance - you can spend it on additional pension contributions, healthcare, dental and more
- Wellbeing support for our employees through our partnership with Lifeworks
- Life Assurance (4 x annual salary)
- Giving back - the ability to get involved nationally and regionally with partnerships to get people from different backgrounds into tech
- 25 days annual leave plus bank holidays
- Discounts - we have preferred rates from dozens of retail, lifestyle and utility brands
- An industry-leading referral scheme

BJSS is committed to equal opportunities and diversity, so we want to ensure that our recruitment and selection processes are fair to all who wish to apply.
04/11/2021
Full time
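The listing above asks for well-tested, object-oriented Python that runs through CI/CD. Purely as an illustrative sketch (the class, column names and test data are hypothetical, not BJSS code), a pipeline step written that way, with a pytest-style unit test alongside it, could look like this:

    from dataclasses import dataclass

    import pandas as pd  # assumed available


    @dataclass
    class DeduplicateStep:
        """A small, testable pipeline step: drop duplicate records by key."""
        key: str

        def apply(self, df: pd.DataFrame) -> pd.DataFrame:
            return df.drop_duplicates(subset=[self.key]).reset_index(drop=True)


    def test_deduplicate_step_keeps_first_occurrence():
        # pytest-style unit test; a CI pipeline would run this with `pytest`.
        step = DeduplicateStep(key="id")
        df = pd.DataFrame({"id": [1, 1, 2], "value": ["a", "b", "c"]})
        out = step.apply(df)
        assert list(out["id"]) == [1, 2]
        assert list(out["value"]) == ["a", "c"]

Keeping each transformation in a small object with a single apply method is one common way to make pipelines unit-testable before they are wired into an orchestrator.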
Are you a skilled data engineer who has helped enterprises deploy production-ready data platforms? Are you keen to implement cutting-edge cloud data services, with a focus on how consumers use the platform? Are you interested in building on your existing data and cloud experience?

About Us

We're an innovative tech consultancy - a team of problem solvers. Since 1993 we've been finding better ways to solve complex technology problems for some of the world's leading organisations, and we've delivered solutions that millions of people use every day. We bring together experts from diverse backgrounds and experiences in a collaborative and open culture to deliver outstanding outcomes for our clients, and a stimulating and rewarding environment for our people. We're looking for data specialists with experience in data development, ETL, data warehousing, and dealing with large sets of structured, semi-structured and unstructured data.

About the Role

As a BJSS data engineer you'll help our clients deploy data pipelines and processes in a production-safe manner, using the latest technologies and with a DataOps culture. You'll work in a fast-moving, agile environment, within multi-disciplinary teams, delivering modern data platforms into large organisations. You'll get to work with some of the brightest and best in the industry on some of the most exciting digital programmes around.

About You

You'll have the expertise and confidence to work collaboratively with engineers, architects and business analysts in multi-disciplinary teams on client site, and have experience in several of these areas:

- Python
- AWS, Azure or GCP data services (e.g. Data Factory, Synapse, Redshift, Glue, Athena, BigQuery, Cloud Data Fusion etc.)
- At least one distributed NoSQL database (e.g. HBase, Cassandra)
- Stream processing technologies such as Kafka, Kinesis etc.
- Hadoop ecosystem exposure

Some of the Perks

- A collaborative and inspiring environment working alongside some of the best tech people in the industry
- Hybrid working - you can vary your working location to allow you to collaborate better, feed your creativity, and take the time and space to focus when you need it
- Training opportunities and incentives - we support professional certifications across engineering and non-engineering roles
- Flexible benefits allowance - you can spend it on additional pension contributions, healthcare, dental and more
- Wellbeing support for our employees through our partnership with Lifeworks
- Life Assurance (4 x annual salary)
- Giving back - the ability to get involved nationally and regionally with partnerships to get people from different backgrounds into tech
- 25 days annual leave plus bank holidays
- Discounts - we have preferred rates from dozens of retail, lifestyle and utility brands
- An industry-leading referral scheme

BJSS is committed to equal opportunities and diversity, so we want to ensure that our recruitment and selection processes are fair to all who wish to apply.
04/11/2021
Full time
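To make the distributed NoSQL bullet in the listing above concrete, here is a minimal sketch using the DataStax cassandra-driver package for Cassandra. The contact point, keyspace, table and data are hypothetical; this is an illustration of the kind of work, not anything BJSS prescribes.

    from cassandra.cluster import Cluster  # pip install cassandra-driver

    # Hypothetical contact point and keyspace.
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("demo_keyspace")

    # Parameterised statements keep values out of the query string.
    session.execute(
        "INSERT INTO users (user_id, name) VALUES (%s, %s)",
        ("u-123", "Alice"),
    )

    # Rows come back as named tuples, so columns are attribute access.
    for row in session.execute("SELECT user_id, name FROM users LIMIT 10"):
        print(row.user_id, row.name)

    cluster.shutdown()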
Are you a skilled Python Engineer with a passion for data-driven solutions? Are you keen to implement cutting-edge cloud data services, with a focus on how consumers use the platform? Are you interested in building on your existing data and cloud experience?

About Us

We're an innovative tech consultancy - a team of problem solvers. Since 1993 we've been finding better ways to solve complex technology problems for some of the world's leading organisations, and we've delivered solutions that millions of people use every day. We bring together experts from diverse backgrounds and experiences in a collaborative and open culture to deliver outstanding outcomes for our clients, and a stimulating and rewarding environment for our people. We're looking for data specialists with experience in data development, ETL, data warehousing, and dealing with large sets of structured, semi-structured and unstructured data.

About the Role

As a BJSS Data Engineer you'll help our clients deploy data pipelines and processes in a production-safe manner, using the latest technologies and driven by a DataOps culture. You'll work in a fast-moving, agile environment, within multi-disciplinary teams, delivering modern data platforms into some of the UK's most significant organisations. You'll get to work with, and learn from, some of the brightest and best in the industry on some of the most exciting digital programmes around.

About You

You'll have the expertise and confidence to work collaboratively with engineers, architects, and business analysts in multi-disciplinary teams on client site. Experience in all of these areas is not necessary - we just need great Python engineers - and you will have the opportunity to learn in the following:

- AWS, Azure and/or GCP data services (e.g. Data Factory, Synapse, Redshift, Glue, Athena, BigQuery, Cloud Data Fusion etc.)
- At least one distributed NoSQL database (e.g. HBase, Cassandra)
- Stream processing technologies such as Kafka, Kinesis etc.
- Hadoop ecosystem exposure

Some of the Perks

- A collaborative and inspiring environment working alongside some of the best tech people in the industry
- Hybrid working - you can vary your working location to allow you to collaborate better, feed your creativity, and take the time and space to focus when you need it
- Training opportunities and incentives - we support professional certifications across engineering and non-engineering roles
- Flexible benefits allowance - you can spend it on additional pension contributions, healthcare, dental and more
- Wellbeing support for our employees through our partnership with Lifeworks
- Life Assurance (4 x annual salary)
- Giving back - the ability to get involved nationally and regionally with partnerships to get people from different backgrounds into tech
- 25 days annual leave plus bank holidays
- Discounts - we have preferred rates from dozens of retail, lifestyle and utility brands
- An industry-leading referral scheme

BJSS is committed to equal opportunities and diversity, so we want to ensure that our recruitment and selection processes are fair to all who wish to apply.
04/11/2021
Full time
Who we are:

Nutmeg is Europe's leading digital wealth manager, but we don't want to stop there. We're continuing to build our platform to help us achieve our mission of being the most trusted digital wealth manager in the world. Since being founded in 2011 we've:

- Grown to 160+ employees
- Raised over £100M in funding
- Launched 4 amazing products, including a JISA and a Lifetime ISA
- Won multiple awards, including Best Online Stocks & Shares ISA Provider for the fifth year in a row!

We hit the 130,000 investor milestone in early 2021 and now manage over £3 billion in assets under management (AUM). *We offer flexible working*

Job in a nutshell:

We run a pure AWS-based cloud environment and deliver features using a continuous delivery approach. Our data platform comprises a mix of services and open-source products running fully in Kubernetes and utilising AWS-native data solutions. Nutmeg's data solution is a mix of batch and streaming processes leveraging Airflow, Apache Kafka and AWS data tools. Our key characteristic is enabling a self-service experience for all data stakeholders. Nutmeg products are served by a polyglot mix of microservices designed following Domain-Driven Design principles and composing an event-driven architecture powered by Apache Kafka. As a Senior Data Engineer, you will collaborate closely with technical and non-technical teams to deliver data solutions supporting Nutmeg's data strategy. We are looking for someone with previous experience as a senior engineer and a strong passion for data challenges.

Requirements

Your skills:

- Following data engineering industry best practice
- Full ownership of end-to-end data pipelines
- Designing, implementing and maintaining data models
- Writing automated tests around data models
- Understanding of CI/CD principles
- Experience with cloud platforms for data (ideally AWS)
- Experience in converting business requirements into technical deliverables
- Previous experience with two or more of the following: Airflow, dbt, Kafka Connect, Looker, Python and Redshift

You might also have:

- DataOps best practice
- Experience in collaborating with BI and Data Science teams
- Use of agile/lean methodologies for continuous delivery and improvement
- Knowledge of monitoring, metrics or Site Reliability Engineering
- Understanding of data governance and security standards

Benefits

- 25 days' holiday
- Birthday day off
- 2 days' paid community leave
- Competitive salary
- Private healthcare with Vitality from day 1
- Access to a digital GP and other healthcare resources
- Season ticket and bike loans
- Access to a wellbeing platform & regular knowledge sharing
- Regular homeworking perks and rewards
- Cycle storage and showers onsite
- Discounted Nutmeg account for you and your family and friends
- Part of an inclusive Nutmeg team
15/09/2021
Full time
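The Nutmeg listing above mentions batch processes orchestrated with Airflow. For illustration only, a minimal Airflow 2.x DAG with two dependent tasks could look like the sketch below; the DAG id, schedule and task bodies are hypothetical placeholders, not Nutmeg's actual pipelines.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract():
        # Placeholder: pull data from a source system.
        print("extracting")


    def load():
        # Placeholder: load transformed data into the warehouse.
        print("loading")


    with DAG(
        dag_id="example_daily_pipeline",  # hypothetical name
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> load_task  # run extract before load

The >> operator declares the dependency, so the scheduler will only run load once extract has succeeded for a given run date.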
Lead Data Engineer | Remote working | Gloucester | £65,000 - £80,000

Jonothan Bosworth Recruitment Specialists are currently seeking a Lead Data Engineer to join a well-established company at the forefront of a new growth plan and underway with an ambitious programme of work. You will join a new data team as part of the emerging data strategy. This opportunity is for an experienced Data Engineer looking to progress their career into a lead role, innovating with the latest technologies to design solutions and lead technical teams in building internal as well as client-facing solutions using Databricks, the Azure stack and Power BI. As Lead Data Engineer you will help build high-performance data platforms from the ground up, and establish and manage the Data Engineering team along the way, ensuring they develop, maintain and optimise data pipelines using best practice within a DataOps methodology.

THE BASICS:

You will design and implement numerous complex data flows connecting operational systems, data for analytics, and business intelligence (BI) systems. Specifically:

- Design and implement data storage and processing solutions
- Data security and compliance
- Monitor and optimise data solutions
- Build Data Engineering capacity through technical support and personal development of Data Engineers
- Inspire best practice for data products and services, and work with senior team members to identify, plan, develop and deliver data services

KEY SKILLS:

- Experience leading a team, along with experience of cloud architecture and distributed systems
- Experience on Big Data projects, using big data frameworks to create data pipelines with the latest stream processing systems (e.g. Kafka, Storm, Spark Streaming)
- Advanced programming/scripting (Java, Python, R etc.)
- Data strategy, architectures and governance; data management and security
- Data integrations using Azure Data Factory, Databricks and APIs
- Data repositories in SQL Server and Analysis Services
- Data modelling, SQL, Azure data warehouse and reporting solutions
- Able to work well under pressure; flexible, positive and focused during times of change
- Travel to Gloucester twice a week

For more information, please contact Claire at Jonothan Bosworth Recruitment Specialists. NC_20_LDE_CE

We are an equal opportunities employer, committed to diversity and inclusion. We are active anti-slavery advocates and prohibit discrimination and harassment of any kind based on race, colour, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other protected characteristic.
14/09/2021
Full time
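Since the role above calls for stream processing with systems such as Kafka and Spark Streaming, here is a minimal, hedged sketch of a PySpark Structured Streaming job that reads a Kafka topic and writes Parquet. The broker address, topic name, schema and paths are all hypothetical, and the job additionally needs the spark-sql-kafka connector package on the classpath.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructType

    spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

    # Hypothetical event schema for messages on the topic.
    schema = StructType().add("event_id", StringType()).add("amount", DoubleType())

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                     # hypothetical topic
        .load()
    )

    # Kafka values arrive as bytes; cast to string and parse the JSON payload.
    events = (
        raw.select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    query = (
        events.writeStream.format("parquet")
        .option("path", "/data/events")             # hypothetical output path
        .option("checkpointLocation", "/data/chk")  # required for fault tolerance
        .start()
    )

    query.awaitTermination()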
Role Profile - Data Engineer
Role Title: AWS Data Engineer
Business Function: DataOps
Location: Wrexham / London

Who we are & what we do

Founded in 2016, Chetwood Financial is a digital bank using technology to make people better off. At Chetwood, we think differently...
19/03/2021
Full time
Jobs - Frequently Asked Questions
Q: How can I find IT jobs in a specific location?
A: Use the location filter to find IT jobs in cities like London, Manchester, Birmingham, and across the UK.

Q: What entry-level IT roles are available?
A: Entry-level roles include IT support technician, junior developer, QA tester, and helpdesk analyst.

Q: How often are new jobs posted?
A: New jobs are posted daily. Set up alerts to be notified as soon as new roles match your preferences.

Q: What skills do employers look for in IT candidates?
A: Key skills include problem-solving, coding, cloud computing, networking, and familiarity with tools like AWS or SQL.

Q: Can I get an IT job without professional experience?
A: Yes, many employers offer training or junior roles. Focus on building a strong CV with relevant coursework or personal projects.