Disrupt an industry and change lives
We are dedicated to being a Great Place to Work, where you are empowered to push the boundaries of science and unleash your entrepreneurial spirit. There’s no better place to make a difference to medicine, patients and society. Ours is an inclusive culture that champions diversity and collaboration, always committed to lifelong learning, growth and development.
Working in Technology here means you’ll be a self-starter who is comfortable stepping up and taking ownership, willing to constantly explore and challenge the status quo. You’ll be empowered to orchestrate new possibilities, solve challenges and continuously innovate. Here you’ll join hackathons, work with large data and challenge yourself to push new boundaries.
We impact patients’ lives. By empowering and enabling the business to run faster and better, we play a part in improving lives across the world!
We are looking for an AI/ML Ops Engineer to join our new AI Ops platform team in R&D IT. The ideal candidate will have industry-relevant experience with the Kubeflow machine learning platform, having devised and deployed large-scale production machine learning infrastructure and platforms for scientific use cases (expertise with other machine learning platforms and industries will also be recognised). The position will involve applying these skills to some of the most exciting data and prediction problems in drug discovery.
The successful candidate will be part of a new, collaborative team of multidisciplinary engineers and together have the chance to create tools that will advance the standard of healthcare, improving the lives of millions of patients across the globe. Our data science environments will support major AI initiatives across our therapy areas, such as clinical trial data analysis, knowledge graphs, patient safety systems, deep-learning-led drug discovery and software as a medical device. You will also help provide the frameworks that enable our growing data science community to develop scalable machine learning and predictive models in a safe and robust manner.
As a strong software developer with an interest in building complex systems, you will be responsible for inventing how we use technology, machine learning, and data to enable the productivity of our company. You will help envision, build, deploy and develop our next generation of data engines and tools at scale. You will be bridging the gap between science and engineering and functioning with deep expertise in both worlds.
- Liaise with R&D data scientists to understand their challenges and work with them to help productionise pipelines, models and algorithms for innovative science.
- Be part of a scrum team to build and operationalise our data science environments, platforms and tooling.
- Deploy systems, applications and tooling for data science on cloud environments.
- Understand the necessary compliance guardrails required for different use cases and data sensitivities.
- Adapt standard machine learning methods to best exploit modern compute and storage environments (e.g. distributed clusters, EFS and GPU).
- Provide the necessary infrastructure and platforms to support the deployment and monitoring of ML solutions in production, optimising solutions for performance and scalability.
- Liaise with the Data Engineering team to ensure that the platform, and the solutions deployed on it, benefit from an optimised and scalable data flow between source systems and analytical models.
- Liaise with other teams to enhance our technology stack and enable the adoption of the latest advances in data processing and AI.
- Facilitate and aid in the implementation of data sources and bespoke, repeatable pipelines.
- Work with domain experts within a scientific field to understand and lead data engineering implementations.
Candidate Knowledge, Skills and Experience
- BSc/MSc/PhD degree in Computer Science or a related quantitative field.
- More than 4 years of experience and demonstrable deep technical skills in one or more of the following areas: machine learning, Kubernetes, AWS, recommendation systems, natural language processing or computer vision.
- Experience with containers and microservice architectures, e.g. Kubernetes, Docker and serverless approaches.
- Experience or strong interest in democratising enterprise platforms and services, handling new customer demand and feature requests.
- Demonstrable knowledge of building MLOps environments to a production standard.
- Experience building GxP-compliant life science systems will be looked upon favourably.
- Strong software coding skills, with proficiency in Python; exceptional ability in any language will also be recognised.
- Significant experience orchestrating and scaling cloud environments.
- Experience with open source and cloud native Machine Learning Platforms and Toolkits.
- Experience with best practices for data transport and storage within cloud systems.
- Experience building large-scale data processing pipelines, e.g. Hadoop/Spark and SQL.
- Experience provisioning computational resources in a variety of environments.
- Experience working with tools such as Ansible and Terraform.
- Experience with DevOps automation strategies, e.g. CI/CD, Jenkins and GitOps.
- Creative, collaborative, & product focused.
- Entrepreneurial and enthusiastic approach to solving problems.