humanLearning Ltd
03/02/2026
Full time
At Vyntelligence, we're reimagining how work gets done in the deskless world. Our platform enriches expertly curated short videos - captured before, during, and after work - with AI-powered automated workflows. We own the world's largest dataset of this kind, enabling field users and customers to make smarter, more efficient decisions about their assets and operations. Our purpose is to deliver transformative commercial and environmental outcomes, helping global clients across utilities, telecoms, retail and energy improve pricing accuracy, speed up delivery, ensure safety, and maintain compliance.

Our mobile and web apps are used to capture, manipulate and view structured multimedia data. This data is uploaded to our public cloud systems and analysed using machine learning, computer vision, LLMs (including vision-language models) and other technology. Our AI decision-making agents combine with various integrations to support decisions for users, and in other systems, to drive important field workflows. We are continually enhancing the data collection, computer vision and AI capabilities of our platform. Our team includes multiple specialists who are developing and improving our AI capabilities, alongside a wide range of other software engineers who deliver our full offering.

We use GitHub, GitHub Actions, CodePipeline and CDK with a DevOps/GitOps approach to achieve a high release cadence through our CD pipelines. We use Django REST Framework and PostgreSQL to provide our primary REST API. Our web app is built using React, and we also have native Android and iOS apps. Our platform runs primarily on AWS, with a pluggable architecture coordinated using queues: work is distributed to a variety of processing systems and smaller services. We use a combination of commodity analytics APIs and bespoke AI algorithms and models (e.g. AWS Transcribe, Google Speech, AWS Rekognition, Claude, TensorFlow, Bedrock, RT-DETR) to provide advanced multi-step, multi-modal speech, image and video analytics. We use AWS Lambda, Docker, ECS, API Gateway, AWS Step Functions and other tools to implement various video-processing, ML and other AI applications.

As you would expect, our system also provides various collaboration, administration, management and security features around the central video capture and analytics. We offer both shared and dedicated deployments of the software; by defining all of our infrastructure as code, we can easily deploy dedicated copies of our entire system into dedicated VPCs for our large customers. Many of our customers have stringent security requirements around their video data, so we use a variety of modern systems to provide security and monitoring across our networks and applications.

What you'll do:
- Build and maintain production-ready LLM applications using modern orchestration frameworks, with a strong focus on prompt engineering, context design, and guardrail development.
- Develop and optimise inference pipelines to ensure accuracy, reliability, and consistency of outputs that meet enterprise-level standards.
- Implement and run systematic evaluation workflows using both automated tools (e.g. DeepEval, Langfuse) and human-in-the-loop review to validate model performance and safety.
- Support RAG and other retrieval-based features where required, including working with vector databases and document-processing pipelines.
- Collaborate with product and engineering teams to define use cases, set KPIs, and deliver AI features aligned with business goals.
- Mentor technical and non-technical stakeholders on Gen AI capabilities, limitations, and responsible adoption.

What you'll need:
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related technical field.
- 2+ years of hands-on experience building and deploying LLM-based systems (e.g. fine-tuning, RAG, prompt engineering).
- Strong proficiency in Python and hands-on experience with LLM orchestration frameworks (e.g. LangChain, CrewAI, AgnosAI or similar).
- Experience with vector databases (e.g. Pinecone, Weaviate) and retrieval optimisation (e.g. OpenSearch, FAISS, Chroma).
- Experience using LLM evaluation tools and methodologies (e.g. DeepEval, Langfuse or equivalent) to assess model quality, reliability, and performance.
- Strong understanding of responsible AI principles and practical experience in bias detection and model transparency.
- Ability to communicate technical AI concepts clearly to diverse audiences.

Our Environment:
We offer competitive remuneration and benefits, a tax-efficient employee stock ownership plan (ESOP), and private health coverage and related health benefits depending on location. We provide family-friendly flexible working hours - for example, to support school pickups and drop-offs - and home working. We have developed a relaxed, collaborative, supportive, and high-performance culture. We value employee health and well-being, and offer the opportunity to apply and develop your skills productively on a novel product with cutting-edge technology. Our engineering organisation is distributed across multiple locations and time zones, so we use a variety of tools and processes to enable effective distributed working.

Equal Opportunity:
We are an equal opportunity employer committed to creating a workplace where everyone feels empowered to bring their full, authentic selves to work. We celebrate diversity in all forms and make employment decisions based on merit, qualifications, and business needs. We do not discriminate on the basis of race, religion, national origin, ancestry, gender, gender identity or expression, sexual orientation, age, disability, genetic information, veteran status, or any other protected characteristic.
If you need accommodations at any point in the hiring process, we're here to support you.