Job Description
AI Lead – Production AI & Data Science
Full‑time | Technology Consultancy
Role Summary
This is a senior leadership role responsible for the design, build, and delivery of production‑grade data‑science and agentic AI systems for complex, data‑rich organisations—primarily within specialty insurance.
You will lead the development of scalable, measurable AI capabilities that materially improve underwriting performance, operational efficiency, and risk decision‑making. The role blends deep hands‑on technical expertise with strategic advisory responsibilities across modelling, optimisation, AI engineering, and platform architecture.
Key Responsibilities
1. AI Engineering & Data Science Leadership
* Architect and deliver end‑to‑end AI systems, spanning forecasting, classification, anomaly detection, optimisation, and agent‑based workflow orchestration.
* Translate business objectives across underwriting, exposure, claims, and delegated authority into deployable data‑science solutions with clear performance KPIs.
* Design and deliver models that drive:
  * Loss ratio improvement
  * Pricing segmentation and adequacy
  * Claims leakage reduction
  * Operating expense optimisation
  * Delegated authority automation
* Build reusable, enterprise‑ready AI assets including feature pipelines, optimisation engines, evaluation frameworks, and simulators.
2. MLOps, Quality & Productionisation
* Design and implement robust CI/CD and MLOps pipelines using platforms such as Databricks, MLflow, Unity Catalog, Snowflake, and Azure ML.
* Establish model observability across performance, drift, cost controls, and operational reliability.
* Apply AIOps practices to enable automated retraining, rollback, and compliance‑ready monitoring in regulated environments.
3. AI & Data Strategy
* Evaluate organisational AI maturity, data readiness, platform capability gaps, and engineering risks.
* Define capability maps, target architectures, operating models, and 12–24‑month AI roadmaps.
* Advise senior leaders on where AI can deliver measurable commercial impact, particularly in underwriting and claims functions.
Candidate Profile
Technical Expertise
* 8–15+ years’ experience delivering production machine‑learning or advanced analytics solutions.
* Strong background in statistical and ML modelling including:
  * Regression and classification
  * Time‑series forecasting
  * Gradient boosting and clustering
  * Anomaly detection and optimisation
* Solid engineering foundation: Python, distributed systems, APIs, and cloud‑native design.
* Experience designing agentic AI workflows (tool use, retrieval, evaluation, automated orchestration).
* Proven ability to build data pipelines, feature stores, and monitoring frameworks suitable for enterprise deployment.
Leadership & Communication
* Demonstrated leadership of cross‑functional teams spanning engineering, data science, and product.
* Comfortable engaging senior stakeholders, including CIOs, Heads of Data, and Underwriting and Claims leaders.
* Strong track record of moving from prototype to commercially impactful production systems.
Insurance Domain Experience (Advantageous)
* Familiarity with specialty insurance processes including underwriting, pricing, exposure management, actuarial inputs, claims triage, and bordereaux.
* Ability to design AI solutions aligned to insurance performance and profitability metrics.
Technical Environment
Model Lifecycle & Governance
* MLflow, Databricks Model Serving, Unity Catalog
* Feature stores (Databricks Feature Store, Feast)
* Model monitoring and governance platforms
Data Engineering
* Databricks Lakehouse, Spark
* Kafka / Confluent
* Delta Lake
* Airflow / Databricks Workflows
Cloud Platforms
* Azure, AWS, or GCP (experience with at least one)
Data Science & Optimisation
* Python ecosystem (Pandas, NumPy, SciPy, Statsmodels)
* Scikit‑learn, XGBoost, LightGBM, CatBoost
* Time‑series forecasting (Prophet, SARIMAX, ML‑based approaches)
* Optimisation frameworks (OR‑Tools, Pyomo)
Agentic AI & LLMs (Nice to Have)
* LangChain or Semantic Kernel
* Vector databases (Pinecone, Weaviate, Milvus, Azure AI Search)
* Evaluation frameworks (LangSmith, DeepEval)
* LLM integration (OpenAI / Azure OpenAI)
Software Engineering & Platform
* Docker, Kubernetes
* REST / gRPC services
* CI/CD (GitHub Actions, Azure DevOps, GitLab)
* Infrastructure as Code (Terraform)
* Observability tooling (Prometheus, Grafana, ELK)