What you’ll be doing

- Model Development: Design, train, and optimise machine learning models for user personalisation, including recommendation systems, ranking models, user segmentation, and content understanding, with a strong focus on TensorFlow-based development.
- Data Pipeline Engineering: Build and maintain scalable data pipelines to support feature engineering and model training across large structured and unstructured datasets, leveraging cloud-native tooling.
- Production Deployment: Deploy, monitor, and maintain ML models in production environments, including cloud-based model serving on GCP. Ensure high availability, strong performance, and continuous model relevance.
- Experimentation: Lead A/B testing and offline experimentation to evaluate model performance and guide ongoing improvement.
- Cross-Functional Collaboration: Work closely with engineering, product, data, and research teams to ensure ML solutions align with product and business goals.
- Research & Innovation: Stay informed on advances in machine learning, deep learning, and personalisation, and evaluate their integration into existing systems.

What you’ll bring

- End-to-end experience across the ML lifecycle: model development, training, deployment, monitoring, and continuous maintenance.
- Strong proficiency in Python and ML frameworks, with expertise in TensorFlow (and experience with PyTorch).
- Experience with GCP machine learning and data services (e.g., Vertex AI, Dataflow, BigQuery, AI Platform, Pub/Sub).
- Hands-on experience with ML training frameworks such as TFX or Kubeflow Pipelines, and model-serving technologies like TensorFlow Serving, Triton, or TorchServe.
- Background working with large-scale batch and real-time data processing systems.
- Strong understanding of recommender systems, ranking models, and personalisation algorithms.
- Familiarity with Generative AI and its use in production environments.
- Strong communication skills and analytical problem-solving abilities.