Role Overview
Join our evolving data team at The Printed Group. You'll build and deploy ML models for personalisation, recommendations, anomaly detection, and insights—all while following best practices in ML Ops and leveraging AI-powered tools to boost your productivity.
Our Tech Stack
* Data Pipeline: Data sources → Debezium/Kafka → S3 → Databricks/Lambda → S3 (Delta format) / Embedded → Reporting/Notifications
* Infrastructure: AWS cloud managed via Terraform.
Responsibilities
* Develop & Deploy ML Models: Build models that power personalisation, recommendations, and anomaly detection.
* Implement ML Ops: Set up continuous integration, monitoring, and automated retraining for production models.
* Leverage AI Tools: Use AI-powered coding assistants (e.g., Cursor, Copilot) to enhance development efficiency.
* Collaborate: Work closely with software engineers, DevOps, and the CTO to ensure robust data pipelines and translate requirements into technical solutions.
Requirements
* Minimum 3 years’ experience in applied machine learning and deploying production models.
* Proficiency in Python, SQL, and ML frameworks like TensorFlow, PyTorch, or scikit-learn.
* Hands-on experience with AWS services and Databricks; familiarity with ML Ops principles is a plus.
* Ability to quickly learn new tools and independently deliver scalable, high-quality solutions.
* Experience with data pipelines (Kafka, Debezium, S3, Lambda, Delta Lake) is advantageous.