Anaplan is a team of innovators focused on optimizing business decision-making through our leading AI-infused scenario planning and analysis platform.
Our customers rank among the who’s who of the Fortune 50. Coca-Cola, LinkedIn, Adobe, LVMH, and Bayer are just a few of the 2,400+ global companies that rely on our best-in-class platform.
Our Winning Culture is the engine that drives our teams of innovators. We champion diversity of thought and ideas, we behave like leaders regardless of title, we are committed to achieving ambitious goals, and we love celebrating our wins – big and small.
Anaplan is looking for a curious, pragmatic, and technically strong Data Scientist to join our growing team in Manchester or York. This is your chance to shape and deliver next-generation data science capabilities within a highly visible, large-scale SaaS cloud company.
We’re a diverse team of thinkers, builders, and innovators. We challenge the status quo and champion bold ideas that create meaningful impact for our customers.
Your Impact
* Build and deploy scalable data science solutions using Databricks, Apache Spark, and the broader Apache ecosystem.
* Apply statistical analysis and machine learning to generate actionable insights from complex datasets.
* Design and run experiments (e.g., A/B testing) to support evidence-based decision making.
* Collaborate with Data Engineers to develop clean, validated data pipelines for analytics and modelling.
* Work with Product Owners, Designers, and Engineers to embed intelligent decision-making into our platform.
* Use Apache Pulsar to stream real-time data and enable low-latency decision systems.
* Clearly communicate results, assumptions, and model limitations to technical and non-technical audiences.
* Maintain a strong focus on reproducibility, testing, documentation, and continuous improvement.
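For illustration, the experimentation work described above might look something like the following minimal two-proportion z-test for an A/B experiment. This is a hedged sketch only, using the Python standard library; the function name and the conversion numbers are hypothetical, not Anaplan's actual tooling.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment.

    conv_a / n_a: conversions and sample size in variant A (control).
    conv_b / n_b: conversions and sample size in variant B (treatment).
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 120/2400 conversions in A vs 165/2400 in B.
z, p = ab_test_z(120, 2400, 165, 2400)
```

In practice a library such as statsmodels or scipy would be used instead of a hand-rolled test, but the statistics are the same.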
Your Skills
* Degree in Computer Science, Mathematics, Statistics, Physics, or a related quantitative field.
* Proficiency in Python for data science and machine learning (e.g., pandas, scikit-learn, PySpark).
* Strong grasp of SQL, data wrangling, and statistical modelling techniques.
* Experience building and deploying models on platforms such as Databricks.
* Familiarity with Apache Spark and distributed computing.
* Understanding of REST APIs, microservice architecture, and model deployment practices.
* Comfort working with CI/CD pipelines, version control (Git/GitHub), and unit testing for data pipelines.
Bonus points
* Experience with Apache Pulsar or other message queues (e.g., Kafka, RabbitMQ).
* Familiarity with MLOps workflows and tools.
* Understanding of cloud infrastructure, particularly AWS or GCP.
* Exposure to monitoring tools (Grafana, Prometheus) and model performance tracking (MLflow).
* Experience applying your skills in a SaaS or enterprise software context.
We believe attracting and retaining the best talent and fostering an inclusive culture strengthen our business. Diversity, equity, inclusion, and belonging (DEIB) improve our workforce, enhance trust with our partners and customers, and drive business success.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, perform essential job functions, and receive equitable benefits and all privileges of employment.