Senior Data Scientist/Researcher

Newcastle Upon Tyne (Tyne and Wear)
Monq
Data scientist
Posted: 24 February
Offer description

The Problem

Every major enterprise procurement deal - a mining company locking in steel supply, a manufacturer negotiating energy contracts, a retailer securing food commodities - lives or dies on one question: what will this cost six months from now?


Today, that question is answered with spreadsheets, gut instinct, and analyst reports written days after markets have already moved. Billions of dollars in value are left on the table - or surrendered across the negotiating table - because procurement teams are flying blind on price.

At Monq, we're building AI agents that negotiate high-value enterprise contracts - and we're expanding what that platform can do. The next frontier is price intelligence: giving procurement teams the foresight to know what a deal should cost before they even sit down to negotiate. That's what you'll build.


What You'll Build

This is genuinely 0-to-1. There is no existing model, no data pipeline, no baseline to iterate on. You're starting from a blank sheet - which means you'll need to be comfortable with ambiguity, scrappy about data sourcing, and confident making architectural decisions without a committee to approve them. In return, everything you build will matter immediately, and you'll own it completely.

Concretely, you will:


* Build multivariate commodity price prediction models from scratch. You'll work across energy, metals, agricultural inputs, and industrial materials — constructing models that capture the full complexity of cross-commodity dependencies, supply chain dynamics, macroeconomic signals, and geopolitical risk. This isn't univariate time series. This is structured, high-dimensional, real-world forecasting at a level that actually moves enterprise decisions.
* Own the full modelling lifecycle. Feature engineering, model selection, validation strategy, uncertainty quantification, production deployment. You'll make the calls and live with the results - which means you'll learn fast.
* Design forecasting architectures that go beyond the obvious. We're not looking for someone who fits an ARIMA model and calls it done. We want someone who knows when to reach for Gaussian processes, gradient-boosted ensembles, neural state-space models, or hybrid symbolic-statistical approaches - and, critically, knows why.
* Integrate alternative data sources. Satellite imagery, shipping data, weather signals, procurement index feeds, news sentiment — the edge is often in the signal no one else has thought to use. You'll identify and incorporate these into production-grade pipelines.
* Shape how predictions become decisions. The end goal is for your models to inform what Monq recommends inside live procurement negotiations. Getting there requires working closely with product and engineering - translating probabilistic outputs into something a procurement professional can act on in the moment, not just admire in a dashboard (see the illustrative sketch after this list).
* Bridge research and engineering to ship production-grade systems. You won't be throwing models over the fence. You'll work in close collaboration with our engineering team to take research from notebook to production - defining clean interfaces, writing model-serving code that engineers can build on, and making sure what you've validated in a research context actually holds up in a live enterprise environment. The expectation is that your work ships, not just publishes.
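
As a purely illustrative sketch of that "predictions become decisions" step - not Monq's actual decision logic, which this posting doesn't specify - here is one way three forecast quantiles could map onto anchors a negotiator can act on. The PriceGuidance fields and all the numbers are invented for the example.

from dataclasses import dataclass

@dataclass
class PriceGuidance:
    target: float     # median forecast: roughly what the deal "should" cost
    stretch: float    # optimistic bound worth pushing toward
    walk_away: float  # pessimistic bound: above this, delay or hedge instead

def to_guidance(q10: float, q50: float, q90: float) -> PriceGuidance:
    # Map forecast quantiles directly onto negotiation anchors.
    return PriceGuidance(target=q50, stretch=q10, walk_away=q90)

guidance = to_guidance(q10=612.0, q50=655.0, q90=718.0)  # invented figures, e.g. USD/tonne
print(f"Aim for ~{guidance.target:.0f}, push toward {guidance.stretch:.0f}, "
      f"reconsider above {guidance.walk_away:.0f}")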


You Might Be a Fit If

* You have 6+ years of experience in applied data science or quantitative research, with a strong track record in forecasting or time series modelling in production environments (an ongoing PhD or a record of published research is a plus)
* You've worked on commodity, energy, or financial market price prediction - you understand basis risk, seasonality, mean-reversion, and regime shifts intuitively
* You're fluent in multivariate modelling: VAR/VECM, Bayesian hierarchical models, factor models, LSTM/transformer-based temporal architectures - and you can speak clearly to the tradeoffs between them
* You're rigorous about uncertainty. You know the difference between epistemic and aleatoric uncertainty, and you build that distinction into how you communicate predictions to stakeholders (a toy example follows this list)
* You're comfortable working with messy, heterogeneous, real-world data — incomplete time series, mixed frequencies, structural breaks, and sources that require significant wrangling before they're useful
* You can write production-quality Python and know how to deploy models in a way that engineers can actually build on
* You care about impact, not just accuracy metrics. A model that moves a negotiation outcome is worth more than one that wins a Kaggle leaderboard
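
To make the epistemic/aleatoric distinction above concrete, here is a toy decomposition using the standard ensemble identity (law of total variance): disagreement between models is epistemic uncertainty and shrinks with more data; the noise each model estimates is aleatoric and does not. All numbers here are invented.

import numpy as np

# Hypothetical per-model predictions for a single forecast point:
means = np.array([650.0, 662.0, 641.0, 655.0])   # each ensemble member's point forecast
variances = np.array([80.0, 95.0, 70.0, 88.0])   # each member's estimate of the noise

aleatoric = variances.mean()   # irreducible noise: more data won't remove it
epistemic = means.var()        # model disagreement: shrinks as evidence accumulates
total_std = np.sqrt(aleatoric + epistemic)
print(f"forecast {means.mean():.1f} +/- {total_std:.1f} "
      f"(aleatoric {aleatoric:.1f}, epistemic {epistemic:.1f})")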



Nice to Have

* Experience with causal inference methods applied to market dynamics (synthetic control, difference-in-differences, IV)
* Familiarity with procurement indices (PPI, ISM, commodity spot/futures markets) and how to incorporate forward curve data
* Experience building real-time or near-real-time inference pipelines at scale
* Background in operations research or supply chain optimisation
* Exposure to LLMs as signal sources — extracting structured market intelligence from unstructured text



ML Skills We're Looking For

This role sits at the intersection of classical econometrics and modern machine learning. You don't need to be a world-class expert in every area below - but you should be genuinely strong across most of them and honest about where you want to grow.

* Supervised & Ensemble Methods. Gradient-boosted trees (XGBoost, LightGBM, CatBoost) for tabular forecasting; understanding of when tree-based models outperform neural approaches on structured data, and vice versa. Strong intuition for regularisation, hyperparameter tuning, and avoiding leakage in time series cross-validation.
* Deep Learning for Sequences. Hands-on experience with temporal architectures - LSTMs, GRUs, Temporal Fusion Transformers, N-BEATS, or similar. Understanding of attention mechanisms and when transformer-based sequence models are worth the complexity cost over simpler recurrent approaches.
* Probabilistic & Bayesian Modelling. Comfort with probabilistic forecasting: quantile regression, conformal prediction, Monte Carlo dropout, or full Bayesian inference via PyMC or NumPyro. The ability to communicate uncertainty intervals credibly to non-technical stakeholders is as important as computing them correctly.
* Feature Engineering at Scale. Lag features, rolling statistics, Fourier transforms for seasonality decomposition, target encoding with temporal leakage guards, embeddings for categorical market variables. You understand that the quality of your features usually matters more than the choice of model.
* Model Evaluation & Validation. Walk-forward validation, purged k-fold cross-validation, backtesting under realistic execution constraints. You know why naive train/test splits are dangerous in time series and what to do about it (see the sketch after this list).
* MLOps & Productionisation. Experience taking models from notebook to production: experiment tracking (MLflow, W&B), model versioning, feature stores, drift detection, and retraining triggers. You can build a model that doesn't just work once - it works reliably over time as markets evolve.
* Explainability & Interpretability. SHAP values, partial dependence plots, and the ability to explain model behaviour to procurement professionals who need to trust and act on predictions. Black-box accuracy means nothing if the model can't be interrogated when it's wrong.
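
To ground the validation and probabilistic points above, here is a minimal, self-contained sketch - synthetic random-walk prices, an assumed 21-step forecast horizon, and illustrative hyperparameters, not Monq's actual pipeline - showing leakage-safe lag features, purged walk-forward splits, and quantile forecasts with LightGBM:

import numpy as np
import pandas as pd
import lightgbm as lgb

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 1500))  # synthetic random-walk "commodity" series
df = pd.DataFrame({"price": prices})

HORIZON = 21  # assumed forecast horizon (roughly one trading month)

# Lags and rolling statistics are shifted so each row sees only the past.
for lag in (1, 5, 21):
    df[f"lag_{lag}"] = df["price"].shift(lag)
df["roll_mean_21"] = df["price"].shift(1).rolling(21).mean()
df["roll_std_21"] = df["price"].shift(1).rolling(21).std()
df["target"] = df["price"].shift(-HORIZON)  # HORIZON-step-ahead label
df = df.dropna().reset_index(drop=True)

X = df.drop(columns=["price", "target"])
y = df["target"].to_numpy()

def walk_forward_splits(n, n_folds=5, min_train=500, purge=HORIZON):
    # Expanding-window splits with a purge gap equal to the horizon, so
    # training labels never overlap the test window (cf. purged k-fold CV).
    fold_size = (n - min_train) // n_folds
    for k in range(n_folds):
        end_train = min_train + k * fold_size
        start_test = end_train + purge
        end_test = min(end_train + fold_size, n)
        if start_test < end_test:
            yield np.arange(end_train), np.arange(start_test, end_test)

def pinball_loss(y_true, q_pred, alpha):
    # Standard quantile (pinball) loss for scoring probabilistic forecasts.
    diff = y_true - q_pred
    return np.mean(np.maximum(alpha * diff, (alpha - 1) * diff))

for alpha in (0.1, 0.5, 0.9):  # lower, median, upper forecast quantiles
    scores = []
    for train_idx, test_idx in walk_forward_splits(len(df)):
        model = lgb.LGBMRegressor(objective="quantile", alpha=alpha,
                                  n_estimators=300, learning_rate=0.05, verbose=-1)
        model.fit(X.iloc[train_idx], y[train_idx])
        scores.append(pinball_loss(y[test_idx], model.predict(X.iloc[test_idx]), alpha))
    print(f"q{alpha} mean pinball loss over folds: {np.mean(scores):.3f}")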


The Stack

You'll have significant input into tooling choices: experiment tracking, feature stores, deployment infrastructure. We have strong engineering support and ship on AWS, but we're building the MLOps layer as we go - and you'll help define it. We use AI coding tools - Cursor and Claude Code - as part of the daily workflow.


Why This Is a Rare Opportunity

Most data science roles hand you a well-defined problem with existing pipelines and ask you to improve on a baseline. This one doesn't. There is no baseline. You'll spend real time figuring out what data is available, what's worth acquiring, and what's even feasible to predict - before writing a single model.


That's harder than most job descriptions admit. But it also means your decisions have a direct and lasting impact on the direction of the feature we are developing. The procurement market is a $4.2 trillion opportunity that existing AI solutions have almost entirely ignored. If you can build something that genuinely predicts commodity price movements - even imperfectly, even partially - it changes what Monq can offer enterprise customers and how we compete. This is the kind of problem that a good data scientist can spend years on and still find interesting.


About Monq

Monq is building the first AI platform purpose-built for strategic procurement negotiation. We're early-stage and moving fast — backed by executives from Revolut and HSBC, working with enterprise customers, and actively building the team that will define what this product becomes.

We're a small, flat team. We use AI tools not as a novelty but because they make us better and faster. We value simplicity, ownership, and shipping — and we're looking for people who hold themselves to high standards while staying pragmatic about what matters right now.


Equal Opportunities

Monq is committed to creating a diverse and inclusive workplace and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, gender, gender reassignment, marital or civil partnership status, age, disability, pregnancy or maternity, or any other basis as protected by the Equality Act 2010.


We actively encourage applications from people with diverse backgrounds and experiences.


For accommodations during the recruitment process, please contact .
