Data Science Expert – AI Content Specialist
About The Role
What if your data science knowledge could directly shape the intelligence of tomorrow's AI systems? We're looking for Data Science Experts to stress-test, evaluate, and improve advanced AI models – exposing gaps in their reasoning and helping build more accurate, reliable machine learning capabilities.
This is a fully remote, flexible contract role built for experienced data scientists and quantitative researchers who want meaningful, intellectually stimulating work on their own schedule.
* Organization: Alignerr
* Type: Hourly Contract
* Location: Remote
* Commitment: 10–40 hours/week
What You'll Do
* Design Advanced Challenges: Craft complex, domain-rich data science problems spanning hyperparameter optimization, Bayesian inference, cross-validation strategies, dimensionality reduction, and more — specifically engineered to probe the limits of AI reasoning
* Author Ground-Truth Solutions: Develop rigorous, step-by-step reference solutions including Python/R scripts, SQL queries, and mathematical derivations that serve as the definitive standard for model evaluation
* Audit AI-Generated Code: Review and assess AI outputs across libraries like scikit-learn, PyTorch, and TensorFlow — evaluating technical accuracy, efficiency, and correctness of data visualizations and statistical summaries
* Refine AI Reasoning: Identify subtle logical failures — data leakage, overfitting, improper handling of imbalanced datasets — and provide structured, actionable feedback that directly improves how models think
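To give a concrete flavor of the "subtle logical failures" this role involves catching, here is a minimal, self-contained sketch (plain Python, hypothetical values chosen for illustration) of data leakage: a preprocessing statistic fitted on the full dataset before the train/test split, so information from the test set contaminates the training features.

```python
# Sketch of data leakage via preprocessing fitted before the split.
# The values are illustrative only.

def mean(xs):
    return sum(xs) / len(xs)

data = [1.0, 2.0, 3.0, 100.0]        # the last point is our held-out "test" sample
train, test = data[:3], data[3:]

# Leaky: center features using the mean of ALL data, test point included.
leaky_mu = mean(data)                 # dragged far upward by the test outlier
leaky_train = [x - leaky_mu for x in train]

# Correct: fit the preprocessing statistic on the training split only,
# then apply the same transform to the test split.
mu = mean(train)
clean_train = [x - mu for x in train]
clean_test = [x - mu for x in test]
```

The leaky version looks harmless in code review but silently inflates evaluation metrics; spotting exactly this kind of error in AI-generated pipelines is the core of the work.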
Who You Are
* A Master's or PhD (completed or in progress) in Data Science, Statistics, Computer Science, or a quantitative discipline with a strong data focus
* Deep knowledge in at least one core area: supervised/unsupervised learning, deep learning, NLP, or big data technologies (Spark, Hadoop)
* Able to communicate complex algorithmic and statistical concepts clearly and precisely in writing
* Exceptionally detail-oriented — code syntax, mathematical notation, and statistical validity all matter to you
* No prior AI or annotation experience required
Nice to Have
* Experience with data annotation, quality evaluation, or labeling systems
* Familiarity with production-level data science workflows — MLOps, CI/CD for models, experiment tracking
* Background in academic research or technical writing
Why Join Us
* Work directly with industry-leading AI language models at the frontier of research
* Fully remote and asynchronous — work when and where you perform best
* Freelance flexibility with consistent, substantive task-based work
* Contribute to AI development that has a real and lasting impact on how models reason about data science
* Potential for ongoing work and contract extension as new projects launch