Overview
Research Scientist / Research Engineer, Pre-training — London, UK
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
We are seeking a Research Engineer to join our Pre-training team, which develops the next generation of large language models. In this role, you will work at the intersection of cutting-edge research and practical engineering, contributing to the development of safe, steerable, and trustworthy AI systems.
Responsibilities
* Conduct research and implement solutions in areas such as model architecture, algorithms, data processing, and optimizer development
* Independently lead small research projects while collaborating with team members on larger initiatives
* Design, run, and analyze scientific experiments to advance our understanding of large language models
* Optimize and scale our training infrastructure to improve efficiency and reliability
* Develop and improve dev tooling to enhance team productivity
* Contribute to the entire stack, from low-level optimizations to high-level model design
Qualifications
* Advanced degree (MS or PhD) in Computer Science, Machine Learning, or a related field
* Strong software engineering skills with a proven track record of building complex systems
* Expertise in Python and experience with deep learning frameworks (PyTorch preferred)
* Familiarity with large-scale machine learning, particularly in the context of language models
* Ability to balance research goals with practical engineering constraints
* Strong problem-solving skills and a results-oriented mindset
* Excellent communication skills and ability to work in a collaborative environment
* Concern for the societal impacts of your work
Preferred Experience
* Work on high-performance, large-scale ML systems
* Familiarity with GPUs, Kubernetes, and OS internals
* Experience with language modeling using transformer architectures
* Knowledge of reinforcement learning techniques
* Background in large-scale ETL processes
What you'll bring
* Significant software engineering experience
* Results-oriented with a bias towards flexibility and impact
* Willingness to take on tasks outside your job description to support the team
* Enjoyment of pair programming and collaborative work
* Desire to learn more about machine learning research
* Eagerness to work at an organization that functions as a single, cohesive team pursuing large-scale AI research projects
* Interest in aligning state-of-the-art models with human values and preferences, understanding and interpreting deep neural networks, or developing new models to support these areas
* A view of research and engineering as two sides of the same coin, and a drive to understand all aspects of our research program to maximize impact
* Ambitious goals for AI safety and general progress, with a mindset of creating the best outcomes over the long term
Sample Projects
* Optimizing the throughput of novel attention mechanisms
* Comparing compute efficiency of different Transformer variants
* Scaling distributed training jobs to thousands of GPUs
* Designing fault tolerance strategies for our training infrastructure
* Creating interactive visualizations of model internals, such as attention patterns
Teams & Projects
* Pre-training — The Pre-training team trains large language models that are used by our product, alignment, and interpretability teams. Projects include determining optimal datasets, architectures, and hyperparameters, and scaling and managing large training runs on our cluster.
* AI Alignment Research — The Alignment team works to train more aligned (helpful, honest, and harmless) models, and does science to understand how alignment techniques work and to extrapolate them to address new failure modes.
* Reinforcement Learning — Reinforcement Learning is used by a variety of different teams, both for alignment and to teach models to be more capable at specific tasks.
* Platform — The Platform team builds shared infrastructure used by Anthropic's research and product teams. Areas of ownership include: the inference service that generates predictions from language models; extensive continuous integration and testing infrastructure; several very large supercomputing clusters and the associated tooling.
* Interpretability — The Interpretability team investigates what's going on inside large language models — their goal is to ensure that AI systems are safe by being able to assess whether they're doing what we actually want, all the way down to the individual neurons.
* Societal Impacts — Our Societal Impacts team designs and executes experiments that evaluate the capabilities and harms of the technologies we build. They also support the policy team with empirical evidence.
* Product — The Product research team trains, evaluates, and improves upon Claude, integrating all of our research techniques to make our AI systems as safe and helpful as possible.
Diversity & Inclusion
At Anthropic, we are committed to fostering a diverse and inclusive workplace. We strongly encourage applications from candidates of all backgrounds, including those from underrepresented groups in tech.
If you're excited about pushing the boundaries of AI while prioritizing safety and ethics, we want to hear from you!
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. We value impact — advancing our long-term goals of steerable, trustworthy AI — over work on smaller, more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.