Responsibilities:
1. Conduct pre-training of AI models on large, distributed clusters equipped with thousands of NVIDIA GPUs.
2. Design, prototype, and scale innovative architectures to enhance model intelligence.
3. Independently and collaboratively execute experiments, analyze results, and refine methodologies for optimal performance.
4. Investigate, debug, and improve both model efficiency and computational performance.
5. Contribute to the advancement of training systems to ensure seamless scalability and efficiency on target platforms.
Requirements:
1. A degree in Computer Science or a related field; ideally a PhD in NLP, Machine Learning, or a related field, complemented by a solid track record in AI R&D (with strong publications at A* conferences).
2. Hands-on experience contributing to large-scale LLM training runs on distributed clusters with thousands of NVIDIA GPUs, ensuring scalability and impactful advancements in model performance.
3. Familiarity and practical experience with large-scale, distributed training frameworks, libraries, and tools.
4. Deep knowledge of state-of-the-art transformer modifications and non-transformer architectures aimed at enhancing intelligence, efficiency, and scalability.
5. Strong expertise in PyTorch and Hugging Face libraries, with practical experience in model development, continual pretraining, and deployment.
Job ID: t5gevkAwXymJ