GDM Coding - Freelance (20 to 40 hours a week)
Model Quality Assessment: Evaluate the quality of AI model responses involving code, machine learning, and AI, identifying errors, inefficiencies, and non-compliance with established standards.
Code Annotation: Accurately generate, annotate, and label code snippets, algorithms, and technical documentation according to project-specific guidelines.
Comparative Analysis: Compare multiple outputs and rank them based on criteria such as correctness, efficiency, readability, and adherence to programming best practices.
Data Validation: Validate and correct datasets to ensure high-quality data for model training and evaluation.
Collaboration: Work closely with data scientists and engineers to help define new annotation guidelines, resolve ambiguities, and contribute to the overall project strategy.
Strong background in software engineering/development, computer science, ML/AI, or a related technical field, with a keen eye for detail and a passion for data accuracy
~ Programming Proficiency: Python (must-have) and one or more other common programming languages, such as: JavaScript, Rust, Node.js, TypeScript, C, C++, Shell
~ (Bonus points) One or more less common programming languages, such as: Go, Ruby, Swift, PHP, Kotlin
~ Knowledge of web technologies & frameworks
~ Web scraping, API integration
~ HTML/CSS/JavaScript
~ Web application development: frontend (e.g., React) and backend (e.g., Node.js)
~ Machine Learning & Artificial Intelligence
~ Machine Learning (general concepts, model development, experimentation, training, evaluation)
~ Deep Learning (general concepts; frameworks such as TensorFlow, PyTorch, JAX, Keras; neural networks, CNNs, RNNs, LSTMs, transformer architectures)
~ Natural Language Processing (NLP)
~ Reinforcement Learning (e.g., PPO, Q-learning, policy gradients, A2C, DQN, AlphaZero)
~ Computer Vision