About The Role
What if your knowledge of how attacks unfold — how adversaries move, where defenses crack, and how risk cascades across modern infrastructure — could directly shape the AI systems that millions of people will rely on for security guidance?
We're looking for offensive security professionals to apply their adversarial mindset to a new kind of challenge: training and evaluating frontier AI models. This role is about structured threat reasoning, not exploit development. If you can think like an attacker and explain it clearly, you're exactly who we need.
This is a fully remote, flexible contract role — work on your own schedule, on your own terms.
* Organization: Alignerr
* Type: Hourly Contract
* Location: Remote
* Commitment: 10–40 hours/week
What You'll Do
* Analyze realistic attack paths, kill chains, and adversary strategies across modern production environments
* Identify weaknesses, misconfigurations, and defensive gaps in system and network scenarios
* Review red‑team‑style intrusion narratives and evaluate their technical accuracy and completeness
* Generate, label, and validate adversarial reasoning data used to train and evaluate AI systems
* Articulate attack chains, blast radius, and risk tradeoffs in clear, structured written form
* Work independently and asynchronously — on your own schedule
Who You Are
* 2+ years of hands‑on experience in pentesting, red teaming, or a blue‑team role with deep attack‑side knowledge
* Solid understanding of how real attacks unfold across endpoints, networks, cloud environments, and identity systems
* Able to map adversary behavior to frameworks like MITRE ATT&CK and explain the why behind attack decisions
* Clear, precise communicator — you can explain complex attack chains to both technical and non‑technical audiences
* Detail‑oriented and consistent when working through structured evaluation tasks
Nice to Have
* Experience with threat modeling, purple teaming, or adversary simulation exercises
* Familiarity with cloud attack surfaces (AWS, Azure, GCP)
* Background in incident response or threat intelligence
* Certifications such as OSCP, GPEN, GWAPT, or equivalent real‑world experience
* Prior exposure to AI tools or data labeling workflows
Why Join Us
* Work directly on frontier AI systems alongside leading AI research labs
* Fully remote and flexible — work when and where it suits you
* Freelance autonomy with the structure of meaningful, task‑based work
* Apply your offensive security expertise in a novel, high‑impact domain
* Potential for ongoing work and contract extension as new projects launch