
Anthropic

Research Engineer / Scientist, Alignment Science

Job Summary

This role involves building and running machine learning experiments focused on AI safety and alignment, particularly with large language models. Candidates should have experience in software, ML, or research engineering, with some familiarity with AI safety research and reinforcement learning. The position emphasizes collaborative, fast-moving projects that aim to develop techniques ensuring AI systems remain helpful and safe. Responsibilities include designing experiments, building tooling, analyzing results, and contributing to research publications and safety policies.

Required Skills

Machine Learning
Model Evaluation
Reinforcement Learning
Collaboration
Software Development
NLP
Kubernetes
Prompt Engineering
Multi-agent Systems
AI Safety
Empirical Research
Research Engineering
Research Paper Writing

Benefits

Health Insurance
Competitive Compensation
Parental Leave
Flexible Working Hours
Equity Donation Matching
Office Space
Generous Vacation

Job Description

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role:

You want to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways that this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer. As a Research Engineer on Alignment Science, you'll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy), often in collaboration with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team.
Our blog provides an overview of topics that the Alignment Science team is currently exploring or has previously explored. Our current topics of focus include:
  • Scalable Oversight: Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
  • AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
  • Alignment Stress-testing: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
  • Automated Alignment Research: Building and aligning a system that can speed up & improve alignment research.
Note: Currently, the team has a preference for candidates who are able to be based in the Bay Area. However, we remain open to any candidate who can travel 25% to the Bay Area.

Representative projects:

  • Test the robustness of our safety techniques by training language models to subvert them, and measure how effective those subversion attempts are against our interventions.
  • Run multi-agent reinforcement learning experiments to test techniques like AI Debate.
  • Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.
  • Write scripts and prompts to efficiently produce evaluation questions to test models’ reasoning abilities in safety-relevant contexts.
  • Contribute ideas, figures, and writing to research papers, blog posts, and talks.
  • Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.

You may be a good fit if you:

  • Have significant software, ML, or research engineering experience
  • Have some experience contributing to empirical AI research projects
  • Have some familiarity with technical AI safety research
  • Prefer fast-moving collaborative projects to extensive solo efforts
  • Pick up slack, even if it goes outside your job description
  • Care about the impacts of AI

Strong candidates may also:

  • Have experience authoring research papers in machine learning, NLP, or AI safety
  • Have experience with LLMs
  • Have experience with reinforcement learning
  • Have experience with Kubernetes clusters and complex shared codebases

Candidates need not have:

  • 100% of the skills needed to perform the job
  • Formal certifications or education credentials

The expected salary range for this position is:

Annual Salary: $280,000 - $690,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Application deadline: Open until filled

Date Posted: May 9th, 2025
Job Type: Full Time
Location: Remote-Friendly (Travel-Required) | San Francisco, CA | Seattle, WA | New York City, NY
