Subscribe
Frontier AI

AI Security Engineer at Anthropic

Anthropic is a frontier AI safety company that builds the Claude family of large language models. Founded by former OpenAI research leaders, the company differentiates on safety research, constitutional AI techniques, and building models designed to be helpful, harmless, and honest. Security and safety are central to Anthropic's mission, not an afterthought. The company has published extensively on AI alignment, model behavior evaluation, and safety benchmarks.

$180,000 to $275,000
Total Compensation
Hybrid (San Francisco)
Work Model
20 to 35 security engineers
Team Size Estimate

AI Security Focus

Security engineers at Anthropic work on model safety evaluation, adversarial robustness testing, platform security for the Claude API and enterprise deployments, and research into novel AI security threats. The team designs and runs red team evaluations of new Claude releases, builds automated safety testing pipelines, secures the API infrastructure serving enterprise customers through direct access and cloud partner integrations (Amazon Bedrock, Google Cloud Vertex AI), and contributes to research on emerging threat vectors like indirect prompt injection and multi-turn manipulation.

Why AI Security Engineers Join Anthropic

The AI Security Opportunity at Anthropic

The AI security landscape is evolving rapidly, and Anthropic sits at a particularly interesting position within it. The AI-in-cybersecurity market reached approximately $30.9 billion in 2025 and continues growing at 22% to 24% annually. Every company deploying AI systems needs security professionals who understand the unique threat surface that ML models create, from adversarial inputs and training data poisoning to model extraction and supply chain attacks.

At Anthropic, the scale of AI deployment creates security challenges that most companies will not encounter for years. The threats you face and the defenses you build here become reference points for the broader industry. Engineers who develop expertise in this environment are positioned for leadership roles as AI security matures from a niche specialty into a standard function within every security organization.

The EU AI Act, with high-risk system requirements taking effect August 2026, adds a compliance dimension that makes this work even more critical. Companies with global operations need security engineers who can translate regulatory requirements into technical controls. Experience doing this at a company like Anthropic is transferable to any organization deploying AI systems in regulated environments.

Technical Requirements

Interview Process

Anthropic's interview process runs four to five weeks and is thorough. It starts with a recruiter screen and hiring manager call, followed by a technical assessment focused on security fundamentals and AI-specific threats, a research discussion where you walk through your thinking on an AI security problem, and a final round with senior leadership. Anthropic values intellectual curiosity and the ability to reason about novel threats. They want engineers who can think from first principles about security problems that do not have established playbooks.

Compensation Details

Base salary for security engineers at Anthropic typically ranges from $150,000 to $200,000, with total compensation reaching $180,000 to $275,000 in cash. Equity grants are meaningful given the company's rapid valuation growth. Benefits include comprehensive health coverage, 401(k), and learning stipends. The smaller team size means strong performers can advance quickly as the organization scales.

Career Development and Growth

AI security is early enough as a discipline that career paths are still being defined. At Anthropic, common growth trajectories include advancing into senior and staff security engineer roles with increasing scope and strategic responsibility. Engineers who demonstrate both technical depth and leadership ability often move into team lead or management positions as AI security organizations scale.

Beyond the engineering ladder, AI security experience at Anthropic opens paths into security architecture (designing AI security frameworks at the organizational level), product security leadership (owning the security posture of AI product lines), and advisory roles that shape how the industry approaches AI threats. The regulatory dimension, particularly the EU AI Act and NIST AI RMF, also creates opportunities for engineers who combine technical expertise with governance knowledge to move into CISO-track positions.

The experience you build here is transferable across the industry. Companies of all sizes are building AI security capabilities, and professionals with hands-on experience at a company operating at this scale are in high demand. Whether you stay long-term or use the experience as a career accelerator, the skills and credibility compound over time. Conference presentations, published research, and open-source contributions from your work here become career assets that follow you regardless of where you go next.

The AI security community is small enough that your reputation matters and large enough that there are meaningful career options. Building that reputation through work at Anthropic gives you visibility with hiring managers, conference organizers, and investors across the AI security ecosystem. The professionals defining this field today will be the directors, VPs, and CISOs leading it in five years. Getting in now, at a company where the problems are real and the impact is measurable, is the best way to position yourself for that trajectory.

Get the AISec Brief

Weekly career intelligence for AI Security Engineers. Salary trends, who's hiring, threat landscape shifts, and certification updates. Free.

Frequently Asked Questions

What does a Security Engineer do at Anthropic?
Security Engineers at Anthropic work on model safety evaluation, adversarial robustness testing, platform security for the Claude API, and research into novel AI security threats. The role blends security engineering with safety research.
How much do Anthropic Security Engineers earn?
Total cash compensation ranges from approximately $180,000 to $275,000. Equity grants add significant upside given the company valuation trajectory.
How does Anthropic security differ from OpenAI security?
Anthropic places safety at the center of its mission, so security engineers are deeply integrated with research teams. The team is smaller, which means more ownership but also more breadth of responsibility. Constitutional AI and alignment research create unique security challenges.
What is the best background for an Anthropic security role?
Anthropic values candidates who combine security engineering experience with genuine curiosity about AI safety. Familiarity with alignment research, model evaluation, and the ability to reason about novel threats from first principles matters more than checking specific credential boxes.
Is Anthropic hiring security engineers in 2026?
Yes. As Claude adoption grows through direct API access and cloud partner integrations, the security team is expanding. Enterprise customers increasingly require security assessments and compliance documentation, driving headcount growth.

Get the AISec Brief

Weekly career intelligence for AI Security Engineers. Salary data, threat landscape, new roles. Free.

Free weekly email. Unsubscribe anytime.