
AI Security Engineer at Meta

Meta deploys AI across its family of applications (Facebook, Instagram, WhatsApp, Threads) serving over 3 billion monthly active users. The company also develops the Llama family of open-source large language models, which creates a unique security challenge: securing models that are intentionally released for public use. Meta's AI security team protects both internal AI systems and the open-source model ecosystem.

$170,000 to $270,000
Cash Compensation
Hybrid (Multiple Locations)
Work Model
60+ AI security engineers
Team Size Estimate

AI Security Focus

AI security engineers at Meta work on securing the content recommendation and moderation systems that power the social media platforms, protecting the Llama model releases from misuse through built-in safety features, securing the AI infrastructure that processes billions of daily interactions, and researching adversarial attacks on computer vision and NLP models. The open-source strategy creates distinct challenges: Meta must anticipate how released models will be used and misused, and build safety guardrails that cannot simply be removed.
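The last point, that guardrails on a released model cannot rely on deployment-side controls, can be sketched in a few lines. This is a hedged illustration only: `toy_model`, `hosted_endpoint`, and `BLOCKLIST` are hypothetical stand-ins invented for this example, not real Meta code or policy.

```python
# Toy "model": any callable mapping a prompt to a completion.
def toy_model(prompt: str) -> str:
    return f"completion for: {prompt}"

BLOCKLIST = ("make a weapon",)  # hypothetical server-side policy

def hosted_endpoint(prompt: str) -> str:
    """Proprietary deployment: the filter runs server-side, in front of
    the model, so API users cannot bypass it."""
    if any(term in prompt.lower() for term in BLOCKLIST):
        return "REFUSED"
    return toy_model(prompt)

# Open-weight deployment: an adversary holds the weights and calls the
# model directly, so the wrapper above never executes.
direct = toy_model("make a weapon")       # filter bypassed entirely
gated = hosted_endpoint("make a weapon")  # blocked at the endpoint
```

This is why safety behavior for open releases has to live inside the weights themselves (safety fine-tuning, refusal training) rather than in a wrapper an adversary can simply not run.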

Why AI Security Engineers Join Meta

The AI Security Opportunity at Meta

The AI security landscape is evolving rapidly, and Meta sits at a particularly interesting position within it. The AI-in-cybersecurity market reached approximately $30.9 billion in 2025 and continues growing at 22% to 24% annually. Every company deploying AI systems needs security professionals who understand the unique threat surface that ML models create, from adversarial inputs and training data poisoning to model extraction and supply chain attacks.
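The adversarial-input threat named above can be made concrete with a minimal sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic-regression classifier. The weights and input below are invented purely for illustration; production attacks target far larger models, but the mechanics are the same: perturb the input in the sign direction of the loss gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, eps):
    """FGSM against a toy logistic classifier.

    The gradient of binary cross-entropy with respect to the input x
    is (sigmoid(w.x) - y) * w; FGSM steps eps in its sign direction."""
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)

# Hand-picked toy weights and input (hypothetical, illustration only).
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.5, 0.2, 0.1])
y = 1.0  # true label: positive class

clean_score = sigmoid(w @ x)              # ~0.70, classified positive
x_adv = fgsm_attack(x, y, w, eps=0.6)
adv_score = sigmoid(w @ x_adv)            # ~0.22, prediction flipped
```

A small, targeted perturbation flips the prediction even though the model is unchanged, which is the core of the adversarial-input problem defenders must design around.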

At Meta, the scale of AI deployment creates security challenges that most companies will not encounter for years. The threats you face and the defenses you build here become reference points for the broader industry. Engineers who develop expertise in this environment are positioned for leadership roles as AI security matures from a niche specialty into a standard function within every security organization.

The EU AI Act, with high-risk system requirements taking effect August 2026, adds a compliance dimension that makes this work even more critical. Companies with global operations need security engineers who can translate regulatory requirements into technical controls. Experience doing this at a company like Meta is transferable to any organization deploying AI systems in regulated environments.

Technical Requirements

Interview Process

Meta's interview process runs four to five weeks with a recruiter screen, a technical phone screen, and four to five on-site interviews covering security architecture, coding, ML systems design, and behavioral questions. AI security candidates should expect questions on adversarial ML, content integrity systems, and how to evaluate model safety for open-source releases. Meta values scale thinking and the ability to design security systems that work across billions of interactions.

Compensation Details

Cash compensation for AI security engineers at Meta ranges from $170,000 to $270,000, with RSU grants that can add $50,000 to $120,000+ annually at senior levels. Meta uses an engineering leveling system (E3 through E8), with AI security roles typically at E4 to E6. Benefits include comprehensive health coverage, 401(k) match, generous parental leave, and wellness stipends.

Career Development and Growth

AI security is early enough as a discipline that career paths are still being defined. At Meta, common growth trajectories include advancing into senior and staff security engineer roles with increasing scope and strategic responsibility. Engineers who demonstrate both technical depth and leadership ability often move into team lead or management positions as AI security organizations scale.

Beyond the engineering ladder, AI security experience at Meta opens paths into security architecture (designing AI security frameworks at the organizational level), product security leadership (owning the security posture of AI product lines), and advisory roles that shape how the industry approaches AI threats. The regulatory dimension, particularly the EU AI Act and NIST AI RMF, also creates opportunities for engineers who combine technical expertise with governance knowledge to move into CISO-track positions.

The experience you build here is transferable across the industry. Companies of all sizes are building AI security capabilities, and professionals with hands-on experience at a company operating at this scale are in high demand. Whether you stay long-term or use the experience as a career accelerator, the skills and credibility compound over time. Conference presentations, published research, and open-source contributions from your work here become career assets that follow you regardless of where you go next.

The AI security community is small enough that your reputation matters and large enough that there are meaningful career options. Building that reputation through work at Meta gives you visibility with hiring managers, conference organizers, and investors across the AI security ecosystem. The professionals defining this field today will be the directors, VPs, and CISOs leading it in five years. Getting in now, at a company where the problems are real and the impact is measurable, is the best way to position yourself for that trajectory.

Get the AISec Brief

Weekly career intelligence for AI Security Engineers. Salary trends, who's hiring, threat landscape shifts, and certification updates. Free.

Frequently Asked Questions

What does an AI Security Engineer do at Meta?
AI Security Engineers at Meta secure content recommendation and moderation systems, protect Llama open-source model releases, secure AI infrastructure serving 3+ billion users, and research adversarial attacks on computer vision and NLP models.
What is the salary range at Meta for AI security?
Total cash compensation ranges from approximately $170,000 to $270,000. RSU grants can add $50,000 to $120,000+ annually depending on level.
How does open-source model security differ from proprietary model security?
Open-source models like Llama are released publicly, so adversaries have full access to model weights and architecture. Security must be built into the model (safety training, refusal mechanisms) rather than relying on access controls. This is a fundamentally different security paradigm.
What background is best for Meta AI security roles?
Meta values candidates with large-scale systems experience. Backgrounds in content integrity, platform abuse prevention, or ML infrastructure security are strong fits. Understanding how adversarial actors operate at scale matters more than niche ML attack knowledge.
Is Meta hiring AI security engineers in 2026?
Yes. The expansion of AI across all Meta products and the continued development of Llama models are driving demand for AI security professionals. Open roles span both the core AI infrastructure team and the integrity (content safety) team.
