Subscribe
AI Security Startup

AI Security Engineer at Lakera

Lakera is a dedicated AI security company focused on protecting LLM applications from prompt injection, data leakage, and other AI-specific threats. The company's Lakera Guard product provides real-time protection for LLM APIs, detecting and blocking malicious inputs before they reach the model. Lakera gained significant attention for its Gandalf challenge, a public prompt injection game that attracted millions of attempts and generated a rich dataset of real-world attack patterns.

$140,000 to $220,000
Total Compensation
Remote (Global)
Work Model
15 to 25 engineers
Team Size Estimate

AI Security Focus

Engineers at Lakera build the core detection systems that identify prompt injection, jailbreak attempts, data exfiltration, and other adversarial inputs in real time. The work involves training and fine-tuning classifiers on adversarial input datasets, building low-latency inference pipelines that can process API requests without adding significant overhead, researching new attack techniques, and maintaining the detection accuracy as attackers adapt their methods.

Why AI Security Engineers Join Lakera

The AI Security Opportunity at Lakera

The AI security landscape is evolving rapidly, and Lakera sits at a particularly interesting position within it. The AI-in-cybersecurity market reached approximately $30.9 billion in 2025 and continues growing at 22% to 24% annually. Every company deploying AI systems needs security professionals who understand the unique threat surface that ML models create, from adversarial inputs and training data poisoning to model extraction and supply chain attacks.

At Lakera, the scale of AI deployment creates security challenges that most companies will not encounter for years. The threats you face and the defenses you build here become reference points for the broader industry. Engineers who develop expertise in this environment are positioned for leadership roles as AI security matures from a niche specialty into a standard function within every security organization.

The EU AI Act, with high-risk system requirements taking effect August 2026, adds a compliance dimension that makes this work even more critical. Companies with global operations need security engineers who can translate regulatory requirements into technical controls. Experience doing this at a company like Lakera is transferable to any organization deploying AI systems in regulated environments.

Technical Requirements

Interview Process

Lakera runs a focused interview process over two to three weeks. It includes a recruiter call, a technical deep-dive on AI security concepts and adversarial ML, a practical coding exercise involving detection system design, and a culture fit conversation with founders. The company looks for candidates who are genuinely passionate about AI security and can contribute to both research and engineering. Given the startup stage, breadth of skills matters as much as depth.

Compensation Details

Total cash compensation at Lakera ranges from $140,000 to $220,000, which is below frontier AI lab rates but competitive for a Series A/B startup. Equity grants are significant and represent the primary upside. Compensation may vary based on location within the global remote team. Benefits vary by country but typically include health coverage, flexible PTO, and learning stipends.

Career Development and Growth

AI security is early enough as a discipline that career paths are still being defined. At Lakera, common growth trajectories include advancing into senior and staff security engineer roles with increasing scope and strategic responsibility. Engineers who demonstrate both technical depth and leadership ability often move into team lead or management positions as AI security organizations scale.

Beyond the engineering ladder, AI security experience at Lakera opens paths into security architecture (designing AI security frameworks at the organizational level), product security leadership (owning the security posture of AI product lines), and advisory roles that shape how the industry approaches AI threats. The regulatory dimension, particularly the EU AI Act and NIST AI RMF, also creates opportunities for engineers who combine technical expertise with governance knowledge to move into CISO-track positions.

The experience you build here is transferable across the industry. Companies of all sizes are building AI security capabilities, and professionals with hands-on experience at a company operating at this scale are in high demand. Whether you stay long-term or use the experience as a career accelerator, the skills and credibility compound over time. Conference presentations, published research, and open-source contributions from your work here become career assets that follow you regardless of where you go next.

The AI security community is small enough that your reputation matters and large enough that there are meaningful career options. Building that reputation through work at Lakera gives you visibility with hiring managers, conference organizers, and investors across the AI security ecosystem. The professionals defining this field today will be the directors, VPs, and CISOs leading it in five years. Getting in now, at a company where the problems are real and the impact is measurable, is the best way to position yourself for that trajectory.

Get the AISec Brief

Weekly career intelligence for AI Security Engineers. Salary trends, who's hiring, threat landscape shifts, and certification updates. Free.

Frequently Asked Questions

What does an engineer do at Lakera?
Engineers at Lakera build detection systems for prompt injection, jailbreak attempts, and data exfiltration in LLM applications. The work spans ML classifier training, real-time inference pipeline engineering, and adversarial research.
What is the salary range at Lakera?
Total cash compensation ranges from approximately $140,000 to $220,000. Equity grants represent significant additional upside at the startup stage.
What is Lakera Guard?
Lakera Guard is a real-time protection layer for LLM APIs. It sits between the application and the LLM, detecting and blocking adversarial inputs before they reach the model. It is the company core product.
What is the Gandalf challenge?
Gandalf is a public prompt injection game created by Lakera. Participants try to extract a secret password from an LLM through prompt manipulation. It attracted millions of attempts and generated a valuable dataset of real-world attack patterns.
Is Lakera fully remote?
Yes. Lakera operates as a remote-first company with a global team. The company is headquartered in Zurich, Switzerland, but hires across multiple countries and time zones.

Get the AISec Brief

Weekly career intelligence for AI Security Engineers. Salary data, threat landscape, new roles. Free.

Free weekly email. Unsubscribe anytime.