
AI Security Engineer at HiddenLayer

HiddenLayer is a venture-backed startup focused on securing machine learning models in production. Unlike Lakera, which focuses primarily on LLM input/output security, HiddenLayer protects the models themselves from supply chain attacks, adversarial manipulation, and model theft. The company raised $50 million in Series A funding and works with enterprise customers deploying ML models in critical applications including financial services, healthcare, and government.

Total Compensation: $145,000 to $225,000
Work Model: Hybrid (Austin)
Team Size Estimate: 20 to 30 engineers

AI Security Focus

Engineers at HiddenLayer build model scanning tools that detect malicious code hidden in ML model files (a growing supply chain threat), runtime monitoring systems that identify adversarial manipulation of models in production, and model integrity verification systems. The work addresses threats that traditional security tools cannot detect: backdoors embedded during training, adversarial inputs designed to cause specific misclassifications, and model extraction attacks that steal proprietary model weights through API queries.
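The supply-chain threat described above is concrete: Python's pickle format, which underlies many model serialization schemes, can execute arbitrary code when a file is loaded. A minimal sketch of the static-scanning idea, using the standard library's `pickletools` to flag opcodes that can invoke callables. This is an illustration of the technique class, not HiddenLayer's actual detection logic, and the opcode list is an assumption.

```python
import pickle
import pickletools

# Opcodes that can invoke arbitrary callables when a pickle is loaded.
# Illustrative assumption: real scanners use broader, tuned rule sets.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def scan_pickle(data: bytes) -> list[str]:
    """Return descriptions of potentially dangerous opcodes in a pickle stream."""
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name} at byte {pos}: {arg!r}")
    return findings

# A benign payload (plain data) triggers no findings.
benign = pickle.dumps({"weights": [0.1, 0.2, 0.3]})
print(scan_pickle(benign))  # []

# A payload that smuggles in a callable via __reduce__ is flagged,
# because deserializing it would call that function.
class Payload:
    def __reduce__(self):
        return (print, ("model file executed code on load",))

print(scan_pickle(pickle.dumps(Payload())))
```

Static opcode inspection like this never deserializes the file, which is the point: the malicious code is detected without ever being given a chance to run.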

Why AI Security Engineers Join HiddenLayer

The AI Security Opportunity at HiddenLayer

The AI security landscape is evolving rapidly, and HiddenLayer sits at a particularly interesting position within it. The AI-in-cybersecurity market reached approximately $30.9 billion in 2025 and continues growing at 22% to 24% annually. Every company deploying AI systems needs security professionals who understand the unique threat surface that ML models create, from adversarial inputs and training data poisoning to model extraction and supply chain attacks.

HiddenLayer's enterprise customers deploy ML at a scale that creates security challenges most companies will not encounter for years. The threats you face and the defenses you build here become reference points for the broader industry. Engineers who develop expertise in this environment are positioned for leadership roles as AI security matures from a niche specialty into a standard function within every security organization.

The EU AI Act, with high-risk system requirements taking effect August 2026, adds a compliance dimension that makes this work even more critical. Companies with global operations need security engineers who can translate regulatory requirements into technical controls. Experience doing this at a company like HiddenLayer is transferable to any organization deploying AI systems in regulated environments.

Technical Requirements

Interview Process

HiddenLayer's interview process runs two to three weeks and includes a recruiter screen, a technical discussion on ML model security threats, a hands-on exercise involving model analysis or vulnerability identification, and conversations with the engineering team. The company values candidates who understand the model lifecycle deeply, from training through deployment, and can identify security risks at each stage.

Compensation Details

Total cash compensation at HiddenLayer ranges from $145,000 to $225,000. Equity grants are meaningful given the early stage and recent $50M Series A. Austin-based roles benefit from the lower cost of living compared to the Bay Area while maintaining competitive compensation. Benefits include health insurance, 401(k), flexible PTO, and conference attendance budgets.

Career Development and Growth

AI security is early enough as a discipline that career paths are still being defined. At HiddenLayer, common growth trajectories include advancing into senior and staff security engineer roles with increasing scope and strategic responsibility. Engineers who demonstrate both technical depth and leadership ability often move into team lead or management positions as AI security organizations scale.

Beyond the engineering ladder, AI security experience at HiddenLayer opens paths into security architecture (designing AI security frameworks at the organizational level), product security leadership (owning the security posture of AI product lines), and advisory roles that shape how the industry approaches AI threats. The regulatory dimension, particularly the EU AI Act and NIST AI RMF, also creates opportunities for engineers who combine technical expertise with governance knowledge to move into CISO-track positions.

The experience you build here is transferable across the industry. Companies of all sizes are building AI security capabilities, and professionals with hands-on experience defending production ML systems are in high demand. Whether you stay long-term or use the experience as a career accelerator, the skills and credibility compound over time. Conference presentations, published research, and open-source contributions from your work here become career assets that follow you regardless of where you go next.

The AI security community is small enough that your reputation matters and large enough that there are meaningful career options. Building that reputation through work at HiddenLayer gives you visibility with hiring managers, conference organizers, and investors across the AI security ecosystem. The professionals defining this field today will be the directors, VPs, and CISOs leading it in five years. Getting in now, at a company where the problems are real and the impact is measurable, is the best way to position yourself for that trajectory.

Get the AISec Brief

Weekly career intelligence for AI Security Engineers. Salary trends, who's hiring, threat landscape shifts, and certification updates. Free.

Frequently Asked Questions

What does an engineer do at HiddenLayer?
Engineers at HiddenLayer build model scanning tools for supply chain security, runtime monitoring for adversarial detection, and model integrity verification systems. The work focuses on protecting ML models themselves, not just their inputs and outputs.
What is the salary range at HiddenLayer?
Total cash compensation ranges from approximately $145,000 to $225,000. Equity grants add significant upside at the Series A stage.
How is HiddenLayer different from Lakera?
Lakera focuses on LLM input/output security (prompt injection, jailbreaks). HiddenLayer focuses on model-level security (supply chain attacks, model integrity, runtime manipulation). They address complementary threat vectors.
What ML model threats does HiddenLayer detect?
HiddenLayer detects malicious code hidden in model files (supply chain attacks), backdoors embedded during training, adversarial inputs causing targeted misclassifications, and model extraction attempts through API probing.
Do I need ML experience for HiddenLayer?
Yes. HiddenLayer requires deep understanding of ML model internals because the product operates at the model level. Candidates from ML engineering backgrounds with security interest, or security engineers with strong ML knowledge, are both competitive.
