HiddenLayer raises $50M for its AI-defending cybersecurity tools | TechCrunch

HiddenLayer, a security startup focused on protecting AI systems from adversarial attacks, today announced that it raised $50 million in a funding round co-led by M12 and Moore Strategic Ventures with participation from Booz Allen Hamilton, IBM, Capital One and TenEleven.

Bringing the company’s total raised to $56 million, the new funds will be put toward supporting HiddenLayer’s go-to-market efforts, expanding its headcount from 50 employees to 90 by the end of the year and further investing in R&D, co-founder and CEO Chris Sestito told TechCrunch via email.

“HiddenLayer is a cybersecurity company focused on protecting AI from adversarial attacks. Specifically, we extend detection and response to AI,” Sestito said. “We’re scaling quickly to meet market demand for our machine learning security platform which is coming from all industries across the globe.”

Sestito co-founded HiddenLayer with Jim Ballard and Tanner Burns in 2019. Shortly before, Sestito was leading threat research at Cylance, the antivirus startup later acquired by BlackBerry.

HiddenLayer’s platform provides tools to protect AI models against adversarial attacks, vulnerabilities and malicious code injections. It monitors the inputs and outputs of AI systems and tests models’ integrity prior to deployment.

“Many data scientists rely on pre-trained, open source or proprietary machine learning models to shorten analysis time and simplify the testing effort before gleaning insight from complex datasets,” Sestito said. “This involves using pre-trained, open-source models available for public use — exposing organizations to transfer learning attacks from tampered publicly available models.”

Lest customers be concerned that HiddenLayer has access to their proprietary models, the company claims it uses techniques to observe only vectors — or mathematical representations — of inputs to models and the outputs resulting from them.

“The system learns what’s normal for a unique AI application without ever needing to be explicitly told,” Sestito said.

HiddenLayer also contributes to MITRE ATLAS, a knowledge base of adversarial AI tactics and techniques maintained by the not-for-profit MITRE Corporation. Sestito claims that HiddenLayer can protect against all 64 unique attack types listed in ATLAS, including IP theft, model extraction, inferencing attacks, model evasion and data poisoning.

When I last spoke to an expert — AI researcher Mike Cook at the Knives and Paintbrushes collective — about what HiddenLayer’s doing, he said it’s unclear whether the platform is “truly groundbreaking or new.” But he did point out that there’s a benefit to the platform’s packaging up of knowledge about attacks on AI, making it more widely accessible.

It’s difficult to pin down real-world examples of attacks at scale against AI. Research into the topic has exploded, with more than 1,500 papers on AI security published in 2019 on the scientific publishing site Arxiv.org, up from 56 in 2016, according to a study from Adversa. But there’s little public reporting on attempts by hackers to, for example, attack commercial facial recognition systems — assuming such attempts are happening in the first place.

On the other hand, some government agencies are sounding the alarm over potential attacks on AI systems.

Recently, the National Cyber Security Centre, the U.K.’s cybersecurity authority, warned of threat actors manipulating the tech behind large language model chatbots (e.g. ChatGPT) to access confidential information, generate offensive content and “trigger unintended consequences.” Elsewhere, last year, the U.S. government’s Office of Science and Technology Policy published an “AI Bill of Rights,” which recommends that AI systems undergo pre-deployment testing, risk identification and mitigation and ongoing monitoring to demonstrate that they’re safe and effective based on their intended use.

Companies are coming around to this viewpoint, as well — allegedly.

In a Forrester study commissioned by HiddenLayer (and thus to be taken with a grain of salt), the majority of companies responding said they currently rely on manual processes to address AI model threats, and 86% were “extremely concerned or concerned” about their organization’s machine learning model security. Meanwhile, Gartner reported in 2022 that 2 in 5 organizations had an AI privacy breach or security incident within the past year and that 1 in 4 of those attacks were malicious.

Sestito asserts the threat — regardless of its size today — will grow with the AI market, implicitly to the advantage of HiddenLayer. He acknowledges that several startups already offer products designed to make AI systems more robust, including Robust Intelligence, CalypsoAI and Troj.ai. But Sestito claims that HiddenLayer stands alone in its AI-driven detection and response approach.

The platform’s gained traction, certainly. Beyond partnerships with Databricks and Intel, HiddenLayer claims to have Fortune 100 customers in the financial, cybersecurity, and government and defense sectors — including the U.S. Air Force and Space Force.

“The breakneck pace of AI adoption has left many organizations struggling to put in place the proper processes, people, and controls necessary to protect against the risks and attacks inherent to machine learning,” Sestito said. “The risk of implementing AI and machine learning into an organization only continues to grow … We are scaling quickly to meet market demand for our platform, which is coming from all industries across the globe.”
