WitnessAI builds guardrails for generative AI models

Generative AI makes things up. It can be biased. Sometimes it spits out toxic text. So can it be "safe"?

Rick Caccia, the CEO of WitnessAI, believes it can.

"Securing AI models is a real problem, and it's one that's particularly glaring for AI researchers, but it's different from securing usage," said Caccia, the former senior vice president of marketing at Palo Alto Networks, in an interview with TechCrunch. "I think of it like a sports car: a more powerful engine (i.e., a model) doesn't buy you anything unless you have good brakes and steering, too. The controls are just as important to driving fast as the engine."

There is certainly demand for such controls among businesses, which — while cautiously optimistic about the productivity-enhancing potential of generative artificial intelligence — are concerned about the technology’s limitations.

Fifty-one percent of CEOs are hiring for generative AI-related roles that didn't exist until this year, according to an IBM poll. Yet only 9% of companies say they're prepared to manage threats — including threats related to privacy and intellectual property — arising from their use of generative AI, according to a Riskonnect survey.

WitnessAI's platform intercepts activity between employees and the custom generative AI models their employer uses — not models gated behind an API like OpenAI's GPT-4, but rather models along the lines of Meta's Llama 3 — and applies policies and safeguards to reduce risk.

"One of the promises of enterprise AI is that it unlocks and democratizes business data so employees can do their jobs better. But unlocking all that sensitive data too well, or having it leak or get stolen, is a problem," Caccia added.

WitnessAI sells access to several modules, each aimed at a different form of generative AI risk. One lets organizations implement rules to prevent staff on particular teams from using generative AI-powered tools in ways they shouldn't (e.g., asking about pre-release earnings reports or pasting internal codebases). Another redacts proprietary and sensitive information from the queries sent to models and applies techniques to shield models against attacks that might force them to go off script.
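To make the mechanics concrete, here is a minimal sketch of what such an intercept-and-sanitize step might look like, in Python. Everything in it (the team policies, the redaction patterns, the function name) is hypothetical and for illustration only; WitnessAI has not published its implementation.

```python
import re

# Hypothetical per-team policy: topics a team's members may not ask about.
BLOCKED_TOPICS = {
    "finance": [re.compile(r"earnings report", re.I)],
}

# Hypothetical redaction patterns for sensitive data in outgoing prompts.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
]

def screen_prompt(team: str, prompt: str) -> str | None:
    """Return a sanitized prompt to forward to the model, or None to block it."""
    for pattern in BLOCKED_TOPICS.get(team, []):
        if pattern.search(prompt):
            return None  # policy violation: the prompt never reaches the model
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

# A finance analyst's prompt is blocked; an engineer's is forwarded redacted.
print(screen_prompt("finance", "Summarize the draft earnings report"))     # None
print(screen_prompt("eng", "Email alice@example.com the outage summary"))  # redacted
```

A real gateway would sit as a proxy in front of the model endpoint and log each decision; the sketch only shows the screening logic itself.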

"We think the best way to help businesses is to define the problem in a way that makes sense, like the safe adoption of AI, and then sell a solution that addresses the problem," Caccia said. "CISOs want to protect the business, and WitnessAI helps them do that by ensuring data protection, preventing prompt injection and enforcing identity-based policies. The chief privacy officer wants to ensure that existing — and upcoming — regulations are followed, and we give them visibility and a way to report on activity and risk."

But there’s one tricky thing about WitnessAI from a privacy perspective: All data goes through its platform before reaching the model. The company is transparent about this, even offering tools to track which models employees access, the questions they ask the models, and the answers they get. But that could create its own privacy risks.

In response to questions about WitnessAI's privacy policy, Caccia said the platform is "isolated" and encrypted to prevent customer secrets from becoming public.

"We've built a millisecond-latency platform with built-in regulatory separation — a unique, isolated design to protect enterprise AI activity in a way that's fundamentally different from the usual multi-tenant software-as-a-service," he said. "We create a separate instance of our platform for each customer, encrypted with their keys. Their AI activity data is isolated to them — we can't see it."
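For a sense of what per-customer isolation can mean in practice, here is a minimal sketch of per-tenant encryption at rest using the open-source `cryptography` library's Fernet recipe. It illustrates the general pattern Caccia describes (each customer's activity data encrypted under that customer's own key), not WitnessAI's actual design; the class and its storage layout are invented for the demo.

```python
from cryptography.fernet import Fernet

class TenantStore:
    """Toy per-tenant store: each customer's activity log is encrypted with
    that customer's own key, so one tenant's records are unreadable without
    that tenant's key. Illustrative only, not WitnessAI's code."""

    def __init__(self) -> None:
        self._keys: dict[str, bytes] = {}
        self._logs: dict[str, list[bytes]] = {}

    def register(self, tenant: str) -> None:
        # In practice the customer would hold or supply this key;
        # here we simply generate one per tenant for the demo.
        self._keys[tenant] = Fernet.generate_key()
        self._logs[tenant] = []

    def record(self, tenant: str, event: str) -> None:
        token = Fernet(self._keys[tenant]).encrypt(event.encode())
        self._logs[tenant].append(token)  # only ciphertext is stored

    def read(self, tenant: str) -> list[str]:
        f = Fernet(self._keys[tenant])
        return [f.decrypt(t).decode() for t in self._logs[tenant]]

store = TenantStore()
store.register("acme")
store.record("acme", "user asked model X about ticket #123")
print(store.read("acme"))  # decryptable only with acme's key
```

If the operator never holds the tenant's key, it cannot decrypt the log, which is the property the quote is claiming.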

Perhaps that will allay buyers' fears. As for workers worried about the surveillance potential of WitnessAI's platform, it's a tougher call.

Surveys show that people generally don't appreciate having their workplace activity monitored, regardless of the reason, and believe it has a negative effect on company morale. Nearly a third of respondents to a Forbes survey said they might consider leaving their jobs if their employer monitored their online activity and communications.

But Caccia says interest in the WitnessAI platform has been and remains strong, with 25 early corporate users in a proof-of-concept phase. (The platform won't become generally available until the third quarter.) And, in a vote of confidence from VCs, WitnessAI has raised $27.5 million from Ballistic Ventures (which incubated WitnessAI) and GV, Google's corporate venture arm.

The plan is to put the funding tranche toward growing WitnessAI's team from 18 people to 40 by the end of the year. That growth will certainly be key to beating back WitnessAI's rivals in the emerging space of model compliance and management solutions, not only from tech giants like AWS, Google and Salesforce, but also from startups like CalypsoAI.

"We've made our plan to go well into 2026 even if we don't have any sales at all, but we already have almost 20 times what we need to meet our sales goals this year," Caccia said. "This is our initial funding round and public launch, but safely enabling and using AI is new territory, and all of our features are evolving with this new market."
