Greetings,
Our CTO, Bob Wall, is at RMISC this week demonstrating attacks that extract private data from AI systems. He shows how to pull sensitive data out of system prompts, how to get a model -- even one trained not to do so -- to cough up private data that was fine-tuned into it, and, of course, how to invert text embeddings back into the original text.
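For the curious, here's a rough sense of what embedding inversion means. The sketch below is a toy, not Bob's actual demo: the vocabulary, the bag-of-words embed() "model," and the greedy invert() search are all invented for illustration. It shows how someone holding only a stolen vector -- no text, no model access beyond the ability to embed candidate strings -- can recover the words behind it.

```python
# Toy illustration of embedding inversion. Everything here (the vocabulary,
# the bag-of-words embed() "model," and the greedy invert() search) is
# invented for demonstration; real attacks target real embedding models.
import numpy as np

VOCAB = ["the", "patient", "has", "a", "rare", "blood", "disorder",
         "invoice", "is", "overdue", "password", "reset", "today"]
DIM = 64
rng = np.random.default_rng(0)
WORD_VECS = {w: rng.standard_normal(DIM) for w in VOCAB}

def embed(text: str) -> np.ndarray:
    """Stand-in embedding model: normalized mean of per-word vectors."""
    v = np.mean([WORD_VECS[w] for w in text.split()], axis=0)
    return v / np.linalg.norm(v)

def invert(target: np.ndarray, max_words: int = 6) -> str:
    """Greedily pick the words whose combined embedding best matches target."""
    chosen: list[str] = []
    for _ in range(max_words):
        candidates = [w for w in VOCAB if w not in chosen]
        best = max(candidates,
                   key=lambda w: float(target @ embed(" ".join(chosen + [w]))))
        chosen.append(best)
        if target @ embed(" ".join(chosen)) > 0.999:  # near-perfect match
            break
    return " ".join(chosen)

secret = "patient has a rare blood disorder"
stolen_vector = embed(secret)  # all the attacker ever sees is this vector
# Word order is lost by this toy bag-of-words model, but the content isn't.
print(invert(stolen_vector))
```

Real attacks against production embedding models use trained inversion models rather than a greedy vocabulary search, but the principle is the same: the vector preserves the text.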
If you're using private data in an AI system, it's likely you have at least one copy of that data, and potentially three or more (in prompts, fine-tuned model weights, and vector embeddings), sitting in new places your security tools can't see. We'll be posting more of this research next month.
On another note, we just published a new blog, Breaches Happen — But Data Theft Shouldn't, which looks at recent breaches, their proximal causes, and their root causes. In every case, the media and others involved examine things at a microscopic level while missing the big picture. A single vulnerability, stolen credential, or cloud misconfiguration is often enough to compromise sensitive data, and that kills me. There's no excuse for approaches to security that stop at one layer of defense.
That's it for this month. Summer is here in the Northern Hemisphere, and I hope you have great plans to enjoy it.
PS - If you missed the interview/webinar with CNXN Helix earlier this month, you can find it on their on-demand page -- just search for "securing ai."