New content, new talks, and new thoughts on security, encryption, and hacking AI

Greetings,

 

Our CTO, Bob Wall, is at RMISC this week demonstrating attacks that extract private data from AI systems. He shows how to attack system prompts that contain sensitive data, how to get a model -- even one trained not to do so -- to cough up private data that was fine-tuned into it, and, of course, how to invert text embeddings back into text.

 

If you're using private data in an AI system, it's likely that you have at least one copy -- and potentially three or more copies -- of that data in new places your security tools can't see. We'll be posting more of our research next month.

 

On another note, we just published a new blog, Breaches Happen — But Data Theft Shouldn't, which looks at recent breaches, their proximal causes, and their root causes. In every case, the media and others involved see things at a microscopic level while missing the big picture. A single vulnerability, stolen credential, or cloud misconfiguration is often enough to compromise sensitive data, and that kills me. There's no excuse for security approaches that stop at one layer of defense.

 

That's it for this month. Summer is here in the Northern Hemisphere, and I hope you have great plans to enjoy it.

 

 

PS - If you missed the interview/webinar with CNXN Helix earlier this month, you can find it on their on-demand page -- just search for "securing ai."


Patrick Walsh
CEO, IronCore Labs


Breaches Happen — But Data Theft Shouldn't

Why Hackers Keep Winning
(and How One Change Could Flip the Script)
 

 

Major breaches expose the data of millions of people, all due to single points of failure and a lack of defense in depth. Read about why journalists and the industry are pointing fingers in the wrong places, focusing too much on predictable problems and not enough on resiliency.

 

> Read the full blog

 


Training AI Without Leaking Data

How Encrypted Embeddings Protect Privacy

 

A new white paper, not yet publicly available, on encrypting embeddings and then training models on the encrypted training set. This approach preserves the privacy and security of the training data even when it's used in non-production or lab environments where engineers have unfettered access.

 

> Download the PDF

 

LinkedIn
X
GitHub
Mastodon
YouTube

IronCore Labs, 1750 30th Street #500, Boulder, CO 80301, United States, 303-261-5067

Unsubscribe Manage preferences