Meta’s AI is leaking user prompts, we’re talking encryption standards, and DEF CON is around the corner.

Greetings,

 

AI is inherently problematic for privacy, and that's before Big Companies make Terrible Decisions. The latest example comes from Facebook/Meta and their new AI app, where they've brilliantly decided to "anonymously" post everyone's AI queries. Here's a real example (credit to Rachel Tobac and this X thread, which has many more examples of easily identifiable prompts that include people's names, addresses, and much more):

 

My sister is a vp development for a small incorporated company, [REDACTED]. The incorporated company has not paid its corp taxes in 12 years. Would my sister be liable for the taxes even though she is just a vp in charge of business development?

 

The work we're doing to encrypt models, vector embeddings, and search data is great for privacy and security, but it can't stop stupid.

 

One question we get a lot is whether the encryption we're using is on its way to becoming a NIST standard. The short answer is, "not yet." The longer answer is in our latest blog post on NIST and what they're doing (or not doing).

 

Hope you're having a great summer if you're in the Northern Hemisphere, and let me know if you'll be in Vegas for Black Hat or DEF CON. I'll be giving a talk at DEF CON and would love to see you.


Patrick Walsh
CEO, IronCore Labs

 

Upcoming events:

  • DEF CON 33
    • Aug 7-10 in Las Vegas, NV
    • Title: Illuminating the Dark Corners of AI: Exploiting Data in AI Models and Vector Embeddings
    • Abstract: This talk explores the hidden risks in apps leveraging modern AI systems—especially those using large language models (LLMs) and retrieval-augmented generation (RAG) workflows—and demonstrates how sensitive data, such as personally identifiable information (PII) and Social Security numbers, can be extracted through real-world attacks.
         
  • OWASP LASCon
    • Oct 23-24 in Austin, TX
    • Title: Hidden Risks of Integrating AI: Extracting Private Data with Real-World Exploits
    • Abstract: We’ll dive into techniques like model inversion attacks targeting fine-tuned models and embedding inversion attacks on vector databases—key components in RAG architectures that supply private data to LLMs for answering specific queries.
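
To make the embedding-inversion risk these talks cover concrete, here's a toy sketch of my own (not the talks' actual exploits, and every name and string below is hypothetical): if raw vectors leak from an unencrypted vector database, an attacker can often recover the underlying text just by nearest-neighbor matching against embeddings of guessed candidate strings.

```python
# Toy illustration of embedding inversion: recovering text from leaked,
# unencrypted vectors via nearest-neighbor search over guessed candidates.
# All strings here are hypothetical; fake_embed() is a stand-in for a real
# embedding model so the example runs with nothing but numpy.
import hashlib

import numpy as np

def fake_embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic pseudo-embedding: one fixed unit vector per string."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

# Strings the attacker guesses might appear in the victim's database.
candidates = [
    "SSN: 123-45-6789",
    "SSN: 987-65-4321",
    "patient was diagnosed with diabetes",
    "Q3 revenue will miss guidance",
]
candidate_vecs = np.stack([fake_embed(c) for c in candidates])

# A vector "leaked" from an unencrypted vector database.
leaked = fake_embed("SSN: 123-45-6789")

# Cosine similarity against every guess; the top hit reveals the plaintext.
scores = candidate_vecs @ leaked
print("recovered:", candidates[int(np.argmax(scores))])
```

Real inversion attacks train a decoder model rather than guessing candidates one by one, but the takeaway is the same: a plaintext embedding is effectively plaintext data.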

 


Vector Encryption, AI, and the Slow Pace of Standards

When Will NIST Bless AI Data Protection?

 

Why Distance-Comparison-Preserving Encryption (DCPE) may be the best option for securing AI data while NIST standards for privacy-preserving encryption remain years away.
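
If you want a feel for the idea before reading the post, here's a back-of-the-napkin sketch (a toy of mine, not IronCore's implementation): encrypt each vector with a secret rotation and scaling plus a little bounded noise, and distance comparisons still come out the same on ciphertext, so nearest-neighbor search keeps working.

```python
# Toy sketch of the DCPE idea (NOT a real implementation): a secret
# orthogonal rotation plus scaling, with small bounded noise, approximately
# preserves distance comparisons, so similarity search works on ciphertext.
import numpy as np

rng = np.random.default_rng(42)
DIM = 8

# Secret key: a random orthogonal matrix (rotation) and a scale factor.
Q, _ = np.linalg.qr(rng.normal(size=(DIM, DIM)))
SCALE = 7.0

def encrypt(v: np.ndarray, noise: float = 0.01) -> np.ndarray:
    """Rotation preserves distances exactly; the noise hides fine geometry."""
    return SCALE * (Q @ v) + rng.normal(scale=noise, size=DIM)

a, b, c = (rng.normal(size=DIM) for _ in range(3))
ea, eb, ec = encrypt(a), encrypt(b), encrypt(c)

# Is a closer to b than to c? The ciphertexts give the same answer.
plain = np.linalg.norm(a - b) < np.linalg.norm(a - c)
cipher = np.linalg.norm(ea - eb) < np.linalg.norm(ea - ec)
print(plain, cipher)
```

Calibrating that noise (too little leaks geometry, too much breaks search) and managing the key are where the real work is, and that's exactly the sort of thing a standards process would pin down.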

 

> Read the full blog

 


Training AI Without Leaking Data

How Encrypted Embeddings Protect Privacy

 

We have a new white paper, not yet publicly available, on encrypting embeddings and then training models on the encrypted training set. The goal is to preserve the privacy and security of the training data even when it's used in non-production or lab environments where engineers have unfettered access.
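
The intuition, sketched below on synthetic data (my toy illustration, not the paper's actual method): because a geometry-preserving transform like the one above keeps relative distances intact, a simple classifier trained only on encrypted embeddings scores about as well as one trained on the originals, and the plaintext vectors never have to enter the lab.

```python
# Toy sketch of the premise: train only on encrypted embeddings. Synthetic
# data; the rotate+scale+noise "encryption" mirrors the DCPE sketch above.
import numpy as np

rng = np.random.default_rng(7)
DIM = 16

# Secret key: an orthogonal rotation that never leaves the secure side.
Q, _ = np.linalg.qr(rng.normal(size=(DIM, DIM)))

def encrypt(X: np.ndarray) -> np.ndarray:
    """Encrypt a batch of embeddings (one per row of X)."""
    return 5.0 * (X @ Q.T) + rng.normal(scale=0.01, size=X.shape)

# Two synthetic classes of "embeddings" (say, spam vs. not-spam).
X = np.vstack([
    rng.normal(loc=-1.0, size=(100, DIM)),
    rng.normal(loc=+1.0, size=(100, DIM)),
])
y = np.array([0] * 100 + [1] * 100)

# Engineers in the lab only ever see the encrypted vectors.
Xe = encrypt(X)

# Nearest-centroid classifier trained purely on ciphertext.
centroids = np.stack([Xe[y == k].mean(axis=0) for k in (0, 1)])
dists = np.linalg.norm(Xe[:, None, :] - centroids[None, :, :], axis=2)
print("accuracy on encrypted embeddings:", (dists.argmin(axis=1) == y).mean())
```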

 

> Download the PDF

 

LinkedIn
X
GitHub
Mastodon
YouTube

IronCore Labs, 1750 30th Street #500, Boulder, CO 80301, United States, (303) 261-5067

Unsubscribe | Manage preferences