Tips for evaluating AI features in security products at RSA, upcoming events, and a new blog on protecting private data used in AI models

Greetings,

I'm not going to be at RSA this year due to scheduling conflicts, but from what I've read about the vendor floor, it sounds like you can't turn in any direction without a sign trumpeting "AI" at you.

AI is certainly amazing and has a ton of potential, but just because these are security vendors doesn't mean they're using AI in safe and secure ways. If you missed it back in February, we put out a blog, AI Security: 12 Questions For Your Vendor, and we highly encourage you to ask those questions of anyone selling you on their new AI features. From where we sit, adoption of AI is generally outpacing adoption of security for it and protection of the data it touches.

Meanwhile, if you're building AI, or building software that combines it with private data, you might want to check out our latest blog on ways to protect both training data and the data hidden inside models from attackers and even from insiders.

That's it for this month. I hope you enjoy RSA if you're there, and let me know what I missed.

PS: Check out the events below. Next week's interview/webinar with CNXN Helix will be a good one, as will our talk at RMISC for those in Colorado.

Patrick Walsh
CEO, IronCore Labs

Upcoming events:

  • CNXN Helix Virtual Event (webinar)
    • May 7, 2025 @ 2pm ET
    • Title: Securing AI: Navigating Shadow Data and Emerging Threats
    • Abstract: IronCore CEO Patrick Walsh will join Jamal Khan of Helix for a deep dive on emerging threats in AI, vectors, vector databases, and more. The discussion will cover data privacy and best practices, followed by Q&A. Please join us!
  • Rocky Mountain Information Security Conference (RMISC)
    • May 28, 2025 in Denver, Colorado
    • Title: Illuminating the Dark Corners of AI: Exploiting Shadow Data in AI Models and Embeddings 
    • Abstract: A demonstration of how to extract confidential data and personally identifiable information from fine-tuned LLMs and vector embeddings. The talk shows how confidential data finds its way into your AI systems and presents attacks for identifying and extracting that sensitive data, highlighting the problem of AI shadow data in RAG workflows and chatbots. The data may be monitored and protected in its primary store, but it is vulnerable and overlooked in the corresponding AI systems. (A toy sketch of how that copying happens follows below.)
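For a concrete sense of how that copying happens, here's a minimal, hypothetical RAG-ingestion sketch in Python. The fake_embed helper, the store shape, and the field names are illustrative assumptions, not any particular product's API; the point is that the vector database ends up holding both the embedding and the verbatim chunk text, a shadow copy of the sensitive source.

import hashlib

# Stand-in for a real embedding model: a deterministic pseudo-vector.
def fake_embed(text):
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:8]]

vector_store = []  # stand-in for a vector database collection

document = "Patient Jane Doe, SSN 123-45-6789, diagnosed with ..."
for chunk in [document[i:i + 40] for i in range(0, len(document), 40)]:
    vector_store.append({
        # Research has shown text is often recoverable from real embeddings.
        "embedding": fake_embed(chunk),
        # Most RAG setups also store the plaintext chunk for retrieval,
        # duplicating the sensitive data outside its primary store.
        "text": chunk,
    })

print(vector_store[0]["text"])  # the PII now lives in a second system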

Training AI Without Leaking Data

How Encrypted Embeddings Protect Privacy

Learn how to protect sensitive data in AI training by using encrypted vector embeddings. This blog explores privacy risks in AI and presents secure methods like approximate-distance-comparison-preserving encryption to enable private, efficient machine learning without exposing personal or business information.

> Read the full blog
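If you're curious what that looks like mechanically, below is a toy Python sketch of the core idea behind approximate-distance-comparison-preserving encryption. The scale factor, noise bound, and function names are made up for illustration; this is a sketch of the concept, not IronCore's actual scheme. Scale each embedding by a secret factor and add small bounded noise, and nearest-neighbor search still returns the right answer on the encrypted vectors.

import numpy as np

rng = np.random.default_rng(7)
SECRET_SCALE = 12.5   # hypothetical secret key material
NOISE_BOUND = 0.01    # bounds the "approximate" part of the scheme

def encrypt(v):
    # Scale by the secret factor and perturb with small bounded noise so
    # relative distances between vectors are approximately preserved.
    return SECRET_SCALE * v + rng.uniform(-NOISE_BOUND, NOISE_BOUND, v.shape)

docs = rng.normal(size=(100, 8))   # stand-in document embeddings
query = docs[42] + 0.001           # a query vector very close to doc 42

enc_docs = np.array([encrypt(d) for d in docs])
enc_query = encrypt(query)

# Nearest neighbor computed on plaintext and on ciphertext agree (index 42),
# so similarity search works without ever decrypting the embeddings.
print(np.argmin(np.linalg.norm(docs - query, axis=1)))
print(np.argmin(np.linalg.norm(enc_docs - enc_query, axis=1)))

The noise bound trades a little search accuracy for security; the blog above walks through the real construction and its guarantees.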


AI Shadow Data White Paper Download

There are three major areas of untracked and unprotected shadow data in AI systems where copies of sensitive data accrue. Learn about the areas of AI shadow data and how to manage them.

> Download the PDF
