We've released new content aimed at helping security teams handle internal AI rollouts and the enablement of external AI features.

Greetings,

 

It's a tough time to be a CISO.  The pressure to innovate and to allow innovation -- especially around new AI tech -- is intense.  There are no established standards for encrypting or otherwise protecting AI systems and their data.  So while the business pushes out new AI features, the security team has to figure out how to mitigate the significant inherent risks that come with these technologies.

 

We've been trying to make this a bit easier by teasing apart the AI security landscape, making recommendations, compiling questions to ask SaaS companies that are adding AI features, and more.  Much of this is in our Securing Gen-AI White Paper, but we've recently spun out some bite-size pieces like our new "Essential Security Steps Before Launching Your AI Feature" and "Questions to Ask Your Software Vendor" blogs.

 

We've also been doing original research on attacking AI data that we're excited to share. We'll be giving a presentation on the findings at the upcoming RMISC conference at the end of May -- let us know if you'll be there.

 

That's it for now.  Stay safe and protect your data.


Patrick Walsh
CEO, IronCore 

Upcoming events:

  • Rocky Mountain Information Security Conference (RMISC)
    • May 28, 2025 in Denver, Colorado
    • Title: Illuminating the Dark Corners of AI: Exploiting Shadow Data in AI Models and Embeddings 
    • Abstract: A demonstration of how to extract confidential data and personally identifiable information from fine-tuned LLMs and vector embeddings.  The talk shows how confidential data finds its way into your AI systems and presents attacks for identifying and extracting that sensitive data, highlighting the problem of AI shadow data in RAG workflows and chatbots: the data may be monitored and protected in its primary store but is overlooked, and vulnerable, in the corresponding AI systems.
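
If you're wondering what that "shadow data" looks like in practice, here's a minimal, hypothetical sketch of a typical RAG ingestion step (plain Python, with a stand-in embed function instead of a real embedding model): a plaintext copy of the sensitive source text is stored right beside its vector, outside the protections of the primary store.

    import numpy as np

    def embed(text: str, dim: int = 8) -> np.ndarray:
        # Stand-in for a real embedding model: a deterministic pseudo-random vector.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        vec = rng.standard_normal(dim)
        return vec / np.linalg.norm(vec)

    # Typical RAG ingestion: chunk the document, embed each chunk, and store
    # BOTH the vector and the raw text so the chunk can be handed to the LLM later.
    document = (
        "Q3 board memo: revenue missed target by 12%. "
        "Contact Jane Doe at jane.doe@example.com about the restructuring plan."
    )
    chunks = [document[i : i + 80] for i in range(0, len(document), 80)]

    vector_store = [
        {"embedding": embed(chunk), "text": chunk}  # plaintext copy rides along
        for chunk in chunks
    ]

    # The primary document store may be encrypted and access-controlled, but this
    # side copy -- the shadow data -- often isn't, and rarely gets the same monitoring.
    for record in vector_store:
        print(record["text"])

Deleting or encrypting the record in the primary store does nothing to this side copy, which is why the attacks in the talk focus on the AI-side data.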

 

Important Questions to Ask Your Software Vendor

About the Security of Their AI Features

 

Before trusting your vendor's new AI feature, ask these 12 critical security questions to protect your data, prevent breaches, and ensure compliance.

 

> Read the full blog


Essential Security Steps Before Launching Your AI Feature

 

Stay ahead of threats like prompt injections, data leaks, and model manipulation with proactive measures every company should take before rolling out AI features.

 

> Read the full blog


 

OWASP's Updated Top 10 LLM Includes Vector and Embedding Weaknesses

The Update Looks Beyond Models to the Whole AI Stack 

 

OWASP has released the second version of its Top 10 for LLM Applications.  It now includes major new issues found in the surrounding AI ecosystem, going beyond risks in the models themselves.  In this blog post, we look at the key findings and zoom in on LLM08: Vector and Embedding Weaknesses.

 

> Read the full blog

LinkedIn
X
GitHub
Mastodon
YouTube

IronCore Labs, 1750 30th Street #500, Boulder, CO 80301, United States, 303-261-5067

Unsubscribe | Manage preferences