Lessons from DEF CON and the AI Risk Summit; new trends with large financial companies; and new conference dates.

Greetings,

 

I've been hitting the conference circuit lately and am just back from DEF CON 33 and SecurityWeek's AI Risk Summit. Those were different talks with very different audiences -- developers and hackers at one and CISOs at the other -- but the level of interest at both was very high. Without a doubt, few people yet understand the risks of these new GenAI systems or how easy it is to steal data out of them.

 

We had incredibly positive feedback following these talks, and we're encouraged to build on our research in this area and keep raising awareness (see the three upcoming conferences listed below).

 

Meanwhile, we continue to help companies add advanced security and encryption to their offerings, and we've noticed a trend: financial companies, in particular, are increasingly the driving force behind features like BYOK/HYOK. I wrote about that in a recent blog post.

 

And stay tuned for an upcoming post on why I think data labeling needs to be rethought. It's no longer just the type of data that matters. With GenAI in the mix, you have to label for data provenance, too.

 

Enjoy your last weekend of summer. Reach out if you're in a town we're coming to or if you'll be at one of the conferences below. I'd love to say hello in person.


Patrick Walsh
CEO, IronCore Labs

 

Upcoming events:

  • Jamf Nation User Conference
    • Oct 9, 11:30am MT in Denver, CO
    • Title: Empowering Customer Trust and Compliance: The Value of Bring Your Own Encryption Key (BYOK)
    • Abstract: Join us for practical insights and real-world success stories highlighting how BYOK adoption leads directly to increased customer satisfaction, improved security posture, and stronger market positioning. Attendees will leave with actionable strategies for effectively communicating the value of BYOK to customers, driving adoption, and ensuring seamless integration into existing workflows.
  • OWASP LASCon
    • Oct 24, 1pm CT in Austin, TX
    • Title: Hidden Risks of Integrating AI: Extracting Private Data with Real-World Exploits
    • Abstract: This talk demonstrates how sensitive data, such as personally identifiable information (PII), can be extracted from apps built on modern AI systems through real-world attacks. We’ll dive into techniques like model inversion attacks targeting fine-tuned models, and embedding inversion attacks on vector databases—key components in RAG architectures that supply private data to LLMs for answering specific queries.

  • OWASP Global AppSec
    • Nov 6, 10:30am ET in Washington, DC
    • Title: Hidden Risks of Integrating AI: Extracting Private Data with Real-World Exploits
    • Abstract: This talk explores the hidden risks in apps leveraging modern AI systems—especially those using large language models (LLMs) and retrieval-augmented generation (RAG) workflows—and demonstrates how sensitive data, such as personally identifiable information (PII), can be extracted through real-world attacks. We’ll dive into techniques like model inversion attacks targeting fine-tuned models, and embedding inversion attacks on vector databases—key components in RAG architectures that supply private data to LLMs for answering specific queries.

 


The Rapid Evolution of Bank-Grade SaaS Security

Why Financial Institutions Are Demanding More Than SOC2 From Their SaaS Vendors

 

Financial institutions are demanding that SaaS vendors adopt application-layer encryption and hold-your-own-key solutions. Here's what's driving the increased pressure.

 

> Read the full blog

 


When Randomness Backfires:
Security Risks in AI

The Most Important Tool When Hacking AI Is Persistence

 

LLMs produce different results every time, and sometimes those results are outliers that hackers can use to exploit systems. Most unsafe outputs, data leaks, and successful jailbreaks or prompt injections are due to the random component in an LLM. In this blog, we explain how that randomness works and why it's so dangerous for security.

 

> Read the full blog

 

LinkedIn
X
GitHub
Mastodon
YouTube

IronCore Labs, 1750 30th Street #500, Boulder, CO 80301, United States, (303) 261-5067

Unsubscribe | Manage preferences