We've been talking about the importance of application-layer encryption (ALE) for years now. Finally, we're starting to see it reflected back at us: prospective customers are telling us that they either have internal requirements to add ALE or their customers are pushing those requirements onto them. This is huge news for the security of data, and we're glad to see the movement growing. It's the right way to build software: with data protection by design and by default.
And if ALE is becoming a priority for your organization, we have you covered. We support all of the major patterns and handle the hard parts around key lifecycles, workflows, key orchestration, and audit trails, all in a zero-trust service, so IronCore never sees sensitive data. And we help you keep that data usable, useful, and findable with encrypted keyword and vector search options.
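If you're new to the pattern, here's a minimal sketch of what application-layer encryption looks like in practice: the application encrypts a sensitive field before it ever reaches the datastore, so the database (and anyone who compromises it) only holds ciphertext. This uses the Python cryptography library's Fernet as a stand-in; it's an illustration of the concept, not how IronCore's SDKs or key management actually work.

```python
# Minimal application-layer encryption sketch (illustrative only).
# A real deployment would pull per-tenant keys from a key service
# rather than generating one inline.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # stand-in for a managed, per-tenant key
cipher = Fernet(key)

def save_user(db: dict, user_id: str, ssn: str) -> None:
    # Encrypt in the application before the value touches the datastore,
    # so the database only ever sees ciphertext.
    db[user_id] = cipher.encrypt(ssn.encode("utf-8"))

def load_user(db: dict, user_id: str) -> str:
    # Decrypt only in the application, where the key is available.
    return cipher.decrypt(db[user_id]).decode("utf-8")

db = {}                          # plays the role of any datastore
save_user(db, "user-42", "123-45-6789")
print(db["user-42"])             # ciphertext at rest
print(load_user(db, "user-42"))  # plaintext only in the app
```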
Take a look at the bottom of the email to see some upcoming events where I'll be speaking (one in Denver and one in the Bay Area). If you'll be there, shoot me a note and let's say hello.
Patrick Walsh
CEO, IronCore
Securing AI: The Stack You Need to Protect Your GenAI Systems
Enterprise-grade GenAI systems need these 7 things
We explore the different vendors producing solutions for GenAI security, group them into categories, look at which categories are necessary and when, and make recommendations for a professional, enterprise-grade secure AI stack.
If you haven't been tracking it, the CEO of the messaging app company Telegram wrote a post telling people that they shouldn't trust Signal, a competing app run by a non-profit dedicated to privacy, and implying that messages on Signal can be read by the U.S. government. Elon Musk joined the fray on X, saying that Signal had refused to fix known vulnerabilities for years and that he found them suspicious, too. I'm not going to rehash the whole thing (or debunk the bogus claims), but this is a good time to look at the use of encryption in software.
Title: Exploitable Weaknesses in GenAI Workflows: From RAG to Riches
Abstract: Everyone’s building AI chatbots using Retrieval Augmented Generation (RAG) with Large Language Models (LLMs), but how many of these teams understand the risks they’re opening themselves up to, especially as they mix confidential data with new types of databases and other infrastructure? This session will demonstrate attacks on the “memory of AI”: vector databases, which are used in countless ways, from RAG to facial recognition to medical diagnoses. This data is a treasure trove for attackers. We’ll end by showing how to defend against these completely new attacks.
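To make that attack surface concrete, here's a stripped-down sketch of the retrieval half of a RAG pipeline. A vector store typically keeps the original chunk text (and metadata) right alongside the embeddings, so anyone who can read the store can read the documents, and the vectors themselves can leak meaning. The embedding function below is a toy stand-in for a real model, just to keep the example self-contained; none of this reflects a specific product.

```python
# Toy RAG retrieval sketch: shows why a vector store holds sensitive data.
# The "embedding" here is a hashed bag-of-words stand-in for a real model.
import hashlib
import numpy as np

DIM = 64

def embed(text: str) -> np.ndarray:
    vec = np.zeros(DIM)
    for word in text.lower().split():
        idx = int(hashlib.sha256(word.encode()).hexdigest(), 16) % DIM
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# A typical vector store row: the embedding AND the plaintext chunk.
store = []
for chunk in ["Acme Corp Q3 revenue fell 12% before the layoffs.",
              "Patient 4821 was diagnosed with stage II melanoma."]:
    store.append({"embedding": embed(chunk), "text": chunk})

def retrieve(query: str, k: int = 1):
    q = embed(query)
    ranked = sorted(store, key=lambda row: -float(np.dot(q, row["embedding"])))
    return [row["text"] for row in ranked[:k]]

# Anyone with read access to `store` gets the confidential chunks verbatim;
# retrieval just makes that exposure part of the normal workflow.
print(retrieve("what was the melanoma diagnosis"))
```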
Title: Data and GenAI Workflows: How RAG Risks Private Information
Abstract: This talk will discuss how RAG systems work, where the risks lie, how security and privacy teams should be approaching these systems, the categories of emerging solutions, and what comes next.