LLM chatbots are taking over the world, or so it seems if you're watching the tech industry news. Companies are certainly falling over themselves to use LLMs to better process and distill data. And for developers of these solutions, the leading architecture in use is called Retrieval-Augmented Generation, or RAG.
We've been talking a lot about RAG lately, explaining how it's used and how it can be greatly abused. Many of the risks with RAG architectures are new, and few people understand what they're getting into as they spin up these chatbots.
We have a series of new video discussions and an explainer web page. Check out the links below, and let me know what you think!
Patrick Walsh
CEO, IronCore
Security Risks with RAG Architectures
How they work and how they open you up to attacks.
This page details the five main security risks with RAG and the six mitigation steps that all companies should take. If you're unfamiliar with RAG, it also covers what it is, what problems it solves, and what it looks like in practice.
A Conversation About RAG with IronCore Labs cofounders Patrick and Bob
In this discussion, Patrick and Bob cover everything from what RAG is, to how it got its awful name, to why people should pay more attention to the new architectures and infrastructure permeating AI projects.
ISSA Webinar Replay: Cybersecurity Considerations for AI: Comparing U.S. and EU Approaches
By Patrick Walsh and Scott M. Giordano
If you haven't heard, the EU just passed its AI Act, which Patrick and Scott (a privacy lawyer) discuss in depth, along with a broader view of cybersecurity risks to AI systems, in this fast-paced, hour-long, ISSA-hosted educational talk. It's packed with information on both policy and technical considerations, and it's worth your time if you sit in a privacy or security role or are working with AI systems.