Most of the security conversations happening right now concern the AI model. But there are actually three main areas of modern AI systems where you need to evaluate risk: the AI model, the AI memory, and the interactions.
The AI memory is a newer part of AI systems. It grew out of the need to reduce hallucinations and provide context, and it also makes it possible to build anomaly detection, recommendation, classification, and other systems on up-to-date information without training a custom model.
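To make that concrete, here's a minimal sketch of how an AI memory is typically used: embed each document, store the vectors, and rank them against an embedded query at question time. The `embed()` function below is a hypothetical stand-in with no real semantics (a production system would call an actual embedding model); the point is the data flow, not the ranking quality.

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for an embedding model: deterministic per input,
    but with no real semantics. A production system would call a model here."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).random(384)

# The "AI memory": documents plus their embedding vectors, kept up to date
# without retraining any model.
documents = [
    "Q3 revenue grew 12% year over year.",
    "The public API rate limit is 100 requests per second.",
]
memory = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank stored documents by cosine similarity to the query embedding."""
    q = embed(query)
    def cosine(v: np.ndarray) -> float:
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    ranked = sorted(memory, key=lambda item: cosine(item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved text is what gets injected into the model's prompt as context.
# (With a real embedding model, the most relevant document comes back first.)
print(retrieve("What is the API rate limit?"))
```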
Securing the AI memory is something I've written about a lot recently because it's one of the biggest security gaps. If you're not familiar with vector databases or vector embeddings, I recommend reading my blog post on embedding myths.
For a comprehensive look at how data flows within AI systems and where it is stored (and vulnerable), you can watch the full webinar on-demand on YouTube.
Protect the AI Memory
Data stored in the AI memory needs to be protected. It takes the form of vector embeddings and associated metadata stored in a vector database (there are many to choose from right now). None of these databases offers encryption in use. But you can still encrypt your data and keep using it with our newest product, Cloaked AI, which launches out of beta in two weeks. You can access the beta now.
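To see how encrypted vectors can stay useful, here's a toy sketch of the general principle behind property-preserving approaches: a secret orthogonal rotation leaves dot products (and therefore cosine similarity) intact, so a vector database can still rank transformed vectors. To be clear, this is not Cloaked AI's actual algorithm, and a bare rotation is not secure encryption by itself; real schemes add noise and key management. The key and function names here are hypothetical.

```python
import numpy as np

SECRET_KEY = 1234  # hypothetical; a real system derives this from a managed key
DIM = 4            # tiny dimension for illustration; real embeddings are 384+ dims

def rotation_from_key(key: int, dim: int) -> np.ndarray:
    """Derive a fixed secret rotation matrix from the key (QR of a Gaussian matrix)."""
    rng = np.random.default_rng(key)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

R = rotation_from_key(SECRET_KEY, DIM)

def transform(vec: np.ndarray) -> np.ndarray:
    """'Encrypt' an embedding with the secret rotation. Because R is orthogonal,
    dot products between transformed vectors equal those between the originals,
    so similarity search over the transformed vectors still ranks correctly."""
    return R @ vec

a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.9, 0.1, 0.0, 0.0])

# Both prints show (approximately) the same similarity score.
print(np.dot(a, b))
print(np.dot(transform(a), transform(b)))
```

Both printed similarities come out the same, which is exactly the property that approximate nearest-neighbor search depends on.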