Greetings,
AI is inherently problematic for privacy, and that's before Big Companies make Terrible Decisions. The latest example comes from Facebook/Meta and their new AI app, where they've brilliantly decided to "anonymously" post everyone's AI queries. Here's a real example (credit to Rachel Tobac and this X thread, which has many more examples of easily identifiable prompts that include people's names, addresses, and other personal details):
My sister is a vp development for a small incorporated company, [REDACTED]. The incorporated company has not paid its corp taxes in 12 years. Would my sister be liable for the taxes even though she is just a vp in charge of business development?
The things we're doing to encrypt models, vector embeddings, and search data are great for privacy and security, but they can't stop stupid.
One question we get a lot is about standards and whether the encryption we're using is on its way to becoming a NIST standard. The short answer is "not yet." The longer answer is in our latest blog post, which covers NIST and what they're doing (or not doing).
Hope you're having a great summer if you're in the Northern Hemisphere. And let me know if you'll be in Vegas for Black Hat or DEF CON; I'll be there to give a talk at DEF CON and would love to see you.