Problem Statement
From home AI assistants like Alexa and Google Assistant to industrial and financial AI systems, Artificial Intelligence (AI) has seamlessly integrated into everyday life. As we grow comfortable with AI-based systems and our reliance on them deepens, concerns about data privacy and security have surged. High-profile data breaches reveal how AI systems can expose sensitive information, raising alarms about user safety.
Traditional AI models often lack the necessary safeguards, making them vulnerable to exploitation and misuse. AI models, especially cloud-based ones such as ChatGPT, and the AI agents built on top of them collect and store sensitive user data for training, which poses a significant risk to personal and organizational privacy.
Without proper protection, users unknowingly expose their personal information to AI models, where it may be stored, shared, or even misused by third-party services. This challenge calls for a solution that not only protects user data but also allows users to retain full control over how and when their information is accessed.
Our Project
The LLM Firewall gives users control over their data while they interact with AI. It works with both cloud-based and local agents, ensuring that any data shared with these agents passes through a secure filtering mechanism that removes or anonymizes sensitive information before it reaches an LLM or a third-party cloud server.
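To make the filtering step concrete, here is a minimal sketch of one way such redaction can work, assuming a simple regex-based approach; the pattern set and the redact_prompt() helper are illustrative placeholders, not the firewall's actual implementation.

```python
import re

# Hypothetical patterns for a few common categories of sensitive data.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace detected sensitive values with anonymized placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}_REDACTED]", text)
    return text

# Only the redacted prompt would ever be sent to the LLM or cloud service.
prompt = "Email the report to jane.doe@example.com and call 555-123-4567."
print(redact_prompt(prompt))
# -> Email the report to [EMAIL_REDACTED] and call [PHONE_REDACTED].
```

A production filter would likely go beyond regular expressions (for example, named-entity recognition for names and addresses), but the principle is the same: only the redacted prompt ever leaves the user's environment.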
Key Features of Our LLM Firewall:
- Using LLMs: Users can safely communicate with LLMs such as ChatGPT and Copilot while protecting critical data.
- Train LLMs Securely: Developers can redact confidential information from their datasets before uploading them for model training, so only non-sensitive, redacted data is used to train the model (see the first sketch after this list).
- Test LLMs for Fairness, Bias, and Privacy: After a model is trained, the firewall provides robust tools to test and evaluate it for fairness and privacy compliance, including checks for unintended biases, discriminatory outputs, and potential privacy leaks (see the second sketch after this list).
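As a rough illustration of the Train LLMs Securely workflow, the sketch below applies the same kind of redaction to a dataset before it is uploaded. It reuses the hypothetical redact_prompt() helper from the sketch above, and the file names are placeholders.

```python
def redact_dataset(in_path: str, out_path: str) -> None:
    """Write a copy of the dataset with sensitive values anonymized."""
    with open(in_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for record in src:
            # redact_prompt() is the illustrative helper sketched earlier.
            dst.write(redact_prompt(record))

# Only the redacted copy is uploaded for fine-tuning or training, e.g.:
# redact_dataset("support_tickets.txt", "support_tickets_redacted.txt")
```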
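The testing feature can be pictured along similar lines. The sketch below shows two simple checks of the kind such a harness might run, assuming a query_model callable that stands in for whichever model client the firewall wraps: a paired-prompt probe that collects answers across groups for comparison, and a basic leak check on responses. It is illustrative only, not the firewall's actual test suite.

```python
import re
from typing import Callable

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def bias_probe(query_model: Callable[[str], str],
               template: str, groups: list[str]) -> dict[str, str]:
    """Ask the same question about different groups so the answers can be
    compared for disparities or discriminatory wording."""
    return {group: query_model(template.format(group=group)) for group in groups}

def leaks_email(response: str) -> bool:
    """Flag responses that echo back an email address (a simple leak check)."""
    return EMAIL_RE.search(response) is not None

# Example usage with a hypothetical client `my_client(prompt) -> str`:
# answers = bias_probe(my_client, "Describe a typical {group} engineer.", ["female", "male"])
# flagged = {g: a for g, a in answers.items() if leaks_email(a)}
```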
The LLM Firewall not only helps to protect your data but also helps you develop, test, and use AI systems in a secure, controlled environment.