About LLM Secrets
Protecting developer secrets in the age of AI coding assistants.
Our Mission
Make secret management secure, simple, and exfiltration-proof for developers working with AI coding assistants like Claude Code.
The Problem
AI coding assistants are transforming how developers work. But they come with a hidden risk: they can read your .env files. Your API keys, database passwords, and private keys are silently loaded into memory and potentially exposed in conversation logs.
We built LLM Secrets to solve this problem. Your AI assistant can use your secrets to deploy code, run tests, and execute commands - without ever seeing the actual values.
How It Works
LLM Secrets encrypts your .env files using Windows Hello and AES-256 encryption. When your AI assistant needs a secret, LLM Secrets decrypts it and injects it into an isolated subprocess at runtime. The decrypted value exists only in memory, never in files, and is cleared immediately after use.
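The injection pattern described above can be sketched roughly as follows. This is a minimal illustration, not the actual LLM Secrets implementation: `run_with_secret` and the `get_plaintext` decryption callback are hypothetical names, and the real tool handles decryption via Windows Hello rather than the stand-in lambda shown here.

```python
import os
import subprocess

def run_with_secret(command, name, get_plaintext):
    """Decrypt a secret, inject it into a child process's environment,
    and drop the plaintext reference as soon as the child exits."""
    env = os.environ.copy()
    env[name] = get_plaintext()   # plaintext exists only in this dict, in memory
    try:
        return subprocess.run(command, env=env, capture_output=True, text=True)
    finally:
        env[name] = ""            # best-effort clear after use

# The child process can read API_KEY; the parent's own environment never holds it.
result = run_with_secret(
    ["python", "-c", "import os; print(os.environ['API_KEY'])"],
    "API_KEY",
    lambda: "decrypted-value",    # stand-in for the real Windows Hello decryption step
)
print(result.stdout.strip())
```

The key property is that the plaintext is passed only through the subprocess's environment, so it never touches disk and the parent shell's environment stays clean.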
Our Values
🔒 Security First
Every design decision prioritizes security. We use proven cryptographic standards and minimize attack surface.
🌐 Open Source
Our code is open source (Apache 2.0 for CLI). Transparency builds trust, and you can audit every line.
🎯 Developer Experience
Security shouldn't be painful. We make encryption invisible so you can focus on building.
🔐 Privacy by Design
We collect no data. Your secrets never leave your device. We can't see them, and we don't want to.
The Team
LLM Secrets is built by an independent developer passionate about security and developer tools. The project started as a personal tool to protect blockchain private keys while using Claude Code for smart contract development.
Open Source
LLM Secrets uses a dual-license model. The CLI and encryption core are fully open source under Apache 2.0. The desktop app is source available for auditability, with a paid license for commercial use.
Want to contribute? Check out our GitHub.