Stop your AI coding assistant from exposing your secrets. Your phone, laptop, or a passkey unlocks an encrypted vault — Claude Code can deploy, test, and run commands without ever seeing your API keys.
LLM Secrets is fully open source under AGPL-3.0. Every line of code is available for security auditing.
```typescript
import { createCipheriv, randomBytes } from 'crypto';

// Provided elsewhere: derives the master key from your passkey
declare function deriveFromPasskey(): Promise<Buffer>;

// AES-256-GCM, master key derived from your passkey
export class CryptoService {
  async encrypt(content: string): Promise<string> {
    // Tap your phone / laptop / security key
    const masterKey = await deriveFromPasskey();
    // Fresh random nonce per encryption
    const nonce = randomBytes(12);
    // AES-256-GCM: encrypt + authenticate
    const cipher = createCipheriv('aes-256-gcm', masterKey, nonce);
    const ciphertext = Buffer.concat([cipher.update(content, 'utf8'), cipher.final()]);
    // Ciphertext + auth tag + nonce
    return Buffer.concat([nonce, ciphertext, cipher.getAuthTag()]).toString('base64');
  }
}
```
Every line of encryption code is visible. Security researchers can verify there are no backdoors. Ask DeepWiki for an independent analysis.
AES-256-GCM is the same authenticated encryption used by governments, banks, and TLS 1.3. Battle-tested and tamper-evident.
Found a vulnerability? Submit a PR. Improvements benefit everyone using LLM Secrets.
Your encrypted files use standard formats. You own your data and can decrypt without us.
No password to remember, no seed phrase to lose. Unlock with the same tap you use to sign into your bank — Face ID on iPhone, Google passkey on Android, Touch ID or Windows Hello on your laptop, or a hardware security key.
iPhone Face ID, Android Google passkey, Touch ID, Windows Hello — whichever you already use to sign in.
Add, edit, and organize secrets with a clean interface. Or drive everything from the command line — your choice.
Your vault is encrypted on your laptop before it ever leaves. Google stores opaque ciphertext — they can't read it.
Configurable idle timeout. Your session locks, and the next use asks you to tap to unlock again.
Free for macOS, Linux, and Windows (via WSL). Everything you need to work securely with Claude Code. Start encrypting your secrets today.
Your secrets never leave your machine unencrypted. Claude Code gets access without visibility.
API keys, database URLs, tokens — anything sensitive goes here.
AES-256-GCM, authenticated encryption. The master key is derived from your phone, laptop, or security key — and never leaves it.
CLAUDE.md tells Claude what secrets exist, never the values.
Values exist in subprocess memory, never logged or returned.
Secrets are decrypted in memory, used once, then discarded. Never written to disk or logs.
The master key is derived from your phone, laptop, or security key. The encrypted vault is useless to anyone else — even if it's copied.
Secret values flow one direction. Claude Code output is automatically sanitized.
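One way to picture the sanitization step — a minimal sketch, not the actual implementation — is a pass over subprocess output that replaces any literal secret value with its placeholder name before anything is shown to the model:

```typescript
// Illustrative sketch — not the actual LLM Secrets implementation.
// Any secret value that leaks into output is replaced by $env[NAME].
function sanitize(output: string, secrets: Record<string, string>): string {
  let clean = output;
  for (const [name, value] of Object.entries(secrets)) {
    clean = clean.split(value).join(`$env[${name}]`);
  }
  return clean;
}

const secrets = { STRIPE_KEY: 'sk_live_abc123' }; // hypothetical value
console.log(sanitize('auth failed for key sk_live_abc123', secrets));
// → auth failed for key $env[STRIPE_KEY]
```

Even if a tool echoes a secret into an error message, what flows back to the assistant is the variable name, not the value.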
Same passkey, same AES-256-GCM — but now for tax documents, signing keys, crypto seed phrases, medical records, or anything else you'd rather Google couldn't read. Encryption happens on your laptop before the file ever leaves. Google stores opaque ciphertext. Only your phone or security key can open it.
drive.file access, nothing else
See how developers use LLM Secrets with Claude Code for common workflows.
Call external APIs with secure authentication
Run migrations and queries securely
Publish to npm with secure tokens
Deploy to AWS, GCP, or Azure securely
Deploy to Vercel and configure DNS with GoDaddy API
Deploy smart contracts without exposing private keys
LLM Secrets generates a CLAUDE.md reference file that tells Claude Code exactly which secrets exist and how to use them—without revealing values.
Claude knows the exact variable names. No guessing, no hallucinated API keys.
Descriptions tell Claude when to use each secret. Database URL for migrations, API key for external calls.
No more "secret not found" errors. Claude writes correct commands the first time.
Add a secret, regenerate CLAUDE.md. Your AI always has the latest reference.
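A generator for such a reference file can be sketched as follows. This is an assumption about the shape of the output — the real CLAUDE.md format may differ — but it shows the key property: names and descriptions go in, values never do.

```typescript
// Illustrative sketch — the actual CLAUDE.md format may differ.
interface SecretMeta {
  name: string;        // the environment variable name Claude will reference
  description: string; // tells Claude when to use this secret
}

function renderClaudeMd(secrets: SecretMeta[]): string {
  const lines = [
    '# Available secrets',
    '',
    'Reference these as $env[NAME]; plaintext values are never shown.',
    '',
  ];
  for (const s of secrets) {
    lines.push(`- ${s.name}: ${s.description}`);
  }
  return lines.join('\n');
}

console.log(renderClaudeMd([
  { name: 'DATABASE_URL', description: 'Postgres connection string for migrations' },
  { name: 'STRIPE_KEY', description: 'API key for external payment calls' },
]));
```

Because the generator only ever receives metadata, regenerating the file after adding a secret is safe by construction.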
Free and open source. Works on macOS, Linux, and Windows (via WSL). Paste one line into your terminal — the rest is a tap.
One install line. Then tap your phone, laptop, or security key to create the vault — no passwords, no seed phrases.
Not just .env files. Any document, any folder — encrypt it with the same tap, keep a single encrypted archive.
See the guide: scrt4 encrypt-folder ~/sensitive
Your vault is encrypted on your machine before upload. Google stores opaque ciphertext — only your passkey can open it.
How backup works — drive.file access only
LLM Secrets uses a zero-knowledge architecture. Secrets are decrypted in memory only when needed, injected directly into subprocess environments, and automatically redacted from output.
Even if someone copies your encrypted vault, it's useless without your phone, laptop, or security key. The master key never leaves the device you unlock with — it can't be stolen by malware, phished, or read off your disk.
AES-256-GCM with a fresh nonce per write. Tamper-evident, no plaintext on disk.
Master key is derived from your phone, laptop, or security key. It never leaves the device — so it can't be copied or stolen.
Secrets are injected into subprocesses at runtime. Claude sees $env[NAME] — never the value.
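Runtime injection can be sketched like this — a simplified illustration using Node's child_process, where `runWithSecrets` is a hypothetical helper, not the actual LLM Secrets code. The decrypted values exist only in the child process environment; the command itself never contains them.

```typescript
import { execSync } from 'child_process';

// Illustrative sketch — not the actual LLM Secrets implementation.
// The command references a variable by name; the decrypted value is
// injected only into the child's environment at spawn time.
function runWithSecrets(command: string, secrets: Record<string, string>): string {
  return execSync(command, {
    env: { ...process.env, ...secrets }, // values live only in the child
    encoding: 'utf8',
  });
}

// Hypothetical value; in LLM Secrets it would be decrypted from the vault.
const out = runWithSecrets(
  `node -e "console.log(process.env.API_KEY ? 'injected' : 'missing')"`,
  { API_KEY: 'sk_demo_123' },
);
console.log(out.trim()); // → injected
```

The command string the assistant writes and the output it reads both stay value-free; only the kernel-level environment of the child process ever holds the plaintext.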
Configurable inactivity timeout. When your session expires, the next use asks you to tap again.
Everything you need to know about protecting your secrets from AI coding assistants.
Yes. Research shows Claude Code automatically loads .env files without asking permission. Your API keys, database passwords, and tokens are silently loaded into memory. LLM Secrets encrypts these files so Claude can use secrets without ever seeing the actual values.
Traditional approaches like separate user accounts or deny rules are complex and error-prone. LLM Secrets encrypts your .env file with a key derived from your phone or laptop — secrets live encrypted at rest and are only decrypted inside isolated subprocesses at runtime.
Yes. The master key is derived from the same hardware-backed passkey you already use to sign into your bank or email — iPhone Face ID, Android Google passkey, Touch ID, Windows Hello, or a YubiKey / security key. The vault uses AES-256-GCM authenticated encryption. Even if someone copies your encrypted vault file, it's useless on another device — only your passkey can unlock it.
Absolutely. While LLM Secrets is optimized for Claude Code with automatic CLAUDE.md generation, the encryption works with any AI coding assistant. Your .env file stays encrypted—no AI tool can read the plaintext values. Secrets are injected at runtime for any command.
macOS, Linux, and Windows via WSL. You unlock with whatever you already use: iPhone Face ID, Android Google passkey, Touch ID, Windows Hello, or a hardware security key like a YubiKey. All platforms are free with full-featured encryption, automatic CLAUDE.md generation, and encrypted Google Drive backup.
AI assistants read files in your project directory, including .env files. These values can appear in prompts, error messages, logs, and even be transmitted to cloud servers. LLM Secrets prevents this exposure by ensuring the AI only sees encrypted content or variable names—never actual secret values.
Plain text .env files are risky—43.8% of crypto theft in 2024 came from private key compromise. LLM Secrets encrypts your .env with AES-256-GCM, keyed to your phone or laptop. Keys are decrypted only at runtime inside isolated subprocesses — safer than Foundry keystores or Hardhat keystore plugins, and the AI never sees the value.
Yes. Claude can run forge script or hardhat deploy commands using your encrypted private key via $env:PRIVATE_KEY. Your key is injected at runtime but never visible to the AI. Deploy to mainnet, testnets, or L2s—your wallet stays secure while Claude handles the deployment workflow.
Answers from DeepWiki, an independent AI analysis of this codebase.
No. Here's why:
Claude only ever sees $env[NAME], never the value. Because every step happens locally on hardware you control, no one — not Anthropic, not us, not a cloud provider — is in a position to see your secrets.
You can recover IF you set up backups beforehand.
Recovery options:
- scrt4 backup-key --save ~/usb writes a password-protected file. Keep it in a password manager or on a USB stick.
- scrt4 cloud-crypt encrypt-and-push stores an encrypted copy of your vault in your own Drive. Only your master key can open it.

Without backups: secrets are irrecoverable by design. No backdoor exists — not for us, not for Google, not for anyone.
Get answers from an independent third-party AI analysis of our codebase.
Ask DeepWiki
Join developers who trust LLM Secrets to keep their API keys and credentials safe while working with AI coding assistants.