How to Keep Your API Keys Secret: A Complete Guide to Secure Node.js and Express Apps
October 31, 2025 - 13 min read - Raymond

If you're building a web app that connects to an AI API, payment processor, or any other external service, you've probably asked yourself: "How do I keep my secret API key from being exposed?"
You're not alone. This is one of the biggest security questions beginners face, and the answer is simpler than you might think.
In this guide, I'll walk you through exactly how to structure your project, protect your secrets, and deploy it to a live server. By the end, you'll understand why frontend API keys are dangerous and how to build an app that keeps them locked down.
Why API Keys in Your Frontend Code Are a Problem
Imagine you paste your API key directly into your JavaScript file:
```javascript
const API_KEY = 'sk-1234567890abcdef';
```
Someone visits your site. They open DevTools (F12), read your JavaScript source, and boom: they have your key. Now they can make unlimited requests to your LLM provider, rack up charges on your account, and abuse your service.
The browser is not a safe place for secrets. Anything you put in HTML, CSS, or JavaScript can be viewed by users who know how to look.
The solution is simple: put your secrets on a server, never send them to the browser, and let the server handle all sensitive requests.
Think of it like a restaurant. You don't hand customers the keys to the kitchen. You tell the waiter what you want, the waiter takes your order to the kitchen using their own access, and brings back only your finished meal. The keys stay in the kitchen.
The Three-Layer Architecture You Need
Before we code anything, let's map out where everything lives:
Layer 1: The Frontend (What the Browser Sees)
This is your public folder. It contains:
index.html
styles.css
script.js
assets (images, icons, fonts)
These files are downloaded by every visitor. They should never contain an API key. They only talk to your own server, never to third-party APIs.
Layer 2: The Backend (Your Private Server)
This is where secrets live:
server.js (your Node.js app)
.env (your real API keys)
package.json (your dependencies)
Only your server can read .env. It reads your secrets, talks to external APIs using those secrets, and returns only the response to the frontend. The key never gets shared.
Layer 3: The Reverse Proxy (For Production)
When you go live, Nginx or a similar tool sits between the internet and your server. It handles SSL certificates, security headers, and forwards traffic to your Node app. This adds another layer of protection.
Here's how a request flows:
```text
Browser: "I need an answer to this question"
        ↓
Your Server: "OK, let me read my secret key from .env and call the LLM"
        ↓
LLM Provider: "Here's your answer"
        ↓
Your Server: "Got it. Browser, here's the answer (no key attached)"
        ↓
Browser: "Thanks! I got the answer and never saw your key"
```
Restructuring Your Project
Let's organize your files properly. Here's the target structure:
```text
AIChatbot/
├── server.js
├── package.json
├── .env           (your secrets, never committed)
├── .env.example   (template, safe to commit)
├── .gitignore
├── README.md
├── public/        (everything the browser can see)
│   ├── index.html
│   ├── styles.css
│   ├── script.js
│   └── assets/
│       └── favicon/
└── .git/
```
The golden rule: anything in public/ is visible to the browser. Everything else is private.
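Before deploying, it can be worth scanning everything in public/ for strings that look like keys. Here's a minimal sketch; the patterns below are illustrative examples, not an exhaustive check:

```javascript
// Sketch: flag lines of frontend code that look like they contain a secret.
// The patterns are examples; adapt them to your provider's actual key format.
function looksLikeSecret(line) {
  return /sk-[A-Za-z0-9]{16,}/.test(line) ||        // OpenAI-style key shapes
         /API_KEY\s*=\s*['"][^'"]+['"]/.test(line); // hardcoded key assignments
}

// Return every suspicious line in a source file, with its line number
function scanSource(source) {
  return source
    .split('\n')
    .map((text, i) => ({ line: i + 1, text }))
    .filter(({ text }) => looksLikeSecret(text));
}
```

Run it over the files in public/ before every deploy; if it flags anything, move that value to the server side.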
Step 1: Create the Public Folder
On your local machine, create a folder called public inside your project. Move these files into it:
index.html
styles.css
script.js
assets/ (the entire folder)
Keep server.js, package.json, and documentation files at the project root.
If you're using an AI coder: Try this prompt:
"Create a new folder in my project called 'public'. Move index.html, styles.css, script.js, and the assets folder into it. Make sure all internal paths still work after moving. Keep server.js, package.json, .gitignore, and documentation files at the project root."
Step 2: Create Your .env File
At the project root (same level as server.js), create a file called .env:
```text
LLM_API_KEY=your-actual-api-key-here
PORT=3000
```
This file holds your real secrets. Never commit it to Git.
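Under the hood, dotenv essentially parses this file into key/value pairs and copies them onto process.env. A simplified sketch of that parsing (the real library also handles quoting, `export` prefixes, and multiline values):

```javascript
// Sketch: a simplified version of what dotenv does with your .env file.
// Real dotenv also handles quoting, export prefixes, and multiline values.
function parseEnv(text) {
  const vars = {};
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks and comments
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue;                           // skip malformed lines
    vars[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return vars;
}
```

For example, `parseEnv('LLM_API_KEY=abc123\nPORT=3000')` gives you `{ LLM_API_KEY: 'abc123', PORT: '3000' }`. Note that every value comes back as a string, which is why code like `process.env.PORT || 3000` works the way it does.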
Step 3: Create .env.example
Create another file at the project root called .env.example:
```text
LLM_API_KEY=your_api_key_here
PORT=3000
```
This shows your team what variables they need to set. Commit this to Git.
Step 4: Update .gitignore
Make sure your .gitignore file includes:
```text
.env
node_modules/
```
This prevents .env from ever being committed to Git.
If you're using an AI coder: Try this prompt:
"Update my .gitignore file to include .env and node_modules/. Make sure these are on separate lines. If .gitignore doesn't exist, create one with just these two entries."
Writing Your Server (server.js)
Now let's build the server that keeps your secrets safe:
```javascript
// Requires Node.js 18+ for the built-in fetch API
require('dotenv').config();
const express = require('express');
const path = require('path');

const app = express();

// Serve static files from the public folder
app.use(express.static(path.join(__dirname, 'public')));

// Parse JSON requests
app.use(express.json());

// Your backend API route
// The browser calls this, never the LLM directly
app.post('/api/chat', async (req, res) => {
  const userMessage = req.body.message;

  // Read the secret key from .env
  const apiKey = process.env.LLM_API_KEY;

  try {
    // Call the LLM API with the secret
    // This key stays on your server, never sent to the browser
    const llmResponse = await fetch('https://api.example.com/chat', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ message: userMessage })
    });
    const data = await llmResponse.json();

    // Return only the response, not the key
    res.json({ answer: data.result });
  } catch (err) {
    // Never leak internal details (or the key) in error responses
    console.error(err);
    res.status(502).json({ error: 'Upstream request failed' });
  }
});

// Listen on localhost only; the reverse proxy will handle public traffic
const PORT = process.env.PORT || 3000;
app.listen(PORT, '127.0.0.1', () => {
  console.log(`Server running on http://127.0.0.1:${PORT}`);
});
```
Here's what's happening:
require('dotenv').config() loads your .env file so process.env.LLM_API_KEY works
express.static(path.join(__dirname, 'public')) tells Express to serve files from the public folder
The /api/chat route is your private endpoint. The browser calls this, and the server attaches the secret when talking to the LLM
process.env.LLM_API_KEY is read only by the server. The browser never sees it
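One thing the route doesn't do yet is validate req.body.message before spending API credits on it. Here's a minimal validation sketch you could call at the top of the handler; the 2000-character cap is an arbitrary example, not a provider requirement:

```javascript
// Sketch: validate the chat message before forwarding it to the LLM.
// The length limit is an arbitrary example, not a provider requirement.
function validateMessage(body) {
  if (!body || typeof body.message !== 'string') {
    return { ok: false, error: 'message must be a string' };
  }
  const message = body.message.trim();
  if (message.length === 0 || message.length > 2000) {
    return { ok: false, error: 'message must be 1-2000 characters' };
  }
  return { ok: true, message };
}
```

In the route, that looks like: `const check = validateMessage(req.body); if (!check.ok) return res.status(400).json({ error: check.error });` — a bad request gets rejected before your key (or your money) is ever involved.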
If you're using an AI coder: Try this prompt:
"I need you to create or update my server.js file with the following requirements:
Load environment variables using dotenv
Use Express to serve static files from a 'public' folder using an absolute path
Create a POST endpoint at /api/chat that receives a message in the request body
This endpoint should read the LLM_API_KEY from process.env and use it to make a request to an external LLM API
Never expose the API key to the browser; only send back the LLM's response
The server should listen on 127.0.0.1 port 3000
Install any required npm packages"
Updating Your Frontend
Your HTML, CSS, and JavaScript need small changes to work with this new structure.
Update HTML File Paths
Change references to be root-relative:
```html
<!-- Old -->
<link rel="stylesheet" href="styles.css">
<script src="script.js" defer></script>
<link rel="icon" href="assets/favicon/favicon.ico">

<!-- New -->
<link rel="stylesheet" href="/styles.css">
<script src="/script.js" defer></script>
<link rel="icon" href="/assets/favicon/favicon.ico">
```
If you're using an AI coder: Try this prompt:
"Update all file references in my index.html to be root-relative. This means:
Change any href or src that points to 'styles.css' to '/styles.css'
Change any src that points to 'script.js' to '/script.js'
Change any paths pointing to 'assets/' to '/assets/'
Do the same for favicon paths and any other local file references
Only change relative paths, not absolute URLs"
Update JavaScript
Remove any direct API calls and talk to your server instead:
```javascript
// Old way (insecure, never do this)
// const apiKey = 'sk-1234567890abcdef';
// const response = await fetch('https://api.example.com/chat', {
//   headers: { 'Authorization': `Bearer ${apiKey}` }
// });

// New way (secure)
async function askLLM(userMessage) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: userMessage })
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const data = await response.json();
  return data.answer; // Only the answer, no key
}
```
If you're using an AI coder: Try this prompt:
"Update my script.js file with this critical security change:
Find any code that uses an API key directly (look for 'const API_KEY' or similar)
Remove all direct calls to external LLM APIs
Replace them with calls to '/api/chat' endpoint on my own server
The frontend should only call my server at /api/chat with POST method and JSON body containing the user message
My server will handle calling the actual LLM and attaching the API key
The frontend should never see or use the actual API key
Show me the updated functions that make these API calls"
Testing Locally
Let's verify everything works on your machine:
```bash
# Install dependencies
npm install

# Start the server
node server.js
```
Open http://localhost:3000 in your browser. You should see your HTML page load with all CSS, JavaScript, and images working.
Open DevTools (F12) and go to the Network tab. Make a request through your form. You'll see a request to /api/chat, but nowhere will your API key appear. This is exactly what you want.
If you're using an AI coder: Try this prompt:
"Help me test my setup locally:
Install all dependencies from package.json
Start the server with 'node server.js'
Tell me what I should see when I visit http://localhost:3000
Explain how to check in the Network tab of DevTools that my API key is never exposed"
Deployment: Three Paths
Now for the real world. Here are your options depending on what you have access to:
Path 1: Shared Hosting (FileZilla and Hostinger)
Typical shared hosting plans don't run Node.js. You can only deploy the static frontend.
Upload only the contents of your public/ folder to public_html/ via FileZilla. Your site will work, but any features requiring your LLM won't function because Node.js isn't running.
To get the full stack working with server.js and .env, you need to upgrade to a VPS or use a platform like Vercel, Railway, or Heroku that supports Node.js.
If you're using an AI coder: Try this prompt:
"I'm deploying to shared hosting via FileZilla. I need instructions for:
What files/folders to upload from my local project
Where to upload them on the shared hosting server (typically public_html)
Whether I need to upload server.js or .env files
What I should expect to work and not work on shared hosting"
Path 2: Ubuntu VPS from Scratch
If you have a fresh Ubuntu server:
Install Node and PM2
```bash
# Install NVM
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
source ~/.bashrc

# Install Node.js
nvm install --lts

# Install PM2 (keeps your app running)
# No sudo needed: NVM installs global packages under your user
npm install -g pm2
```
Upload Your Project
Use FileZilla or this command:
```bash
scp -r AIChatbot/ user@your-server-ip:/var/www/
```
Set Up the App
```bash
# SSH into your server
ssh user@your-server-ip

# Go to your project
cd /var/www/AIChatbot

# Install Node dependencies
npm install

# Create .env with your real key
nano .env
# Paste: LLM_API_KEY=your-real-key
# Press Ctrl+X, then Y, then Enter

# Restrict permissions
chmod 600 .env

# Start with PM2
pm2 start server.js --name aichatbot

# Make it survive reboots
pm2 save
pm2 startup
```
Install and Configure Nginx
```bash
sudo apt install nginx -y
sudo nano /etc/nginx/sites-available/aichatbot
```
Paste this:
```nginx
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```
Enable it:
```bash
sudo ln -s /etc/nginx/sites-available/aichatbot /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
```
Add SSL
```bash
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d your-domain.com
```
Done. Visit your domain and your app is live.
If you're using an AI coder: Try this prompt:
"Walk me through deploying my Node.js app to an Ubuntu VPS step by step:
Install Node.js using NVM
Install PM2 globally to manage the app process
Upload my entire AIChatbot folder to /var/www/AIChatbot on the server
Install dependencies, set up the .env file with proper permissions (chmod 600), and start the app with PM2
Configure Nginx as a reverse proxy to forward traffic from port 80 to the Node app on port 3000
Set up SSL with Let's Encrypt
Make sure the app survives server reboots
Provide all commands I need to run"
Path 3: Ubuntu VPS with Nginx Proxy Manager
If you already use Nginx Proxy Manager:
Do steps 1-3 from Path 2 to get your app running on port 3000.
Then in Nginx Proxy Manager:
Go to Proxy Hosts and create a new entry
Set Domain to your domain
Set Scheme to http, Forward Hostname to 127.0.0.1, Forward Port to 3000
Enable Websockets Support and Block Common Exploits
Add SSL with Let's Encrypt
That's it. Nginx Proxy Manager handles the rest.
If you're using an AI coder: Try this prompt:
"I have an Ubuntu server with Nginx Proxy Manager already running. My Node.js app is running on 127.0.0.1:3000. Give me step-by-step instructions to:
Verify the Node app is running correctly
Create a new Proxy Host in Nginx Proxy Manager
Route my domain to the Node app
Set up SSL with Let's Encrypt
Test that everything is working"
Security Checklist
File Permissions
On your server, restrict access to .env:
```bash
chmod 600 .env
```
Only your user can read it.
Never Commit Secrets
Double-check your .gitignore has .env listed. This prevents accidental uploads to GitHub.
Rotate Keys Often
If a key is ever exposed, replace it immediately:
```bash
# Edit .env with a new key
nano .env

# Restart the app (dotenv only reads .env at startup)
pm2 restart aichatbot
```
Always Use HTTPS
Enable SSL in production. Let's Encrypt is free and automatic.
Monitor Your Logs
Check what's happening on your server:
```bash
pm2 logs aichatbot
pm2 status
```
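If you ever log request details while debugging, mask secrets before they reach the logs. A tiny illustrative helper:

```javascript
// Sketch: mask a secret so logs show just enough to identify it, nothing more.
function mask(secret) {
  if (typeof secret !== 'string' || secret.length < 8) return '****';
  return secret.slice(0, 4) + '...' + secret.slice(-2);
}

// mask('sk-1234567890abcdef') -> 'sk-1...ef'
```

Log statements then become safe by construction: `console.log('Using key', mask(apiKey))` instead of ever printing the raw value.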
If you're using an AI coder: Try this prompt:
"Create a security checklist for my Node.js app and verify:
The .env file has restrictive permissions (600)
.gitignore properly excludes .env and node_modules
HTTPS/SSL is enabled
The API key is never exposed in any logs or responses
Process management is set up so the app survives reboots
Rate limiting is in place on the /api/chat endpoint
Show me what to check and what commands to run"
Common Issues and Fixes
"Cannot GET /" Error
Your public folder isn't being served. Check that server.js has:
```javascript
app.use(express.static(path.join(__dirname, 'public')));
```
If you're using an AI coder: Try this prompt:
"I'm getting a 'Cannot GET /' error. Debug my server.js and fix:
Verify express.static is configured to serve the public folder correctly
Check that the public folder exists and contains index.html
Make sure the path is absolute using __dirname
Test the setup and confirm it works"
CSS and JavaScript Not Loading
Make sure all references in index.html start with a forward slash:
```html
<link rel="stylesheet" href="/styles.css">
<script src="/script.js"></script>
```
If you're using an AI coder: Try this prompt:
"My CSS and JavaScript files are not loading. Fix my index.html:
Make sure all href and src attributes for local files start with '/'
Update references like 'styles.css' to '/styles.css'
Update references like 'script.js' to '/script.js'
Check favicon and asset paths too
Keep external URLs unchanged"
API Key Not Working
Verify that process.env.LLM_API_KEY is being read. Add this debug line temporarily:
```javascript
console.log('API Key loaded:', process.env.LLM_API_KEY ? 'Yes' : 'No');
```
Restart and check pm2 logs aichatbot.
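A stricter option is to fail fast at startup instead of discovering the problem on the first request. A small sketch (it takes the env object as a parameter so it's easy to test):

```javascript
// Sketch: throw at startup if a required environment variable is missing,
// so a misconfigured server never starts serving requests.
function requireEnv(env, name) {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// In server.js, right after require('dotenv').config():
// const apiKey = requireEnv(process.env, 'LLM_API_KEY');
```

PM2 will show the crash immediately in `pm2 logs`, which is much easier to diagnose than a route that silently sends an invalid key upstream.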
If you're using an AI coder: Try this prompt:
"My API key seems to not be loading. Debug the issue:
Add console.log statements to verify process.env.LLM_API_KEY is being read
Check that .env file exists in the project root
Confirm the .env file has the correct format: LLM_API_KEY=your-key
Verify dotenv is installed and required at the top of server.js
Restart the app and show me the logs
Tell me exactly what error I should see and how to fix it"
App Crashes on Restart
Check the logs:
```bash
pm2 logs aichatbot
```
Usually it's a missing dependency. Run npm install again.
If you're using an AI coder: Try this prompt:
"My app keeps crashing when I restart it. Help me:
Show me how to check PM2 logs to see the error
Run npm install to reinstall any missing dependencies
Verify all required packages are in package.json
Restart the app and confirm it stays running
Check PM2 status to confirm the process is alive"
Final Thoughts
You've just learned the pattern used by thousands of production apps:
Separate public frontend code from private backend secrets
Use environment variables to load secrets
Use a reverse proxy for additional security
The structure is simple once you see it:
Frontend talks to your server
Your server talks to external APIs
Keys stay on the server, never in the browser
.env holds secrets, .gitignore prevents leaks
This architecture scales from a hobby project to an enterprise application. Build it right the first time and you'll never have to worry about exposing your API keys.
Now go build something great, and keep those secrets locked down.