You followed the API tutorials and built a fantastic AI wrapper on your local machine. Now you want to put it on the internet. You use Vite or Create React App, you paste your OpenAI API key into your index.js file, run the build command, and deploy to Netlify.
Four hours later, you get an email from OpenAI saying your API key was found exposed on GitHub and has been revoked. Had they not revoked it, you might have woken up to a $500 bill.
You cannot use OpenAI API keys directly in frontend JavaScript code. If it runs in the user's browser, anyone can press F12, open the Network tab, and steal your key. You need a backend. In this tutorial, we will use Netlify Serverless Functions to build a secure backend "proxy" in less than 5 minutes.
The Architecture
Instead of your frontend talking to OpenAI directly, your frontend will talk to your Netlify Function. Your Netlify Function (which runs server-side, on AWS Lambda under the hood) will inject your secret API key, call OpenAI, and send the response back to your frontend.
Step 1: The Project Structure
Inside your existing web project (even if it's just plain HTML/CSS/JS), create a new folder named `netlify` at the root level. Inside that, create a folder named `functions`. Inside that, create a file named `generate.js`.
```
my-project/
├── index.html
├── styles.css
├── app.js
└── netlify/
    └── functions/
        └── generate.js
```
Step 2: The Serverless Function (The Backend)
Netlify Functions run on Node.js. Inside `generate.js`, we will intercept the POST request from your frontend, grab the prompt, and securely call the OpenAI API. You will need to install the OpenAI npm package in your project root first: `npm install openai`.
```javascript
// netlify/functions/generate.js
const { OpenAI } = require("openai");

// The key is read from Netlify's secure environment — it never reaches the browser
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

exports.handler = async function (event, context) {
  // Only allow POST requests
  if (event.httpMethod !== "POST") {
    return { statusCode: 405, body: "Method Not Allowed" };
  }

  try {
    const body = JSON.parse(event.body);
    const userPrompt = body.prompt;

    // Call OpenAI from the secure server
    const response = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: userPrompt }],
    });

    const reply = response.choices[0].message.content;

    // Send the reply back to the frontend
    return {
      statusCode: 200,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ reply: reply }),
    };
  } catch (error) {
    return {
      statusCode: 500,
      body: JSON.stringify({ error: "Failed to generate AI response" }),
    };
  }
};
```
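One thing the function above does not do is sanity-check its input: it trusts whatever the browser sends. As a first hardening step, you might validate the prompt before spending tokens on it. Here is a minimal sketch — the `validatePrompt` helper and its 2,000-character limit are illustrative choices of mine, not part of the Netlify or OpenAI APIs:

```javascript
// Hypothetical helper: reject missing, non-string, or oversized prompts
// before the request ever reaches OpenAI.
function validatePrompt(prompt, maxLength = 2000) {
  if (typeof prompt !== "string" || prompt.trim().length === 0) {
    return { ok: false, error: "Prompt must be a non-empty string" };
  }
  if (prompt.length > maxLength) {
    return { ok: false, error: `Prompt exceeds ${maxLength} characters` };
  }
  return { ok: true };
}

// Inside the handler, before calling openai.chat.completions.create:
//   const check = validatePrompt(userPrompt);
//   if (!check.ok) {
//     return { statusCode: 400, body: JSON.stringify({ error: check.error }) };
//   }
```

Returning a 400 for bad input is cheap; a malformed request never costs you an OpenAI API call.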
Step 3: The Frontend Fetch Call
Now, let's look at your app.js file that runs in the browser. Notice that there is absolutely no mention of OpenAI or API keys. We simply make a request to our own relative Netlify URL.
```javascript
// app.js
async function callMyAI() {
  const input = document.getElementById("userInput").value;

  // We hit the Netlify function endpoint, not the OpenAI endpoint
  const response = await fetch("/.netlify/functions/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: input }),
  });

  const data = await response.json();
  if (data.reply) {
    document.getElementById("output").innerText = data.reply;
  }
}
```
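One thing the snippet above glosses over: if the function returns a 500, `data.reply` is undefined and the user sees nothing at all. A small helper makes the failure path explicit — `parseAIResponse` and its fallback message are hypothetical names of mine, not part of the tutorial's code:

```javascript
// Hypothetical helper: turn the function's status + JSON payload into
// either the AI reply or a human-readable error message.
function parseAIResponse(status, data) {
  if (status !== 200 || !data.reply) {
    return { ok: false, message: data.error || "Something went wrong. Try again." };
  }
  return { ok: true, message: data.reply };
}

// Usage inside callMyAI(), after the fetch:
//   const result = parseAIResponse(response.status, await response.json());
//   document.getElementById("output").innerText = result.message;
```

This way the `{ error: ... }` body your serverless function sends on failure actually reaches the user instead of being silently dropped.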
Step 4: Deploying to Netlify
You cannot test this by double-clicking index.html on your desktop anymore. Because we are using a serverless function, you must use the Netlify CLI.
- Install the CLI globally: `npm install netlify-cli -g`
- Run a local dev server: `netlify dev`
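One gotcha when running locally: the function reads `process.env.OPENAI_API_KEY`, so the key has to be available on your machine too, not just in the Netlify dashboard. Two common options (the `sk-...` placeholder stands in for your real key, and Option A assumes the folder is already linked to a Netlify site):

```shell
# Option A: store the key in Netlify itself via the CLI
netlify link
netlify env:set OPENAI_API_KEY "sk-..."

# Option B: a local .env file at the project root, which netlify dev picks up.
# Make sure .env is listed in .gitignore so the key never reaches GitHub.
echo "OPENAI_API_KEY=sk-..." >> .env
```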
When you are ready to put it on the public internet:
- Go to Netlify.com, create a new site, and hook it up to your GitHub repo.
- Go to Site Settings > Environment Variables in the Netlify Dashboard.
- Add a new variable named `OPENAI_API_KEY` and paste your key there. Never commit this key to GitHub.
- Trigger a deploy.
Conclusion
Deploying AI applications means crossing the gap between frontend design and backend security. By moving your API calls into serverless functions, you protect your key (and your bill) while maintaining zero server overhead. From here, you can add rate-limiting logic to your function to prevent users from spamming the "Generate" button and racking up costs.
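As a sketch of that rate-limiting idea, here is a minimal in-memory, per-client limiter. Everything here is illustrative — the function name, the limits, and the header used for the client IP are my own choices — and note the big caveat: serverless instances are ephemeral, so an in-memory map only throttles requests that land on the same warm instance. For real enforcement you would back this with a shared store such as Redis.

```javascript
// Illustrative sliding-window limiter: allow at most `limit` requests
// per `windowMs` milliseconds for each client key (e.g. an IP address).
const hits = new Map();

function isRateLimited(clientKey, limit = 5, windowMs = 60_000, now = Date.now()) {
  // Keep only the timestamps that still fall inside the window
  const recent = (hits.get(clientKey) || []).filter((t) => now - t < windowMs);
  if (recent.length >= limit) {
    hits.set(clientKey, recent);
    return true; // over the limit — the handler should return 429
  }
  recent.push(now);
  hits.set(clientKey, recent);
  return false;
}

// Inside the handler, before calling OpenAI:
//   const ip = event.headers["x-forwarded-for"] || "unknown";
//   if (isRateLimited(ip)) {
//     return { statusCode: 429, body: JSON.stringify({ error: "Too many requests" }) };
//   }
```

Even this leaky version is worth having: it turns "one user can spam Generate in a tight loop" into "one user gets a 429 after a handful of requests per minute" on any given warm instance.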