How to implement structured AI outputs on Vercel
Implement structured AI outputs on Vercel by creating API routes with schema validation, using Vercel's Edge Runtime for optimal performance, and configuring proper response formatting. Deploy using Vercel CLI or GitHub integration for seamless CI/CD.
Prerequisites
- Basic knowledge of Next.js or React
- Vercel account and CLI installed
- Understanding of JSON schemas
- Experience with API routes
Step-by-Step Instructions
Set up your Next.js project with Vercel configuration
Install the required dependencies:

```bash
npm install zod openai @vercel/kv
```

Next.js opts a route into Vercel's Edge Runtime from the route file rather than through vercel.json. For the Pages Router, export a config object from the API route:

```js
export const config = { runtime: 'edge' };
```

(In the App Router the equivalent is `export const runtime = 'edge';`.) The Edge Runtime offers lower cold-start latency, but Edge routes use the Web `Request`/`Response` APIs. The Node-style `(req, res)` handler shown in the steps below runs on the default Node.js serverless runtime, which is the safer choice when your dependencies are not Edge-compatible.

Create structured output schemas using Zod
Create a schemas directory and define your output structures. For example, create schemas/aiResponse.js:

```js
import { z } from 'zod';

export const ProductSchema = z.object({
  name: z.string(),
  description: z.string(),
  price: z.number().positive(),
  category: z.enum(['electronics', 'clothing', 'books']),
  tags: z.array(z.string()).optional()
});

export const AIResponseSchema = z.object({
  success: z.boolean(),
  data: ProductSchema,
  timestamp: z.string()
});
```

This creates type-safe schemas for validating AI outputs.

Build the AI API endpoint with structured validation
Create pages/api/ai/generate.js (or app/api/ai/generate/route.js for the App Router):

```js
import OpenAI from 'openai';
import { AIResponseSchema } from '../../../schemas/aiResponse';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).json({ success: false, error: 'Method not allowed' });
  }
  try {
    const completion = await openai.chat.completions.create({
      // JSON mode requires a model that supports response_format,
      // e.g. gpt-4o or gpt-4-turbo (plain gpt-4 does not)
      model: 'gpt-4o',
      messages: [{
        role: 'system',
        content: 'Return only valid JSON matching the product schema'
      }, {
        role: 'user',
        content: req.body.prompt
      }],
      response_format: { type: 'json_object' }
    });
    const parsed = JSON.parse(completion.choices[0].message.content);
    const validated = AIResponseSchema.parse({
      success: true,
      data: parsed,
      timestamp: new Date().toISOString()
    });
    res.status(200).json(validated);
  } catch (error) {
    res.status(500).json({ success: false, error: error.message });
  }
}
```

Configure environment variables in Vercel
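The same variables can also be added from the terminal with the Vercel CLI (a sketch; the CLI prompts for each value interactively and requires a linked project):

```shell
# Add the key to the production environment; the CLI prompts for the value
vercel env add OPENAI_API_KEY production
```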
In your Vercel project settings, add:

- OPENAI_API_KEY: your OpenAI API key
- NODE_ENV: set to 'production' (Vercel sets this automatically on production deployments)

For local development, create a .env.local file with the same variables, or pull them from Vercel with the CLI:

```bash
vercel env pull .env.local
```

Implement client-side integration with error handling
In a component (App Router components need the 'use client' directive to use hooks):

```jsx
'use client';
import { useState } from 'react';

export default function AIGenerator() {
  const [result, setResult] = useState(null);
  const [loading, setLoading] = useState(false);

  const generateContent = async (prompt) => {
    setLoading(true);
    try {
      const response = await fetch('/api/ai/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt })
      });
      if (!response.ok) throw new Error('Generation failed');
      const data = await response.json();
      setResult(data);
    } catch (error) {
      console.error('AI generation error:', error);
    } finally {
      setLoading(false);
    }
  };

  return (
    <div>
      {/* Your UI components, e.g. a prompt input that calls generateContent */}
    </div>
  );
}
```

Deploy and test your structured AI endpoints
Deploy with vercel --prod or push to your connected GitHub repository. Test your endpoints using the Vercel Functions tab in your dashboard: navigate to Functions > View Function Logs to monitor performance and errors. Use tools like Postman or curl to exercise your API:

```bash
curl -X POST https://your-app.vercel.app/api/ai/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Generate a product for electronics category"}'
```

Optimize performance and implement caching
Set cache headers in your handler so repeat requests can be served from Vercel's edge cache:

```js
res.setHeader('Cache-Control', 'public, s-maxage=300, stale-while-revalidate=600');
```

Consider using Vercel KV for caching AI responses:

```js
import { createHash } from 'crypto';
import { kv } from '@vercel/kv';

// Derive a stable cache key from the prompt
const hash = (text) => createHash('sha256').update(text).digest('hex');
const cacheKey = `ai-response-${hash(prompt)}`;

const cached = await kv.get(cacheKey);
if (cached) {
  return res.json(cached);
}
// Generate a new response, return it, and cache it for an hour
await kv.setex(cacheKey, 3600, response);
```

Monitor usage in the Vercel dashboard under Analytics.

Common Issues & Troubleshooting
Edge Runtime compatibility issues with certain packages
Check whether your dependencies support the Edge Runtime (it does not provide native Node.js APIs such as fs or net). Replace incompatible packages, or fall back to the Node.js serverless runtime, which supports the full Node.js API, by removing the Edge Runtime configuration from the route; prefer the Node.js runtime for complex operations.
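For a Pages Router route, the fallback can be made explicit (a sketch; 'nodejs' is also the default when no runtime is configured):

```javascript
// pages/api/ai/generate.js
// Explicitly pin this route to the Node.js serverless runtime.
export const config = { runtime: 'nodejs' };
```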
Function timeout errors during AI generation
Increase the function timeout in vercel.json:

```json
{
  "functions": {
    "pages/api/ai/*.js": { "maxDuration": 30 }
  }
}
```

Consider implementing streaming responses or breaking complex operations into smaller chunks.

Schema validation failures with AI responses
Add more specific prompting to ensure AI follows your schema. Implement fallback parsing with z.safeParse() and provide clear error messages. Use few-shot examples in your AI prompts to improve output consistency.
Environment variables not accessible in Edge Runtime
Ensure environment variables are properly configured in Vercel dashboard and available at build time. Some variables may need to be prefixed with NEXT_PUBLIC_ for client-side access, but never expose API keys this way.
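A small server-side guard can fail fast with a readable error instead of an opaque authentication failure from OpenAI when a key is missing (a sketch; requireEnv is a hypothetical helper, not a Vercel API):

```javascript
// Hypothetical helper: read a required server-side variable or fail loudly.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. at module scope in pages/api/ai/generate.js:
// const openai = new OpenAI({ apiKey: requireEnv('OPENAI_API_KEY') });
```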