# API Handler
Create server-side API endpoints with built-in streaming, rate limiting, CORS, and API key validation. Works with Next.js App Router.
- **Streaming**: SSE support out of the box
- **Secure**: API key validation & rate limiting
- **Multi-Provider**: OpenAI, Anthropic, Ollama, etc.
## Quick Start
Create a fully-featured N4i API endpoint in just a few lines.
```ts
// app/api/n4i/route.ts
import { createN4iApiHandler } from "n4i-genui/api";

// One-line setup with sensible defaults
export const POST = createN4iApiHandler();

// For CORS preflight (required for cross-origin requests)
export const OPTIONS = createN4iApiHandler().options;
```

That's it! Your endpoint now handles streaming responses, JSON parsing, and error handling, and connects to your configured AI provider automatically.
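On the client side, a streaming response from this endpoint arrives as SSE frames (`data: {"content": "..."}` lines, terminated by `data: [DONE]`, as emitted in the Custom Provider Integration section below). A minimal sketch of consuming that stream — `parseSseLine` and `generateUi` are illustrative helper names, not part of the library:

```typescript
// Extract the `content` field from one SSE line, or null for
// non-data lines, parse failures, and the `[DONE]` sentinel.
function parseSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  try {
    const parsed = JSON.parse(payload) as { content?: string };
    return parsed.content ?? null;
  } catch {
    return null;
  }
}

// Hypothetical client usage: POST a prompt and accumulate streamed content.
async function generateUi(prompt: string): Promise<string> {
  const res = await fetch("/api/n4i", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, stream: true }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  let output = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep a partial line for the next chunk
    for (const line of lines) {
      const content = parseSseLine(line);
      if (content) output += content;
    }
  }
  return output;
}
```

Buffering the trailing partial line matters because network chunks are not aligned to SSE event boundaries.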
## Full Configuration
Customize every aspect of your API handler for production use.
```ts
// app/api/n4i/route.ts
import { createN4iApiHandler } from "n4i-genui/api";

export const POST = createN4iApiHandler({
  // AI model configuration
  defaultModel: "claude-3-5-sonnet-20241022",
  temperature: 0.2,
  maxTokens: 8192,
  jsonMode: true, // Request JSON output from the model

  // Custom system prompt
  systemPrompt: `You are a UI generation assistant for a fintech dashboard.
Generate UI components using our design system.
Always use professional, accessible colors.`,

  // API key validation
  requireApiKey: true,
  apiKeyValidator: async (key) => {
    // Custom validation (e.g., database lookup)
    const isValid = await validateKeyInDB(key);
    return isValid;
  },

  // Rate limiting
  enableRateLimit: true,
  rateLimitPerMinute: 30,

  // CORS configuration
  enableCors: true,
  corsOrigins: [
    "https://myapp.com",
    "https://staging.myapp.com",
  ],

  // Request/response hooks
  onRequest: async (request) => {
    // Transform or validate the request
    console.log("Request:", request.prompt);
    return request;
  },
  onResponse: async (response) => {
    // Transform the response, add logging, etc.
    await logToAnalytics(response);
    return response;
  },
});

export const OPTIONS = createN4iApiHandler({ enableCors: true }).options;
```

### Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| `defaultModel` | `string` | `"gpt-4o"` | Default AI model to use |
| `systemPrompt` | `string` | `N4I_SYSTEM_PROMPT` | System prompt for generation |
| `temperature` | `number` | `0.1` | Model temperature (0-1) |
| `maxTokens` | `number` | `32768` | Maximum tokens to generate |
| `jsonMode` | `boolean` | `true` | Request JSON output |
| `requireApiKey` | `boolean` | `false` | Require API key validation |
| `apiKeyValidator` | `(key) => Promise<boolean>` | `validateApiKey` | Custom key validator |
| `enableRateLimit` | `boolean` | `false` | Enable rate limiting |
| `rateLimitPerMinute` | `number` | `60` | Requests per minute |
| `enableCors` | `boolean` | `true` | Add CORS headers |
| `corsOrigins` | `string[]` | `["*"]` | Allowed origins |
| `onRequest` | `(req) => Promise<req>` | - | Request preprocessor |
| `onResponse` | `(res) => Promise<res>` | - | Response postprocessor |
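The `apiKeyValidator` option accepts any async predicate. As a sketch, a validator could be backed by the `key1:name1,key2:name2` format that the `N4I_API_KEYS` environment variable uses (see Environment Variables below) — `parseApiKeys` and `makeEnvKeyValidator` here are illustrative names, not library exports:

```typescript
// Parse the `key1:name1,key2:name2` format into a Map of key -> name.
function parseApiKeys(raw: string): Map<string, string> {
  const keys = new Map<string, string>();
  for (const pair of raw.split(",")) {
    const [key, name] = pair.split(":");
    if (key?.trim()) keys.set(key.trim(), (name ?? "").trim());
  }
  return keys;
}

// Build a validator suitable for the `apiKeyValidator` option,
// e.g. makeEnvKeyValidator(process.env.N4I_API_KEYS ?? "").
function makeEnvKeyValidator(raw: string) {
  const keys = parseApiKeys(raw);
  return async (key: string): Promise<boolean> => keys.has(key);
}
```

Parsing once at startup (rather than inside the validator) keeps the per-request path to a single `Map` lookup.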
## Request Format

The API accepts POST requests with a JSON body. Three formats are supported:
### Simple Prompt
```jsonc
{
  "prompt": "Create a user dashboard",
  "model": "gpt-4o",           // optional
  "stream": true,              // optional
  "temperature": 0.2,          // optional
  "context": "User: John Doe"  // optional RAG context
}
```

### With History
```json
{
  "prompt": "Add a chart to it",
  "history": [
    {"role": "user", "content": "Create..."},
    {"role": "assistant", "content": "..."}
  ],
  "context": ["Doc 1", "Doc 2"]
}
```

### OpenAI-Compatible Format
```json
{
  "model": "n4i-default",
  "messages": [
    {"role": "system", "content": "Custom context..."},
    {"role": "user", "content": "Create a dashboard"}
  ],
  "stream": true,
  "temperature": 0.1,
  "max_tokens": 8192
}
```

## Custom Provider Integration
For advanced use cases, use the low-level provider APIs directly.
```ts
// app/api/custom-n4i/route.ts
import { streamProvider, N4I_SYSTEM_PROMPT, getCorsHeaders } from "n4i-genui/server";

export async function POST(request: Request) {
  const { prompt, context } = await request.json();

  // Build messages with optional RAG context
  const systemPrompt = context
    ? `${N4I_SYSTEM_PROMPT}\n\nContext: ${context}`
    : N4I_SYSTEM_PROMPT;

  const messages = [
    { role: "system" as const, content: systemPrompt },
    { role: "user" as const, content: prompt },
  ];

  // Create a streaming SSE response
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      try {
        for await (const chunk of streamProvider("gpt-4o", messages)) {
          if (chunk.content) {
            const data = JSON.stringify({ content: chunk.content });
            controller.enqueue(encoder.encode(`data: ${data}\n\n`));
          }
          if (chunk.done) {
            controller.enqueue(encoder.encode("data: [DONE]\n\n"));
            break;
          }
          if (chunk.error) {
            const data = JSON.stringify({ error: chunk.error });
            controller.enqueue(encoder.encode(`data: ${data}\n\n`));
            break;
          }
        }
      } finally {
        controller.close();
      }
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      "Connection": "keep-alive",
      ...getCorsHeaders({ allowedOrigins: ["*"] }, request),
    },
  });
}
```

## Environment Variables
Configure your API keys and settings via environment variables.
```bash
# .env.local

# AI provider keys (at least one required)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_AI_API_KEY=AIza...
GROQ_API_KEY=gsk_...
OPENROUTER_API_KEY=sk-or-...

# Vercel AI Gateway (optional, for unified access)
AI_GATEWAY_API_KEY=...
USE_VERCEL_AI_GATEWAY=true

# Ollama (local models)
OLLAMA_URL=http://localhost:11434
OLLAMA_API_KEY=...                  # For Ollama cloud

# N4i configuration
N4I_DEV_API_KEY=n4i-dev-key         # Development key
N4I_API_KEYS=key1:name1,key2:name2  # Production keys
N4I_CORS_ORIGINS=https://myapp.com  # Allowed origins
DEFAULT_MODEL=gpt-4o                # Default model
```

## Testing with cURL
Quick commands to test your API endpoint.
```bash
# Non-streaming request
curl -X POST http://localhost:3000/api/n4i \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer n4i-dev-key" \
  -d '{
    "prompt": "Create a simple card with a hello message",
    "stream": false
  }'

# Streaming request
curl -X POST http://localhost:3000/api/n4i \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Create a dashboard", "stream": true}'

# With RAG context
curl -X POST http://localhost:3000/api/n4i \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Show user profile",
    "context": ["User: John, Role: Admin, Joined: 2024"]
  }'
```

## Next Steps
- → Add RAG Integration for context-aware generation
- → Connect with Frontend Widget
- → Check the full API Reference
