API Handler

Create server-side API endpoints with built-in streaming, rate limiting, CORS, and API key validation. Works with Next.js App Router.

- Streaming: SSE support out of the box
- Secure: API key validation & rate limiting
- Multi-Provider: OpenAI, Anthropic, Ollama, etc.

Quick Start

Create a fully-featured N4i API endpoint in just a few lines.

TypeScript
// app/api/n4i/route.ts
import { createN4iApiHandler } from "n4i-genui/api";

// One line setup with sensible defaults
export const POST = createN4iApiHandler();

// For CORS preflight (required for cross-origin requests)
export const OPTIONS = createN4iApiHandler().options;

That's it!

Your endpoint now handles streaming responses, JSON parsing, error handling, and connects to your configured AI provider automatically.
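As a sketch of how a client might call this endpoint, here is a small helper that builds the fetch options (the request shape follows the Request Format section below; `buildN4iRequest` is a hypothetical name, not part of the library):

```typescript
// Hypothetical helper (not part of n4i-genui): builds fetch options
// for the endpoint created above.
function buildN4iRequest(
  prompt: string,
  opts: { stream?: boolean; apiKey?: string } = {}
) {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (opts.apiKey) headers["Authorization"] = `Bearer ${opts.apiKey}`;
  return {
    method: "POST" as const,
    headers,
    body: JSON.stringify({ prompt, stream: opts.stream ?? true }),
  };
}

// Usage: const res = await fetch("/api/n4i", buildN4iRequest("Create a dashboard"));
```
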

Full Configuration

Customize every aspect of your API handler for production use.

TypeScript
// app/api/n4i/route.ts
import { createN4iApiHandler } from "n4i-genui/api";

export const POST = createN4iApiHandler({
  // AI Model Configuration
  defaultModel: "claude-3-5-sonnet-20241022",
  temperature: 0.2,
  maxTokens: 8192,
  jsonMode: true,  // Request JSON output from model

  // Custom System Prompt
  systemPrompt: `You are a UI generation assistant for a fintech dashboard.
Generate UI components using our design system.
Always use professional, accessible colors.`,

  // API Key Validation
  requireApiKey: true,
  apiKeyValidator: async (key) => {
    // Custom validation (e.g., database lookup)
    const isValid = await validateKeyInDB(key);
    return isValid;
  },

  // Rate Limiting
  enableRateLimit: true,
  rateLimitPerMinute: 30,

  // CORS Configuration
  enableCors: true,
  corsOrigins: [
    "https://myapp.com",
    "https://staging.myapp.com"
  ],

  // Request/Response Hooks
  onRequest: async (request) => {
    // Transform or validate request
    console.log("Request:", request.prompt);
    return request;
  },
  onResponse: async (response) => {
    // Transform response, add logging, etc.
    await logToAnalytics(response);
    return response;
  },
});

export const OPTIONS = createN4iApiHandler({ enableCors: true }).options;

Configuration Options

| Option | Type | Default | Description |
|---|---|---|---|
| `defaultModel` | `string` | `"gpt-4o"` | Default AI model to use |
| `systemPrompt` | `string` | `N4I_SYSTEM_PROMPT` | System prompt for generation |
| `temperature` | `number` | `0.1` | Model temperature (0-1) |
| `maxTokens` | `number` | `32768` | Max tokens to generate |
| `jsonMode` | `boolean` | `true` | Request JSON output |
| `requireApiKey` | `boolean` | `false` | Require API key validation |
| `apiKeyValidator` | `(key) => Promise<boolean>` | `validateApiKey` | Custom key validator |
| `enableRateLimit` | `boolean` | `false` | Enable rate limiting |
| `rateLimitPerMinute` | `number` | `60` | Requests per minute |
| `enableCors` | `boolean` | `true` | Add CORS headers |
| `corsOrigins` | `string[]` | `["*"]` | Allowed origins |
| `onRequest` | `(req) => Promise<req>` | - | Request preprocessor |
| `onResponse` | `(res) => Promise<res>` | - | Response postprocessor |
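Any async predicate works for `apiKeyValidator`. A minimal sketch, assuming keys live in an in-memory set (a real deployment would query a database or secrets store instead):

```typescript
// Hypothetical key store; swap for a database lookup in production.
const VALID_KEYS = new Set(["n4i-dev-key", "n4i-prod-key-1"]);

const apiKeyValidator = async (key: string): Promise<boolean> => {
  // Reject empty/whitespace keys before touching any backing store.
  if (!key || !key.trim()) return false;
  return VALID_KEYS.has(key.trim());
};
```
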

Request Format

The API accepts POST requests with a JSON body. Two formats are supported: the native n4i format (a simple prompt, with or without conversation history) and an OpenAI-compatible format.

Simple Prompt

JSON
{
  "prompt": "Create a user dashboard",
  "model": "gpt-4o",           // optional
  "stream": true,              // optional
  "temperature": 0.2,          // optional
  "context": "User: John Doe"  // optional RAG
}

With History

JSON
{
  "prompt": "Add a chart to it",
  "history": [
    {"role": "user", "content": "Create..."},
    {"role": "assistant", "content": "..."}
  ],
  "context": ["Doc 1", "Doc 2"]
}

OpenAI-Compatible Format

JSON
{
  "model": "n4i-default",
  "messages": [
    {"role": "system", "content": "Custom context..."},
    {"role": "user", "content": "Create a dashboard"}
  ],
  "stream": true,
  "temperature": 0.1,
  "max_tokens": 8192
}
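The two shapes map onto each other. As an illustrative sketch (the function name and context handling are assumptions, not the library's internals), a native-format body can be converted to OpenAI-compatible messages like this:

```typescript
type Msg = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical converter: turns a native n4i body into OpenAI-style messages.
function toOpenAiMessages(body: {
  prompt: string;
  history?: Msg[];
  context?: string | string[];
}): Msg[] {
  const messages: Msg[] = [];
  if (body.context) {
    // Fold RAG context into a system message.
    const ctx = Array.isArray(body.context) ? body.context.join("\n") : body.context;
    messages.push({ role: "system", content: `Context:\n${ctx}` });
  }
  messages.push(...(body.history ?? []));
  messages.push({ role: "user", content: body.prompt });
  return messages;
}
```
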

Custom Provider Integration

For advanced use cases, use the low-level provider APIs directly.

TypeScript
// app/api/custom-n4i/route.ts
import { streamProvider, N4I_SYSTEM_PROMPT, getCorsHeaders } from "n4i-genui/server";

export async function POST(request: Request) {
  const { prompt, context } = await request.json();

  // Build messages with optional RAG context
  const systemPrompt = context
    ? `${N4I_SYSTEM_PROMPT}\n\nContext: ${context}`
    : N4I_SYSTEM_PROMPT;

  const messages = [
    { role: "system" as const, content: systemPrompt },
    { role: "user" as const, content: prompt },
  ];

  // Create streaming response
  const encoder = new TextEncoder();
  
  const stream = new ReadableStream({
    async start(controller) {
      try {
        for await (const chunk of streamProvider("gpt-4o", messages)) {
          if (chunk.content) {
            const data = JSON.stringify({ content: chunk.content });
            controller.enqueue(encoder.encode(`data: ${data}\n\n`));
          }
          
          if (chunk.done) {
            controller.enqueue(encoder.encode("data: [DONE]\n\n"));
            break;
          }
          
          if (chunk.error) {
            const data = JSON.stringify({ error: chunk.error });
            controller.enqueue(encoder.encode(`data: ${data}\n\n`));
            break;
          }
        }
      } catch (err) {
        // Surface provider failures to the client instead of letting the
        // exception escape the stream unreported.
        const data = JSON.stringify({
          error: err instanceof Error ? err.message : String(err),
        });
        controller.enqueue(encoder.encode(`data: ${data}\n\n`));
      } finally {
        controller.close();
      }
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      "Connection": "keep-alive",
      ...getCorsHeaders({ allowedOrigins: ["*"] }, request),
    },
  });
}
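On the consuming side, the SSE frames emitted above can be decoded with a small helper. A sketch (not a library API) that extracts `content` payloads from a buffered chunk of the stream:

```typescript
// Hypothetical parser for the SSE frames produced by the route above.
// Returns the concatenated content and whether the [DONE] sentinel was seen.
function parseSseChunk(chunk: string): { content: string; done: boolean } {
  let content = "";
  let done = false;
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") {
      done = true;
      break;
    }
    try {
      const parsed = JSON.parse(payload);
      if (typeof parsed.content === "string") content += parsed.content;
    } catch {
      // Ignore partial frames; a real client would buffer until "\n\n".
    }
  }
  return { content, done };
}
```
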

Environment Variables

Configure your API keys and settings via environment variables.

Bash
# .env.local

# AI Provider Keys (at least one required)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_AI_API_KEY=AIza...
GROQ_API_KEY=gsk_...
OPENROUTER_API_KEY=sk-or-...

# Vercel AI Gateway (optional, for unified access)
AI_GATEWAY_API_KEY=...
USE_VERCEL_AI_GATEWAY=true

# Ollama (local models)
OLLAMA_URL=http://localhost:11434
OLLAMA_API_KEY=...  # For Ollama cloud

# N4i Configuration
N4I_DEV_API_KEY=n4i-dev-key              # Development key
N4I_API_KEYS=key1:name1,key2:name2       # Production keys
N4I_CORS_ORIGINS=https://myapp.com       # Allowed origins
DEFAULT_MODEL=gpt-4o                      # Default model
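
The `N4I_API_KEYS` format above (comma-separated `key:name` pairs) can be parsed into a lookup map. A sketch, assuming exactly that format (`parseApiKeys` is a hypothetical helper, not exported by the library):

```typescript
// Hypothetical parser for the N4I_API_KEYS format: "key1:name1,key2:name2".
// Returns a map from API key to its human-readable name.
function parseApiKeys(raw: string | undefined): Map<string, string> {
  const keys = new Map<string, string>();
  if (!raw) return keys;
  for (const pair of raw.split(",")) {
    const [key, name] = pair.split(":").map((s) => s.trim());
    if (key) keys.set(key, name ?? "");
  }
  return keys;
}

// Usage: const keys = parseApiKeys(process.env.N4I_API_KEYS);
```
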

Testing with cURL

Quick commands to test your API endpoint.

Bash
# Non-streaming request
curl -X POST http://localhost:3000/api/n4i \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer n4i-dev-key" \
  -d '{
    "prompt": "Create a simple card with a hello message",
    "stream": false
  }'

# Streaming request
curl -X POST http://localhost:3000/api/n4i \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Create a dashboard", "stream": true}'

# With RAG context
curl -X POST http://localhost:3000/api/n4i \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Show user profile",
    "context": ["User: John, Role: Admin, Joined: 2024"]
  }'

Next Steps