RAG Integration
Enhance UI generation with Retrieval-Augmented Generation. Pass retrieved documents and context to make generated UIs data-aware.
How It Works
1. Retrieve
Query your vector database with the user's prompt
2. Augment
Pass retrieved documents as context to N4i
3. Generate
N4i creates context-aware UI from the data
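The three steps above reduce to a retrieve → augment → generate loop. A minimal sketch (the `retrieve` function is stubbed here as a stand-in for your vector search, and the N4i call is shown as a comment):

```typescript
// Sketch of the retrieve → augment → generate loop. `retrieve` is a
// stand-in for your vector database query; the N4i call is a comment.
type Doc = { title: string; content: string };

// 1. Retrieve: query your vector database (stubbed for illustration)
async function retrieve(query: string): Promise<Doc[]> {
  return [{ title: "Q4 Report", content: "Revenue: $1.2M" }];
}

// 2. Augment: format documents as plain context strings
function augment(docs: Doc[]): string[] {
  return docs.map((d) => `Document: ${d.title}\nContent: ${d.content}`);
}

// 3. Generate: hand the prompt and context to N4i
async function run(query: string) {
  const context = augment(await retrieve(query));
  // await stream(query, { context });
  return context;
}
```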
Client-Side RAG
Pass context when calling the generate/stream functions from your client.
import { useN4i, N4iRenderer } from "n4i-genui/react";
function RAGDashboard() {
const { ui, stream, isStreaming } = useN4i({
apiKey: process.env.NEXT_PUBLIC_N4I_API_KEY!,
baseUrl: "/api/n4i",
});
const handleQuery = async (query: string) => {
// 1. Retrieve relevant documents from your vector DB
const documents = await vectorSearch(query, { limit: 5 });
// 2. Format as context strings
const context = documents.map(doc =>
`Document: ${doc.title}\nContent: ${doc.content}\nMetadata: ${JSON.stringify(doc.metadata)}`
);
// 3. Pass context to N4i
await stream(query, {
context, // Array of strings gets joined with separators
});
};
return (
<div>
<SearchInput
onSubmit={handleQuery}
placeholder="Ask about your data..."
/>
{isStreaming && <LoadingIndicator />}
{ui && <N4iRenderer tree={ui} />}
</div>
);
}
// Example vector search function
async function vectorSearch(query: string, options: { limit: number }) {
const response = await fetch("/api/search", {
method: "POST",
    headers: { "Content-Type": "application/json" },
body: JSON.stringify({ query, ...options }),
});
return response.json();
}
Server-Side RAG
Perform retrieval on the server before generation for better security and performance.
// app/api/rag-n4i/route.ts
import { createN4iApiHandler } from "n4i-genui/api";
import { searchDocuments } from "@/lib/vector-db";
export const POST = createN4iApiHandler({
defaultModel: "gpt-4o",
// Transform requests to include retrieved context
onRequest: async (request) => {
const { prompt, history } = request;
// Perform vector search
const documents = await searchDocuments(prompt, {
limit: 5,
minScore: 0.7,
});
// Format retrieved documents as context
const ragContext = documents.map(doc =>
`---
Title: ${doc.title}
Source: ${doc.source}
Content: ${doc.content}
---`
).join("\n\n");
// Return modified request with context
return {
...request,
context: ragContext,
// Optionally override system prompt for RAG mode
systemPrompt: `You are a helpful assistant with access to the user's documents.
Use the provided context to generate accurate, data-driven UI.
If the context doesn't contain relevant information, say so.`,
};
},
// Optional: Log what was generated
onResponse: async (response) => {
console.log("Generated UI with RAG context");
return response;
},
});
Dynamic Context with getContext
Use the getContext callback in useN4iConversation for automatic per-message retrieval.
import { useN4iConversation, N4iRenderer } from "n4i-genui/react";
import { searchVectorDB } from "@/lib/vector-db";
function SmartChat() {
const {
messages,
sendMessage,
isStreaming,
ui,
} = useN4iConversation({
apiKey: process.env.NEXT_PUBLIC_N4I_API_KEY!,
baseUrl: "/api/n4i",
maxHistory: 10,
// Automatically retrieve context for each message
getContext: async (message) => {
// Search your vector database
const results = await searchVectorDB(message, {
collection: "company-docs",
limit: 3,
});
// Return context strings
return results.map(r => r.content);
},
});
return (
<div className="space-y-4">
{messages.map((msg, i) => (
<div key={i} className={msg.role === "user" ? "text-right" : ""}>
{msg.ui ? (
<N4iRenderer tree={msg.ui} />
) : (
<p>{msg.content}</p>
)}
</div>
))}
<input
onKeyDown={(e) => {
if (e.key === "Enter") {
sendMessage(e.currentTarget.value);
e.currentTarget.value = "";
}
}}
placeholder="Ask about your data..."
disabled={isStreaming}
/>
</div>
);
}
Context Format Best Practices
Structure your context for optimal UI generation results.
// ✅ Good: Structured context with clear metadata
const goodContext = [
`# Sales Report Q4 2024
Source: quarterly_report.pdf
Last Updated: 2024-12-15
Total Revenue: $1,245,000
Growth: +15.3% YoY
Top Product: Enterprise Plan ($450K)
Top Region: North America (42%)
Monthly Breakdown:
- October: $380K
- November: $425K
- December: $440K`,
`# Customer Data
Source: CRM Export
Records: 1,234 customers
Segments:
- Enterprise: 89 (45% revenue)
- SMB: 456 (35% revenue)
- Startup: 689 (20% revenue)
Churn Rate: 2.3%`
];
// ❌ Bad: Unstructured raw text
const badContext = "sales 1245000 growth 15 percent enterprise customers...";
// ✅ Good: JSON data with schema
const jsonContext = JSON.stringify({
type: "sales_data",
period: "Q4 2024",
metrics: {
revenue: 1245000,
growth: 0.153,
customers: 1234,
},
breakdown: [
{ month: "Oct", value: 380000 },
{ month: "Nov", value: 425000 },
{ month: "Dec", value: 440000 },
]
}, null, 2);
Pro Tip
Include data types, units, and relationships in your context. The AI can generate more accurate visualizations (charts, tables) when it understands the data structure.
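To illustrate the tip, here is a small hypothetical helper (not part of the n4i-genui API) that tags each metric with its unit and data type before it is passed as context:

```typescript
// Hypothetical helper: annotate raw metrics with unit and kind so the
// model can pick an appropriate visualization (e.g. currency → bar chart).
type Metric = {
  name: string;
  value: number;
  unit: string; // e.g. "USD", "%", "customers"
  kind: "currency" | "percent" | "count";
};

function describeMetrics(metrics: Metric[]): string {
  return metrics
    .map((m) => `${m.name}: ${m.value} ${m.unit} (${m.kind})`)
    .join("\n");
}
```

The resulting strings can be passed directly in the `context` array, alongside any document text.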
Complete RAG Pipeline
A full example showing document indexing, retrieval, and UI generation.
// lib/rag-pipeline.ts
import { OpenAIEmbeddings } from "@langchain/openai";
import { PineconeStore } from "@langchain/pinecone";
import { Pinecone } from "@pinecone-database/pinecone";
import type { Document } from "@langchain/core/documents";
// Initialize vector store
const pinecone = new Pinecone();
const index = pinecone.Index("documents");
const embeddings = new OpenAIEmbeddings();
// Document indexing (run once or on document upload)
export async function indexDocuments(documents: Document[]) {
await PineconeStore.fromDocuments(documents, embeddings, {
pineconeIndex: index,
namespace: "company-docs",
});
}
// Retrieval function
export async function retrieveContext(query: string): Promise<string[]> {
const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
pineconeIndex: index,
namespace: "company-docs",
});
const results = await vectorStore.similaritySearch(query, 5);
return results.map(doc =>
`Source: ${doc.metadata.source}\n${doc.pageContent}`
);
}
// API Route: app/api/rag-chat/route.ts
import { createN4iApiHandler } from "n4i-genui/api";
import { retrieveContext } from "@/lib/rag-pipeline";
export const POST = createN4iApiHandler({
onRequest: async (request) => {
const context = await retrieveContext(request.prompt || "");
return {
...request,
context,
};
},
});
// Client Component
import { useN4iChat, N4iMessageRenderer } from "n4i-genui/react";
function RAGChat() {
const { messages, sendMessage, isStreaming } = useN4iChat({
apiEndpoint: "/api/rag-chat",
});
return (
<div>
{messages.map((msg, i) => (
<div key={i}>
{msg.role === "user" ? (
<p>{msg.content}</p>
) : (
<N4iMessageRenderer content={msg.content} />
)}
</div>
))}
<input
onKeyDown={(e) => {
if (e.key === "Enter") {
sendMessage(e.currentTarget.value);
e.currentTarget.value = "";
}
}}
disabled={isStreaming}
placeholder="Ask about your documents..."
/>
</div>
);
}
Example RAG Prompts
"Show me our sales performance for Q4"
Retrieved context: Revenue data, regional breakdown, product performance
Generated UI: Dashboard with charts, KPI cards, and comparison tables
"Display customer John Smith's account details"
Retrieved context: CRM record with contact info, purchase history, support tickets
Generated UI: Profile card with data list, activity timeline, related orders
"Create a report comparing our top 5 products"
Retrieved context: Product catalog with sales figures, reviews, inventory
Generated UI: Comparison table, bar charts, product cards with metrics
Next Steps
- → Set up your API endpoint
- → Build a Chat Interface
- → Customize with Theming
