React Hooks
Use N4i hooks for full control over UI generation in your React applications. Choose the hook that best fits your use case.
Available Hooks
useN4i
Basic hook for single prompt/response generation
useN4iChat
Full-featured streaming chat with message history
useN4iConversation
Conversation management with RAG context support
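All three hooks are exported from the React entry point. The useN4i and useN4iConversation imports appear in the examples below; the useN4iChat import is assumed to follow the same pattern:

import { useN4i, useN4iChat, useN4iConversation } from "n4i-genui/react";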
useN4i Hook
The basic hook for generating UI from prompts. Supports both streaming and non-streaming modes.
import { useN4i, N4iRenderer } from "n4i-genui/react";
function GeneratorUI() {
const {
ui, // Generated UiNode or null
isLoading, // True during non-streaming requests
isStreaming, // True during streaming
error, // Error message if any
streamContent, // Raw streamed JSON content
generate, // Non-streaming generation
stream, // Streaming generation
reset, // Reset state
} = useN4i({
apiKey: process.env.NEXT_PUBLIC_N4I_API_KEY!,
baseUrl: "/api/n4i",
defaultModel: "gpt-4o",
onUIGenerated: (ui) => console.log("UI ready:", ui),
onError: (error) => console.error("Error:", error),
onStreamChunk: (chunk) => console.log("Chunk:", chunk),
});
return (
<div className="space-y-4">
{/* Generate Button */}
<button
onClick={() => stream("Create a user profile card with stats")}
disabled={isLoading || isStreaming}
className="px-4 py-2 bg-blue-500 rounded-lg disabled:opacity-50"
>
{isStreaming ? "Generating..." : "Generate UI"}
</button>
{/* Streaming Preview */}
{isStreaming && streamContent && (
<pre className="p-4 bg-slate-800 rounded-lg text-xs max-h-40 overflow-auto">
{streamContent.slice(-500)}
</pre>
)}
{/* Error Display */}
{error && (
<div className="p-4 bg-red-500/10 border border-red-500/30 rounded-lg">
Error: {error}
</div>
)}
{/* Rendered UI */}
{ui && (
<div className="p-4 bg-white rounded-lg">
<N4iRenderer
tree={ui}
onAction={(actionId, nodeId) => {
console.log("Action:", actionId, "Node:", nodeId);
}}
/>
</div>
)}
</div>
);
}
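For one-shot, non-streaming generation, call generate instead and gate the button on isLoading. A minimal sketch using only the documented return values:

import { useN4i, N4iRenderer } from "n4i-genui/react";
function OneShotUI() {
  const { ui, isLoading, error, generate } = useN4i({
    apiKey: process.env.NEXT_PUBLIC_N4I_API_KEY!,
    baseUrl: "/api/n4i",
  });
  return (
    <div className="space-y-4">
      <button
        onClick={() => generate("Create a pricing table")}
        disabled={isLoading}
      >
        {isLoading ? "Generating..." : "Generate UI"}
      </button>
      {error && <p>Error: {error}</p>}
      {ui && <N4iRenderer tree={ui} />}
    </div>
  );
}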
useN4i with RAG Context
Inject retrieved documents or context into the generation process.
import { useN4i, N4iRenderer } from "n4i-genui/react";
function RAGExample() {
const { ui, stream, isStreaming } = useN4i({
apiKey: process.env.NEXT_PUBLIC_N4I_API_KEY!,
baseUrl: "/api/n4i",
// Default context applied to all generations
defaultContext: "Company: Acme Corp, Industry: E-commerce",
// Default system prompt override
defaultSystemPrompt: "Generate UI for an e-commerce platform...",
});
const handleGenerate = async () => {
// Fetch relevant documents (e.g., from vector database)
const documents = await fetchRelevantDocs("sales data");
// Pass per-request context
await stream("Show me sales performance", {
context: documents.map(d => d.content),
// Or a single string
// context: "Q4 Revenue: $1.2M, Growth: 15%...",
});
};
return (
<div>
<button onClick={handleGenerate} disabled={isStreaming}>
Generate with Context
</button>
{ui && <N4iRenderer tree={ui} />}
</div>
);
}
useN4i Return Value
| Property | Type | Description |
|---|---|---|
| ui | UiNode \| null | Generated UI tree |
| isLoading | boolean | True during non-streaming requests |
| isStreaming | boolean | True during streaming |
| error | string \| null | Error message if generation failed |
| streamContent | string | Raw streamed JSON content |
| generate | (prompt: string, options?: GenerateOptions) => Promise | Non-streaming generation |
| stream | (prompt: string, options?: GenerateOptions) => Promise | Streaming generation |
| reset | () => void | Reset all state |
useN4iConversation Hook
Manage multi-turn conversations with history tracking and RAG context support.
import { useN4iConversation, N4iRenderer } from "n4i-genui/react";
function ChatInterface() {
const {
messages, // Array of ConversationMessage
sendMessage, // Send new message with optional context
clearHistory, // Clear conversation
ui, // Latest generated UI
isStreaming,
} = useN4iConversation({
apiKey: process.env.NEXT_PUBLIC_N4I_API_KEY!,
baseUrl: "/api/n4i",
maxHistory: 10, // Keep last 10 messages
// Dynamic RAG context fetcher
getContext: async (message) => {
const docs = await vectorSearch(message);
return docs.map(d => d.content);
},
});
return (
<div className="flex flex-col h-screen">
{/* Messages */}
<div className="flex-1 overflow-auto p-4 space-y-4">
{messages.map((msg, i) => (
<div
key={i}
className={`p-4 rounded-lg ${
msg.role === "user" ? "bg-blue-500/10 ml-auto" : "bg-gray-100"
}`}
>
{msg.ui ? (
<N4iRenderer tree={msg.ui} />
) : (
<p>{msg.content}</p>
)}
<span className="text-xs text-gray-500">
{msg.timestamp.toLocaleTimeString()}
</span>
</div>
))}
</div>
{/* Input */}
<form
onSubmit={(e) => {
e.preventDefault();
const input = e.currentTarget.elements.namedItem("message") as HTMLInputElement;
sendMessage(input.value);
input.value = "";
}}
className="p-4 border-t"
>
<div className="flex gap-2">
<input
name="message"
placeholder="Ask something..."
disabled={isStreaming}
className="flex-1 px-4 py-2 border rounded-lg"
/>
<button
type="submit"
disabled={isStreaming}
className="px-4 py-2 bg-blue-500 text-white rounded-lg"
>
{isStreaming ? "..." : "Send"}
</button>
</div>
</form>
</div>
);
}
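Per the hook comment above, sendMessage accepts optional context with each message. The options shape below is an assumption (mirroring GenerateOptions), not a confirmed signature:

// Hypothetical per-message context override; the second argument's
// shape is assumed to mirror GenerateOptions.
const docs = await vectorSearch("quarterly revenue");
await sendMessage("Show me a revenue breakdown", {
  context: docs.map((d) => d.content),
});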
ConversationMessage Type
interface ConversationMessage {
  role: "user" | "assistant";
  content: string;
  ui?: UiNode;
  timestamp: Date;
}
Generation Options
Both generate() and stream() accept these options:
interface GenerateOptions {
// Model to use (overrides defaultModel)
model?: string;
// Temperature (0-1, lower = more deterministic)
temperature?: number;
// Maximum tokens to generate
maxTokens?: number;
// Conversation history for context
history?: ChatMessage[];
// RAG context (single string or array)
context?: string | string[];
// Custom system prompt for this request
systemPrompt?: string;
}
// Example usage
await stream("Create a dashboard", {
model: "claude-3-5-sonnet",
temperature: 0.2,
context: ["Revenue: $1.2M", "Users: 50,000"],
});
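The history option also lets a one-off call carry prior turns without useN4iConversation. A sketch, assuming ChatMessage is a plain { role, content } pair:

// Sketch: manual multi-turn context via the history option.
// The ChatMessage shape ({ role, content }) is an assumption.
await stream("Refocus the dashboard on Q4 only", {
  history: [
    { role: "user", content: "Create a sales dashboard" },
    { role: "assistant", content: "Generated a dashboard with revenue and user metrics." },
  ],
});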
Custom Loading States
Build custom loading experiences using streaming state.
import { useN4i, N4iRenderer } from "n4i-genui/react";
import { motion, AnimatePresence } from "framer-motion";
function CustomLoadingUI() {
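// Trigger stream(...) from your own UI; the states below drive the animation.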
const { ui, isStreaming, streamContent, stream } = useN4i({
apiKey: process.env.NEXT_PUBLIC_N4I_API_KEY!,
});
return (
<div>
<AnimatePresence mode="wait">
{isStreaming ? (
<motion.div
key="loading"
initial={{ opacity: 0 }}
animate={{ opacity: 1 }}
exit={{ opacity: 0 }}
className="space-y-4"
>
{/* Animated Skeleton */}
<div className="animate-pulse space-y-3">
<div className="h-8 bg-gray-200 rounded w-1/3" />
<div className="h-4 bg-gray-200 rounded w-full" />
<div className="h-4 bg-gray-200 rounded w-2/3" />
</div>
{/* Live JSON Preview */}
<div className="text-xs font-mono text-gray-500 max-h-20 overflow-hidden">
{streamContent.slice(-200)}
</div>
{/* Progress Indicator */}
<div className="flex items-center gap-2 text-sm text-gray-500">
<span className="animate-spin">⟳</span>
Generating UI...
</div>
</motion.div>
) : ui ? (
<motion.div
key="ui"
initial={{ opacity: 0, y: 10 }}
animate={{ opacity: 1, y: 0 }}
>
<N4iRenderer tree={ui} />
</motion.div>
) : null}
</AnimatePresence>
</div>
);
}
Next Steps
- → Build a Chat Interface with useN4iChat
- → Add RAG Context for smarter generation
- → Set up your API endpoint
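
The baseUrl: "/api/n4i" used throughout assumes a server-side endpoint that forwards requests to the N4i service. One possible shape is a thin Next.js route-handler proxy; the upstream URL and env var names here (N4I_API_URL, N4I_API_KEY) are assumptions, not the official setup:

// app/api/n4i/route.ts — a sketch, not the official endpoint.
export async function POST(req: Request) {
  const body = await req.json();
  // Forward the request with a server-side key (hypothetical env vars).
  const upstream = await fetch(process.env.N4I_API_URL!, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.N4I_API_KEY}`,
    },
    body: JSON.stringify(body),
  });
  // Pass the (possibly streaming) response body straight through.
  return new Response(upstream.body, {
    status: upstream.status,
    headers: {
      "Content-Type": upstream.headers.get("Content-Type") ?? "application/json",
    },
  });
}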
