# Why components?
GenSX uses components for building workflows and focuses on composition over abstraction. Many LLM and agent frameworks add abstractions to make it easier to get up and running, but this approach takes you further away from the underlying models and can make your workflows harder to understand, debug, and improve.
Components in GenSX take inspiration from React's programming model, but are designed for building the backend of your LLM applications. Components offer many benefits: they're reusable, idempotent, and can be tested in isolation. They also provide good boundaries for tracing, retries, and error handling.
This page explains why components are a perfect fit for anyone building LLM applications, whether that means simple linear workflows or complex cyclical agents. At the end of the day, building agents and workflows is all about constructing a dataflow graph, and agents in particular need to branch dynamically and execute conditionally at runtime. This is exactly what GenSX excels at.
## Why not graphs?
Graph APIs are the standard for LLM frameworks. They provide APIs to define nodes, edges between those nodes, and a global state object that is passed around the workflow.
A workflow for writing a blog post might look like this:
```ts
const graph = new Graph()
.addNode("fetchHNPosts", FetchHNPosts)
.addNode("analyzeHNPosts", AnalyzeHNPosts)
.addNode("generateReport", GenerateReport)
.addNode("editReport", EditReport)
.addNode("writeTweet", WriteTweet);
graph
.addEdge(START, "fetchHNPosts")
.addEdge("fetchHNPosts", "analyzeHNPosts")
.addEdge("analyzeHNPosts", "generateReport")
.addEdge("generateReport", "editReport")
.addEdge("editReport", "writeTweet")
.addEdge("writeTweet", END);
```
Can you easily read this code and visualize the workflow?
On the other hand, the same workflow with GenSX reads top to bottom like normal code:
```ts
const AnalyzeHackerNewsTrends = gensx.Workflow(
"AnalyzeHackerNewsTrends",
async ({ postCount }) => {
const stories = await FetchHNPosts({ limit: postCount });
const { analyses } = await AnalyzeHNPosts({ stories });
const report = await GenerateReport({ analyses });
const editedReport = await EditReport({ content: report });
const tweet = await WriteTweet({
context: editedReport,
prompt: "Summarize the HN trends in a tweet",
});
return { report: editedReport, tweet };
},
);
```
As you'll see in the next section, trees are just another kind of graph and you can express all of the same things.
## Graphs, DAGs, and trees
Most workflow frameworks use explicit graph construction with nodes and edges. This makes sense - workflows are fundamentally about connecting steps together, and graphs are a natural way to represent these connections.
However, using components lets you express trees, and trees are just a special kind of graph - one where each node has a single parent. At first glance, this might seem more restrictive than a general graph. But components give us something powerful: the ability to express _programmatic_ trees.
Consider a cycle in a workflow:
```ts
const Reflection = gensx.Component(
  "Reflection",
  async ({ input }: { input: string }) => {
    let { needsWork, feedback } = await EvaluateFn({ input });
    let improvedInput = input;
    while (needsWork) {
      improvedInput = await ImproveFn({ input: improvedInput, feedback });
      ({ needsWork, feedback } = await EvaluateFn({ input: improvedInput }));
    }
    return improvedInput;
  },
);
```
TypeScript and components allow you to express cycles through normal programming constructs. This gives you the best of both worlds:
- Visual clarity of a tree structure
- Full expressiveness of a graph API
- Natural control flow through standard TypeScript
- No explicit edge definitions needed
## Pure functional components
GenSX uses a functional component model, enabling you to compose your workflows from discrete, reusable steps.
Functional and reusable components can be published and shared on `npm`, and it's easy to test and evaluate them in isolation.
Writing robust evals is the difference between a prototype and a high-quality AI app. You usually start with end-to-end evals, but as workflows grow these become expensive, slow to run, and make it hard to isolate and understand the impact of changes in your workflow.
By breaking your workflow down into discrete components, you can write more focused evals that are cheaper to run, faster to complete, and measure the impact of specific changes.
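Because components are plain async functions, an eval can call one directly with fixture inputs. Here's a minimal sketch in plain TypeScript; `summarize`, `evalSummarize`, and the stubbed model are hypothetical stand-ins, not GenSX APIs:

```typescript
// Hypothetical component-style function; in a real app this would wrap an LLM call.
async function summarize({
  text,
  model,
}: {
  text: string;
  model: (prompt: string) => Promise<string>;
}): Promise<string> {
  return model(`Summarize: ${text}`);
}

// A focused eval: stub the model so the check is fast, cheap, and deterministic.
async function evalSummarize(): Promise<boolean> {
  const stubModel = async (prompt: string) => `stubbed summary of "${prompt}"`;
  const result = await summarize({ text: "GenSX components", model: stubModel });
  // Assert on the properties you care about rather than exact strings.
  return result.includes("GenSX components");
}
```

The same pattern scales up: swap the stub for a real model to run the eval against production behavior, or keep the stub to test your surrounding logic in CI.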
## Programming language native
TypeScript and components give you everything you need to build complex workflows:
- Conditionals via `if`, `??`, and other standard primitives
- Looping via `for` and `while`
- Vanilla function calling
- Type safety
No DSL required, just standard TypeScript.
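As a concrete illustration, a reflection-style cycle is just a loop and a conditional. This sketch uses plain synchronous functions; `evaluate` and `improve` are hypothetical stand-ins for components:

```typescript
// Hypothetical stand-ins for components: a critic and an editor.
function evaluate(draft: string): { needsWork: boolean; feedback: string } {
  return draft.length < 20
    ? { needsWork: true, feedback: "too short, add detail" }
    : { needsWork: false, feedback: "" };
}

function improve(draft: string, feedback: string): string {
  return `${draft} (revised: ${feedback})`;
}

// Standard control flow expresses the cycle; no edge definitions needed.
function reflect(draft: string, maxIterations = 3): string {
  let current = draft;
  for (let i = 0; i < maxIterations; i++) {
    const { needsWork, feedback } = evaluate(current);
    if (!needsWork) break;
    current = improve(current, feedback);
  }
  return current;
}
```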
# TypeScript Compatibility
GenSX is designed to be compatible with TypeScript. This page has some tips and tricks for using GenSX with TypeScript.
## Minimum TypeScript Version
GenSX requires TypeScript version 5.1 or higher.
## TypeScript Configuration
- `target` should be set to at least `es6`
- `module` should be set to at least `es6`
## JavaScript Compatibility
GenSX is bundled with builds for both CommonJS and ESM. This means that you can use GenSX in any environment that supports either of these module formats.
# Use a template
GenSX provides several templates to help you get up and running quickly. This guide will walk you through cloning and deploying one of the GenSX app templates of your choice.
If you're looking for a more detailed guide on how to build GenSX Workflows, check out the [quickstart](/docs/quickstart) guide.
## Select a template
To get started, select one of the templates below:
The [Chat UX template](https://github.com/gensx-inc/chat-ux-template) is a full-featured chat application with streaming, tools, and thread history.
This Next.js chat template provides a great starting point for building chat-based applications with GenSX. Key features include:
- **Streaming chat**: Real-time message streaming with instant responses
- **AI thinking**: Visible AI reasoning process before generating responses
- **Tool integration**: Built-in tools for web scraping, search, and data processing
- **Thread history**: Persistent chat history using GenSX Storage, separated by user
The [Deep Research template](https://github.com/gensx-inc/deep-research-template) is a deep research tool that writes detailed reports on any topic.
This Next.js app template provides both the front-end and back-end for a deep research tool with results iteratively streamed to the app to keep users engaged. Key features include:
- **Multi-step streaming workflow**: Iterative research process that builds comprehensive reports
- **Streaming UX**: The entire research process is streamed in real-time and users can see the results of each step including text from search results
- **Web search and summarization**: Web search using Tavily along with extractive summarization to manage context size
- **Detailed report generation**: Creates structured, in-depth reports on any research topic
- **History**: Persistent storage of research history for each user
The [Draft Pad template](https://github.com/gensx-inc/draft-pad-template) is a real-time collaborative writing and editing tool with versioning and diffing.
This Next.js template shows how to build AI-powered writing tools with GenSX workflows. Key features include:
- **Real-time streaming**: Content updates live as the AI generates text
- **Interactive chat**: Conversational interface with full message history
- **Draft versioning**: Track changes and navigate between different versions
- **Live progress tracking**: See detailed workflow progress and event updates
- **Multi-provider support**: Works with OpenAI, Anthropic, Google, and many other AI providers
The [Client Side Tools template](https://github.com/gensx-inc/client-side-tools-template) is an app showing how to call tools on the client side to build interactive apps. This is demonstrated via a "zap map" application that lets the AI control the map, including panning around and placing markers.
## Log in to GenSX
If you haven't already, download the GenSX CLI and log in to GenSX using the following commands:
```bash
# Install the CLI
npm i -g gensx
# Log in to GenSX Cloud
gensx login
```
You'll be redirected to the GenSX website and will need to create an account if you don't have one already.
## Download the template
Next, download the template by running the following command:
```bash
gensx template clone chat-ux
```
This will clone the template into a new directory called `chat-ux`.
Alternatively, you can [create your own GitHub repository from the template](https://github.com/gensx-inc/chat-ux-template/generate) and then clone it yourself.
```bash
gensx template clone deep-research
```
This will clone the template into a new directory called `deep-research`.
Alternatively, you can [create your own GitHub repository from the template](https://github.com/gensx-inc/deep-research-template/generate) and then clone it yourself.
```bash
gensx template clone draft-pad
```
This will clone the template into a new directory called `draft-pad`.
Alternatively, you can [create your own GitHub repository from the template](https://github.com/gensx-inc/draft-pad-template/generate) and then clone it yourself.
```bash
gensx template clone client-side-tools
```
This will clone the template into a new directory called `client-side-tools`.
Alternatively, you can [create your own GitHub repository from the template](https://github.com/gensx-inc/client-side-tools-template/generate) and then clone it yourself.
## Run the template
Now that you've cloned the template, navigate to the template directory:
```bash
cd chat-ux
```
Next, configure the required API keys for your template. This template requires [Tavily](https://tavily.com) and [Anthropic](https://www.anthropic.com) API keys.
```bash
export TAVILY_API_KEY=your-api-key
export ANTHROPIC_API_KEY=your-api-key
```
```bash
cd deep-research
```
Next, configure the required API keys for your template. This template requires [Tavily](https://tavily.com), [Anthropic](https://www.anthropic.com), and [OpenAI](https://openai.com) API keys.
```bash
export TAVILY_API_KEY=your-api-key
export ANTHROPIC_API_KEY=your-api-key
export OPENAI_API_KEY=your-api-key
```
```bash
cd draft-pad
```
Next, configure the required API keys for your template. This template requires at least one AI provider API key. You can use [Anthropic](https://www.anthropic.com), [OpenAI](https://openai.com), [Google](https://ai.google.dev), or any other supported provider shown below.
```bash
# Choose one or more AI providers
# OpenAI
export OPENAI_API_KEY=your-api-key
# Anthropic
export ANTHROPIC_API_KEY=your-api-key
# Google
export GOOGLE_GENERATIVE_AI_API_KEY=your-google-api-key
# Mistral
export MISTRAL_API_KEY=your-mistral-api-key
# Cohere
export COHERE_API_KEY=your-cohere-api-key
# Amazon Bedrock (AWS)
export AWS_ACCESS_KEY_ID=your-aws-access-key
export AWS_SECRET_ACCESS_KEY=your-aws-secret-key
export AWS_REGION=us-east-1
# Azure OpenAI
export AZURE_OPENAI_API_KEY=your-azure-openai-api-key
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
# DeepSeek
export DEEPSEEK_API_KEY=your-deepseek-api-key
# Groq
export GROQ_API_KEY=your-groq-api-key
# xAI (Grok)
export XAI_API_KEY=your-xai-api-key
```
```bash
cd client-side-tools
```
Next, configure the required API keys for your template. This template requires both [Tavily](https://tavily.com) and [Anthropic](https://www.anthropic.com) API keys.
```bash
export TAVILY_API_KEY=your-api-key
export ANTHROPIC_API_KEY=your-api-key
```
Start the development server:
```bash
npm run dev
```
This will start the Next.js app on [localhost:3000](http://localhost:3000) and run the GenSX workflows on [localhost:1337](http://localhost:1337/swagger-ui).
You should see an output in the CLI that looks something like this:
```bash
[0]
[0] > @examples/chat-ux@0.1.0 dev:next
[0] > next dev
[0]
[1]
[1] > @examples/chat-ux@0.1.0 dev:gensx
[1] > gensx start ./gensx/workflows.ts
[1]
[1] Starting GenSX dev server...
[0] ▲ Next.js 15.3.3
[0] - Local: http://localhost:3000
[0] - Network: http://192.168.0.93:3000
[0]
[0] ✓ Starting...
[1]
[1] 🚀 GenSX Dev Server running at http://localhost:1337
[1] 🧪 Swagger UI available at http://localhost:1337/swagger-ui
[1]
[1] ╭────────────────────────────────────────────╮
[1] │ Available workflows:                       │
[1] │ ────────────────────────────────────────── │
[1] │ Chat: http://localhost:1337/workflows/Chat │
[1] ╰────────────────────────────────────────────╯
[1]
[1] Listening for changes... 4:15:56 PM
[0] ✓ Ready in 850ms
```
For more detailed information about the template, check out the [template's README](https://github.com/gensx-inc/chat-ux-template/blob/main/README.md).
```bash
[1]
[1] > @examples/deep-research@0.1.0 dev:gensx
[1] > gensx start ./gensx/workflows.ts
[1]
[0]
[0] > @examples/deep-research@0.1.0 dev:next
[0] > next dev
[0]
[1] Starting GenSX dev server...
[0] ▲ Next.js 15.3.3
[0] - Local: http://localhost:3000
[0] - Network: http://192.168.0.93:3000
[0]
[0] ✓ Starting...
[1]
[1] 🚀 GenSX Dev Server running at http://localhost:1337
[1] 🧪 Swagger UI available at http://localhost:1337/swagger-ui
[1]
[1] ╭────────────────────────────────────────────────────────────╮
[1] │ Available workflows:                                       │
[1] │ ────────────────────────────────────────────────────────── │
[1] │ DeepResearch: http://localhost:1337/workflows/DeepResearch │
[1] ╰────────────────────────────────────────────────────────────╯
[1]
[1] Listening for changes... 9:02:33 PM
[0] ✓ Ready in 1148ms
```
For more detailed information about the template, check out the [template's README](https://github.com/gensx-inc/deep-research-template/blob/main/README.md).
```bash
[1]
[1] > draft-pad@0.1.0 dev:gensx /Users/some-user/source/draft-pad
[1] > npx gensx@latest start ./gensx/workflows.ts
[1]
[0]
[0] > draft-pad@0.1.0 dev:next /Users/some-user/source/draft-pad
[0] > next dev -p 3100
[0]
[0] ▲ Next.js 15.3.3
[0] - Local: http://localhost:3100
[0] - Network: http://192.168.0.93:3100
[0]
[0] ✓ Starting...
[0] ✓ Ready in 942ms
[1] Starting GenSX dev server...
[1]
[1] 🚀 GenSX Dev Server running at http://localhost:1337
[1] 🧪 Swagger UI available at http://localhost:1337/swagger-ui
[1]
[1] ╭──────────────────────────────────────────────────────────╮
[1] │ Available workflows:                                     │
[1] │ ──────────────────────────────────────────────────────── │
[1] │ updateDraft: http://localhost:1337/workflows/updateDraft │
[1] ╰──────────────────────────────────────────────────────────╯
[1]
[1] Listening for changes... 9:03:22 PM
```
For more detailed information about the template, check out the [template's README](https://github.com/gensx-inc/draft-pad-template/blob/main/README.md).
```bash
[1]
[1] > @examples/client-side-tools@0.1.0 dev:gensx
[1] > npx gensx@latest start ./gensx/workflows.ts
[1]
[0]
[0] > @examples/client-side-tools@0.1.0 dev:next
[0] > NEXT_PUBLIC_MAPBOX_ACCESS_TOKEN=${MAPBOX_ACCESS_TOKEN} next dev -p 3002
[0]
[0] ▲ Next.js 15.3.3
[0] - Local: http://localhost:3002
[0] - Network: http://192.168.0.93:3002
[0]
[0] ✓ Starting...
[0] ✓ Ready in 915ms
[1] Starting GenSX dev server...
[1]
[1] 🚀 GenSX Dev Server running at http://localhost:1337
[1] 🧪 Swagger UI available at http://localhost:1337/swagger-ui
[1]
[1] ╭────────────────────────────────────────────────────╮
[1] │ Available workflows:                               │
[1] │ ────────────────────────────────────────────────── │
[1] │ MapAgent: http://localhost:1337/workflows/MapAgent │
[1] ╰────────────────────────────────────────────────────╯
[1]
[1] Listening for changes... 9:04:59 PM
```
For more detailed information about the template, check out the [template's README](https://github.com/gensx-inc/client-side-tools-template/blob/main/README.md).
And with that, you're ready to start customizing the template! You can stop here or continue on to customize and then deploy the workflows and web app template.
## Deploy the template
Now that you have the template up and running, you're ready to deploy it. You'll need to deploy both the GenSX workflows and the Next.js app.
### Deploy the GenSX workflows
You can deploy the GenSX workflows by running the following command. Make sure you've already exported the environment variables:
```bash
npm run deploy
```
You can see the full command in the `package.json` file, but under the hood it calls `gensx deploy` with the environment variables set.
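As a rough sketch, the relevant script in `package.json` might look something like this; the exact entry point and environment variable flags vary by template, so treat this as illustrative:

```json
{
  "scripts": {
    "deploy": "gensx deploy ./gensx/workflows.ts -e TAVILY_API_KEY -e ANTHROPIC_API_KEY"
  }
}
```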
### Deploy the Next.js app
You can deploy the Next.js app to [Vercel](https://vercel.com) by running the following command:
```bash
# Install Vercel CLI
npm i -g vercel
# Log in to Vercel
vercel login
# Deploy the Next.js app
vercel
```
Walk through the prompts to deploy the app. Once you finish, you'll need to set the following environment variables for the app. These are needed for the web app to call the GenSX workflows and to use GenSX Cloud storage:
```bash
vercel env add GENSX_API_KEY
vercel env add GENSX_ORG
vercel env add GENSX_PROJECT
vercel env add GENSX_ENV
```
# Streaming & React integration
AI-powered applications need to be responsive and update in real-time to keep users engaged. GenSX provides utilities and hooks that make it easy to build interactive, streaming applications on top of your workflows. These capabilities include:
- **Streaming objects** - Stream your workflow's state with `publishObject` and consume it with the `useObject` hook
- **Custom event streams** - Broadcast workflow events using `publishEvent` and consume them with the `useEvents` hook
- **Resumable streams** - Pick up exactly where you left off if connections drop or replay a stream at any time
- **Strongly-typed streaming** - Full TypeScript support for all streaming data
This guide walks you through how to build responsive UIs that update in real-time as your workflow runs. It covers both [how to stream data from your workflow](#streaming-data-from-workflows) and [how to consume it in your React app](#consuming-streaming-data-in-react).
## Streaming data from workflows
There are multiple ways to stream data from your workflows. You can stream outputs, objects, events, or any arbitrary data.
### Streaming objects
GenSX provides a `publishObject` function that allows you to stream an object to the client. It's designed for you to continually publish the latest state of an object as it gets updated. The changes will be patched to efficiently update the state on the client.
```typescript
import * as gensx from "@gensx/core";
interface Message {
role: "user" | "assistant" | "system";
content: string;
}
gensx.publishObject("messages", [
{ role: "assistant", content: "Hello, how can I help you today?" },
]);
```
You can also use `createObjectStream` to create a reusable function for publishing a given object.
```typescript
const publishMessages = gensx.createObjectStream("messages");
publishMessages([
{ role: "assistant", content: "Hello, how can I help you today?" },
]);
```
On the client side, you use the [`useObject`](#using-the-useobject-hook) hook to subscribe to the object and get the latest state.
### Streaming events
GenSX provides a `publishEvent` function that allows you to stream events to the client. It's designed for you to publish events that happen over time.
```typescript
interface ProgressEvent {
progress: "brainstorming" | "researching" | "writing" | "editing";
}
gensx.publishEvent("progress", {
progress: "brainstorming",
});
```
There is also a `createEventStream` helper available for creating a reusable function for publishing the events.
On the client side, you use the [`useEvents`](#using-the-useevents-hook) hook to subscribe to the events and get a list of events or process them in the [`onEvent`](#using-the-useworkflow-hook) callback.
### Streaming arbitrary data
GenSX also provides a lower-level `publishData` function that allows you to pass arbitrary data to the client. It's designed for you to pass data that doesn't fit into the other categories.
```typescript
interface Answer {
answer: string;
confidence: number;
}
gensx.publishData({
answer: "42",
confidence: 0.95,
});
```
Unlike `publishEvent` and `publishObject`, `publishData` does not take a label and does not have a corresponding React hook, so it needs to be consumed manually. The event will have `type: "data"` and the data will be available under the `data` property.
```json
{
"type": "data",
"data": {
"answer": "42",
"confidence": 0.95
},
"id": "1752110129329",
"timestamp": "2025-07-10T01:15:29.329Z"
}
```
### Streaming an output
We often recommend using the utilities above to stream and then returning just the final accumulated result from your workflow. That approach makes consuming the stream simpler and makes it easy to read the outputs in traces. However, GenSX also gives you the flexibility to stream the output directly.
To stream an output, just have your workflow return a `ReadableStream` or an `AsyncIterator`. Here's an example of streaming a chat response with the Vercel AI SDK:
```typescript
export const StreamingChat = gensx.Workflow(
"StreamingChat",
({ prompt }: { prompt: string }) => {
const result = streamText({
messages: [
{
role: "system",
content:
"you are a trash eating infrastructure engineer embodied as a racoon. Be sassy and fun. ",
},
{
role: "user",
content: prompt,
},
],
model: openai("gpt-4o-mini"),
});
return result.textStream;
},
);
```
When streaming is enabled via the `Accept` header, the output will be a series of events with `type: "output"` and the data in the `content` property:
```json
{
"id": "1752109017087",
"timestamp": "2025-07-10T00:56:57.087Z",
"type": "output",
// content is a string, JSON will be stringified
"content": "Hello, world!"
}
```
## Consuming streaming data in React
The `@gensx/react` library is the best way to consume streaming data from GenSX workflows.
### Setting up the passthrough API
To avoid exposing your GenSX API key to the client, we recommend setting up a passthrough API that forwards the request to the GenSX API. For brevity, we won't include the code in this guide but you can grab [the code here](https://github.com/gensx-inc/gensx/blob/main/examples/chat-ux/app/api/gensx/%5Bworkflow%5D/route.ts) as a reference.
### Using the `useWorkflow` hook
The `useWorkflow` hook lets you run a workflow and subscribe to its events and output.
```typescript
const { inProgress, error, output, execution, run, stop, clear } = useWorkflow<
ChatInput, // the input type of the workflow
ChatOutput // the output type of the workflow
>({
config: {
baseUrl: "/api/gensx/chat", // the passthrough API route
},
});
// Run the workflow
await run({
inputs: {
userMessage: "Hello, how are you?",
},
});
```
`useWorkflow` also supports callbacks for `onStart`, `onComplete`, `onError`, and `onEvent` that you can use to handle the workflow's lifecycle.
```typescript
const { error, output, execution, run } = useWorkflow({
config: {
baseUrl: "/api/gensx/chat", // the passthrough API route
},
onStart: () => {
console.log("Workflow started");
},
onComplete: () => {
console.log("Workflow completed");
},
onError: (error) => {
console.error(error);
},
onEvent: (event) => {
if (event.type === "data") {
console.log(event.data);
} else if (event.type === "event") {
console.log(event.label);
console.log(event.data);
} else if (event.type === "output") {
console.log(event.content);
}
},
});
```
### Using the `useObject` hook
The `useObject` hook lets you subscribe to an object published via `publishObject` and get its latest state.
```typescript
const messages = useObject(execution, "messages");
```
Whenever a new version of the object is published, the value will automatically be updated, making it a great way to render data in real-time. In this example, you can render the messages as they are published and the latest text will be streamed to the UI.
```tsx
{messages?.map((message, index) => (
  <div key={index}>
    {message.role}: {message.content}
  </div>
))}
```
### Using the `useEvents` hook
The `useEvents` hook lets you subscribe to events and get the latest state.
```typescript
const progressEvents = useEvents(execution, "progress");
progressEvents.forEach((event) => {
console.log(event.progress);
});
```
You can also pass a callback function to the hook to process the events as they are received.
```typescript
const progressEvents = useEvents(
execution,
"progress",
(event) => {
setState(event.progress);
},
);
```
## Additional events
In addition to the events created by `publishEvent`, `publishObject`, and `publishData`, GenSX also emits the following events:
| Event Type | Description |
| ----------------- | ---------------------------------- |
| `start` | Emitted when the workflow starts |
| `end` | Emitted when the workflow ends |
| `component-start` | Emitted when a component starts |
| `component-end` | Emitted when a component ends |
| `output` | Emitted when an output is returned |
| `error` | Emitted when an error occurs |
### Example events
```json
// start event
{
"type": "start",
"workflowName": "Chat",
"id": "1752108242902",
"timestamp": "2025-07-10T00:44:02.902Z"
}
// end event
{
"type": "end",
"id": "1752108243493",
"timestamp": "2025-07-10T00:44:03.493Z"
}
// component-start event
{
"type": "component-start",
"componentName": "StreamText",
"componentId": "StreamText:7e1339d69eee8d3d",
"id": "1752108242902",
"timestamp": "2025-07-10T00:44:02.902Z"
}
// component-end event
{
"type": "component-end",
"componentName": "StreamText",
"componentId": "StreamText:7e1339d69eee8d3d",
"id": "1752108242904",
"timestamp": "2025-07-10T00:44:02.904Z"
}
// output event
{
"id": "1752109017087",
"timestamp": "2025-07-10T00:56:57.087Z",
"type": "output",
// content is a string, JSON will be stringified
"content": "{\"message\":\"Hello, world!\"}"
}
// error event
{
"id": "1752109017087",
"timestamp": "2025-07-10T00:56:57.087Z",
"type": "error",
"error": "An error occurred"
}
```
## Consuming streaming data from the API
To consume the streaming messages from the API, you need to set the `Accept` header to `text/event-stream` or `application/x-ndjson`. The `@gensx/react` and `@gensx/client` libraries automatically set the `Accept` header for you. If you don't set the `Accept` header, only outputs will be streamed and they will be returned as a basic `application/stream`.
GenSX also allows you to resume a stream at any time by calling the `progress` API with the `lastEventId` query parameter:
```bash
curl "https://api.gensx.com/org/{orgName}/workflowExecutions/{executionId}/progress?lastEventId={lastEventId}" \
-H "Authorization: Bearer {apiKey}" \
-H "Accept: text/event-stream" # or application/x-ndjson
```
Optionally, you can omit the `lastEventId` and the entire stream will be replayed.
## Examples
The links below are end-to-end examples showing how to build streaming applications with GenSX:
- [Chat UX](https://github.com/gensx-inc/gensx/tree/main/examples/chat-ux)
- [Draft Pad](https://github.com/gensx-inc/gensx/tree/main/examples/draft-pad)
- [Deep Research](https://github.com/gensx-inc/gensx/tree/main/examples/deep-research)
# Quickstart
In this quickstart, you'll learn how to get up and running with GenSX, a simple TypeScript framework for building complex LLM applications.
## Prerequisites
Before getting started, make sure you have the following:
- [Node.js](https://nodejs.org/) version 20 or higher installed
- An [OpenAI API key](https://platform.openai.com/api-keys)
- A package manager of your choice ([npm](https://www.npmjs.com/), [yarn](https://yarnpkg.com/), or [pnpm](https://pnpm.io/))
## Install the `gensx` CLI
You can install the `gensx` CLI using your package manager of choice:
```bash
npm i -g gensx
```
Alternatively, if you prefer not to install the CLI globally, you can prefix every command in this guide with `npx`.
## Log in to GenSX Cloud (optional)
If you want to be able to visualize your workflows and view traces, you'll need to log in to GenSX Cloud. This is optional, but recommended.

To log in to GenSX Cloud, run the following command:
```bash
gensx login
```
You'll be redirected to the GenSX website and will need to create an account if you don't have one already.
Once you're logged in, you're ready to create a workflow! Workflow traces will automatically be saved to the cloud so you can visualize and debug workflow executions.
## Create a new project
To get started, run the `new` command with a project name of your choice. This will create a new GenSX project with a simple workflow to get you started.
```bash
gensx new
```
When creating a new project, you'll be prompted to select IDE rules to add to your project. These rules help AI assistants like Claude, Cursor, Cline, and Windsurf understand your GenSX project better, providing more accurate code suggestions and help.
In `src/workflows.ts`, you'll find a simple `Chat` component and workflow:
```ts
import * as gensx from "@gensx/core";
import { openai } from "@ai-sdk/openai";
import { generateText } from "@gensx/vercel-ai";
interface ChatProps {
userMessage: string;
}
const Chat = gensx.Component("Chat", async ({ userMessage }: ChatProps) => {
const result = await generateText({
model: openai("gpt-4.1-mini"),
messages: [
{
role: "system",
content: "You are a helpful assistant.",
},
{ role: "user", content: userMessage },
],
});
return result.text;
});
const ChatWorkflow = gensx.Workflow(
"ChatWorkflow",
async ({ userMessage }: ChatProps) => {
return await Chat({ userMessage });
},
);
export { ChatWorkflow };
```
This template shows the basics of building a GenSX workflow:
- Components and workflows are just pure functions that take inputs and return outputs
- You create components and workflows by calling `gensx.Component()` and `gensx.Workflow()` along with a name and a function.
- Components are the building blocks of workflows. Workflows are the entry point to your application and are what we'll deploy as an API in a few steps.
- You can use the LLM package of your choice. GenSX provides `@gensx/vercel-ai`, `@gensx/openai`, and `@gensx/anthropic` out of the box. These packages are simply wrappers around the original packages optimized for GenSX.
### Running the workflow
The project template includes a `src/index.ts` file that you can use to run the workflow:
```ts
import { ChatWorkflow } from "./workflows.js";
const result = await ChatWorkflow({
userMessage: "Hi there! Say 'Hello, World!' and nothing else.",
});
console.log(result);
```
There's nothing special here: workflows are just invoked like any other function.
To run the workflow, you'll need to set the `OPENAI_API_KEY` environment variable.
```bash
# Set the environment variable
export OPENAI_API_KEY=
# Run the project
pnpm dev
```
This will run the workflow and print the workflow's output to the console along with a URL to the trace (if you're logged in).
```bash
[GenSX] View execution at: https://console///executions/?workflowName=ChatWorkflow
Hello, World!
```
You can now view the trace for this run in GenSX Cloud by clicking the link:

The trace shows a flame graph of your workflow, including every component that executed with inputs and outputs.
Some components will be hidden by default, but you can click the caret to expand them. Clicking on a component will show you details about its inputs and outputs.
For longer running workflows, this view will update in real-time as the workflow executes.
## Running the dev server
Now that you've built your first workflow, you can easily turn it into a REST API.
GenSX provides a local development server with local REST APIs that match the shape of workflows deployed to GenSX Cloud. You can run the dev server from the CLI:
```bash
# Start the development server
gensx start src/workflows.ts
```
The development server provides several key features:
- **Hot reloading**: Changes to your code are automatically detected and recompiled
- **API endpoints**: Each workflow is exposed as a REST endpoint
- **Swagger UI**: Interactive documentation for your workflows at `http://localhost:1337/swagger-ui`
- **Local storage**: Built-in support for blob storage and databases
You'll see something like this when you start the server:
```bash
🚀 GenSX Dev Server running at http://localhost:1337
🧪 Swagger UI available at http://localhost:1337/swagger-ui
Available workflows:
- ChatWorkflow: http://localhost:1337/workflows/ChatWorkflow
Listening for changes... 10:58:55 AM
```
You can now test your workflow by sending requests to the provided URL using any HTTP client, or using the built-in Swagger UI at `http://localhost:1337/swagger-ui`.
## Deploying your project to GenSX Cloud
Now that you've tested your APIs locally, you can deploy them to the cloud. GenSX Cloud provides serverless deployment with zero configuration:
```bash
# Deploy your project to GenSX Cloud
gensx deploy src/workflows.ts -e OPENAI_API_KEY
```
This command:
1. Builds your TypeScript code for production
2. Bundles all dependencies
3. Uploads the package to GenSX Cloud
4. Creates REST API endpoints for each workflow
5. Configures serverless infrastructure
For production deployments, you can target a specific environment:
```bash
# Deploy to production environment
gensx deploy src/workflows.ts -e OPENAI_API_KEY --env production
```
### Running a workflow from the CLI
Once deployed, you can execute your workflows directly from the command line:
```bash
# Run a workflow synchronously
gensx run ChatWorkflow --input '{"userMessage":"Write a poem about an AI loving raccoon"}'
# Save the output to a file
gensx run ChatWorkflow --input '{"userMessage":"Write a haiku"}' --output result.json
```
The CLI makes it easy to test your workflows and integrate them into scripts or automation.
### Running a workflow from the GenSX console
The GenSX Cloud console provides a visual interface for managing and executing your workflows:
1. Log in to the GenSX Console
2. Navigate to your project and environment
3. Select the workflow you want to run
4. Click the "Run" button and enter your input
5. View the results directly in the console

The console also provides API documentation and code snippets for your workflows as well as execution history and tracing for all previous workflow runs.

## Improving your workflow with storage
Now that you've deployed your first workflow, you can use GenSX's cloud storage to build more sophisticated workflows. GenSX offers three types of built-in storage: blob storage, SQL databases, and full-text + vector search.
In this section, we'll add chat history using blob storage and then add in RAG using vector search.
### Chat history with blob storage
To start, we'll add chat history to our workflow. First, we need to install the `@gensx/storage` package then import the `useBlob` hook.
```bash
npm install @gensx/storage
```
```ts
import { useBlob } from "@gensx/storage";
```
Next, we need to update the interfaces for our workflow.
```ts
interface ChatProps {
userMessage: string;
threadId: string; // add thread id for tracking the history
}
// Add this interface for storing chat history
interface ChatMessage {
role: "system" | "user" | "assistant";
content: string;
}
```
Now, we're ready to update the `Chat` component to use blob storage to store chat history.
```ts
const Chat = gensx.Component(
"Chat",
async ({ userMessage, threadId }: ChatProps) => {
// Function to load chat history
const loadChatHistory = async (): Promise<ChatMessage[]> => {
const blob = useBlob(`chat-history/${threadId}.json`);
const history = await blob.getJSON();
return history ?? [];
};
// Function to save chat history
const saveChatHistory = async (messages: ChatMessage[]): Promise<void> => {
const blob = useBlob(`chat-history/${threadId}.json`);
await blob.putJSON(messages);
};
try {
// Load existing chat history
const existingMessages = await loadChatHistory();
// Add the new user message
const updatedMessages = [
...existingMessages,
{ role: "user", content: userMessage } as ChatMessage,
];
// Generate response using the model
const result = await generateText({
messages: updatedMessages,
model: openai("gpt-4.1-mini"),
});
// Add the assistant's response to the history
const finalMessages = [
...updatedMessages,
{ role: "assistant", content: result.text } as ChatMessage,
];
// Save the updated chat history
await saveChatHistory(finalMessages);
console.log(
`[Thread ${threadId}] Chat history updated with new messages`,
);
return result.text;
} catch (error) {
console.error("Error in chat processing:", error);
return `Error processing your request in thread ${threadId}. Please try again.`;
}
},
);
const ChatWorkflow = gensx.Workflow(
"ChatWorkflow",
async ({ userMessage, threadId }: ChatProps) => {
return await Chat({ userMessage, threadId });
},
);
```
When run locally, GenSX blob storage writes to the local filesystem; when you deploy the workflow, it automatically switches to cloud storage.
After you've made these updates, deploy and run the workflow again to see chat history in action.
```bash
gensx deploy src/workflows.ts -e OPENAI_API_KEY
# send an initial message to the thread
gensx run ChatWorkflow --input '{"userMessage":"Name the capital of France", "threadId":"123"}'
# use the same thread
gensx run ChatWorkflow --input '{"userMessage":"What was my previous message?", "threadId":"123"}'
```
You should see that the model remembers the previous message. You can also go to the _Blob Storage_ tab in the GenSX Cloud console and see the blob that was created.
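After those two runs, the blob at `chat-history/123.json` holds the accumulated `ChatMessage` array. The assistant replies below are illustrative, but the shape matches what `saveChatHistory` writes:

```json
[
  { "role": "user", "content": "Name the capital of France" },
  { "role": "assistant", "content": "The capital of France is Paris." },
  { "role": "user", "content": "What was my previous message?" },
  { "role": "assistant", "content": "You asked me to name the capital of France." }
]
```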
### Add RAG with vector search
Next, let's add RAG to our workflow. We'll use GenSX's [llms-full.txt](https://www.gensx.com/llms-full.txt) file and store it in GenSX's vector search.
First, we need to build a workflow that will populate the vector search namespace. Add this to your `workflows.ts` file.
```ts
import { useSearch } from "@gensx/storage";
import { embedMany } from "@gensx/vercel-ai";
export const InitializeSearch = gensx.Workflow("InitializeSearch", async () => {
// useSearch will create the namespace automatically if it doesn't exist.
const namespace = await useSearch("gensx-docs");
// Fetch content from the URL
const content = await (
await fetch("https://www.gensx.com/llms-full.txt")
).text();
// Split content on H1 headings and filter out empty sections
const documents = content
.split(/\n# /)
.map((text, i) => ({ id: `section-${i + 1}`, text: text.trim() }))
.filter((doc) => doc.text.length > 0);
// Create embeddings for the documents
const embeddings = await embedMany({
model: openai.embedding("text-embedding-3-small"),
values: documents.map((doc) => doc.text),
});
// Write the documents to the vector search namespace
await namespace.write({
upsertRows: documents.map((doc, index) => ({
id: doc.id,
vector: embeddings.embeddings[index],
text: doc.text,
})),
distanceMetric: "cosine_distance",
});
return `Search namespace initialized`;
});
```
Next, let's add a tool that will use the vector search namespace to answer questions.
```ts
import { useSearch } from "@gensx/storage";
import { tool } from "ai";
import { z } from "zod";
import { embed } from "@gensx/vercel-ai";
const tools = {
search: tool({
description: "Search the GenSX documentation",
parameters: z.object({
query: z.string().describe("the search query"),
}),
execute: async ({ query }: { query: string }) => {
const namespace = await useSearch("gensx-docs");
const { embedding } = await embed({
model: openai.embedding("text-embedding-3-small"),
value: query,
});
// Search for similar documents
const results = await namespace.query({
rankBy: ["vector", "ANN", embedding],
topK: 5,
includeAttributes: true,
});
return results;
},
}),
};
```
Finally, update the `Chat` component to use the new tool. You also need to set `maxSteps` so the SDK can process the tool calls.
```ts
const result = await generateText({
messages: updatedMessages,
model: openai("gpt-4.1-mini"),
tools,
maxSteps: 5,
});
```
Great! Now you're ready to deploy the workflows and test them:
```bash
gensx deploy src/workflows.ts -e OPENAI_API_KEY
# initialize the search namespace
gensx run InitializeSearch
# run the workflow
gensx run ChatWorkflow --input '{"userMessage":"Succinctly describe GenSX", "threadId":"789"}'
```
Now when you look at the trace, you'll see that the model called the search tool to query the GenSX documentation. The trace shows the inputs and outputs of every component, including the tool call itself.

## Learning more
Explore these resources to dive deeper into GenSX:
- [Serverless Deployments](/docs/cloud/serverless-deployments): Deploy and manage your workflows in the cloud
- [Local Development](/docs/cloud/local-development): Set up a productive local environment
- [Storage Components](/docs/component-reference/storage-components): Persistent storage for your workflows
- [Observability & Tracing](/docs/cloud/observability): Debug and monitor your workflows
- [Projects & Environments](/docs/cloud/projects-environments): Organize your deployments
Check out these example projects to see GenSX in action:
- [Blog Writer](https://github.com/gensx-inc/gensx/tree/main/examples/blog-writer)
- [Chat with Memory](https://github.com/gensx-inc/gensx/tree/main/examples/chat-memory)
- [Text to SQL](https://github.com/gensx-inc/gensx/tree/main/examples/text-to-sql)
- [RAG](https://github.com/gensx-inc/gensx/tree/main/examples/rag)
# GenSX Overview
GenSX is a simple TypeScript framework for building complex LLM applications. It's a workflow engine designed for building agents, chatbots, and long-running workflows. In addition, GenSX Cloud offers serverless hosting, cloud storage, and tracing and observability to build production-ready agents and workflows.
Workflows in GenSX are built by composing functional, reusable components in plain old TypeScript:
{/* prettier-ignore-start */}
```ts
const WriteBlog = gensx.Workflow(
"WriteBlog",
async ({ prompt }: WriteBlogInput) => {
const draft = await WriteDraft({ prompt });
const editedVersion = await EditDraft({ draft });
return editedVersion;
}
);
const result = await WriteBlog({ prompt: "Write a blog post about AI developer tools" });
console.log(result);
```
{/* prettier-ignore-end */}
Most LLM frameworks are graph-oriented, inspired by popular Python tools like Airflow. You express nodes, edges, and a global state object for your workflow. While graph APIs are highly expressive, they are also cumbersome:
- Building a mental model and visualizing the execution of a workflow from a node/edge builder is difficult.
- Global state makes refactoring difficult.
- All of this leads to low velocity when experimenting with and evolving your LLM workflows.
With GenSX, building LLM workflows is as simple as writing TypeScript functions: no graph DSL or special abstractions. You use regular language features like function composition, control flow, and recursion to express your logic. Your functions are just wrapped in the `gensx.Workflow()` and `gensx.Component()` higher-order functions so you can take advantage of all the GenSX features. To learn more about why GenSX uses components, read [Why Components?](/docs/why-components).
## GenSX Cloud
[GenSX Cloud](/docs/cloud) provides everything you need to ship production grade agents and workflows including a serverless runtime designed for long-running workloads, cloud storage to build stateful workflows and agents, and tracing and observability.
### Serverless deployments
Deploy any workflow to a hosted API endpoint with a single command. The GenSX cloud platform handles scaling, infrastructure management, and API generation automatically:
```bash
# Deploy your workflow to GenSX Cloud
$ gensx deploy ./src/workflows.tsx
```
```bash
✔ Building workflow using Docker
✔ Generating schema
✔ Successfully built project
ℹ Using project name from gensx.yaml: support-tools
✔ Deploying project to GenSX Cloud (Project: support-tools)
✔ Successfully deployed project to GenSX Cloud
Dashboard: console/support-tools/default/workflows
Available workflows:
- ChatAgent
- TextToSQLWorkflow
- RAGWorkflow
- AnalyzeDiscordWorkflow
Project: support-tools
Environment: default
```
The platform is optimized for AI workloads with millisecond-level cold starts and support for long-running executions up to an hour.
### Tracing and observability
GenSX Cloud provides comprehensive tracing and observability for all your workflows and agents.

Inputs and outputs are recorded for every component that executes in your workflows, including prompts, tool calls, and token usage. This makes it easy to debug hallucinations, prompt upgrades, and monitor costs.

### Cloud Storage
Build stateful AI applications with zero configuration using built-in storage primitives that provide managed blob storage, SQL databases, and full-text + vector search:
```ts
interface ChatWithMemoryInput {
userInput: string;
threadId: string;
}
const ChatWithMemory = gensx.Component(
"ChatWithMemory",
async ({ userInput, threadId }: ChatWithMemoryInput) => {
// Load chat history from blob storage
const blob = useBlob(`chat-history/${threadId}.json`);
const history = await blob.getJSON();
// Add new message and run LLM
// ...
// Save updated history
await blob.putJSON(updatedHistory);
return response;
},
);
```
For more details about GenSX Cloud see the [complete cloud reference](/docs/cloud).
## Reusable by default
GenSX components are pure functions, depend on zero global state, and are _reusable_ by default. Components accept inputs and return an output just like any other function.
```ts
interface ResearchTopicInput {
topic: string;
}
const ResearchTopic = gensx.Component(
"ResearchTopic",
async ({ topic }: ResearchTopicInput) => {
console.log("๐ Researching topic:", topic);
const systemPrompt = `You are a helpful assistant that researches topics...`;
const result = await openai.chat.completions.create({
model: "gpt-4.1-mini",
messages: [
{ role: "system", content: systemPrompt },
{ role: "user", content: topic },
],
});
return result.choices[0].message.content;
},
);
```
Because components are pure functions, they are easy to test and evaluate in isolation. This enables you to move quickly and experiment with the structure of your workflow.
A second order benefit of reusability is that components can be _shared_ and published to package managers like `npm`. If you build a functional component, you can make it available to the community - something that frameworks that depend on global state preclude by default.
## Composition
All GenSX components support composition through standard programming patterns. This creates a natural way to pass data between steps and organize workflows:
```ts
const WriteBlog = gensx.Workflow(
"WriteBlog",
async ({ prompt }: WriteBlogInput) => {
const research = await Research({ prompt });
const draft = await WriteDraft({ prompt, research: research.flat() });
const editedDraft = await EditDraft({ draft });
return editedDraft;
},
);
```
There is no need for a DSL or graph API to define the structure of your workflow. Nest components within components, run components in parallel, and use loops and conditionals to create complex workflows just like you would with any other TypeScript program.
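For example, fanning out over a list of items is just `Promise.all`. In this sketch a plain async function stands in for a component (names are illustrative, not part of the GenSX API):

```typescript
// Stub for an LLM-backed component: keep the first sentence of each section.
async function SummarizeSection({ text }: { text: string }): Promise<string> {
  return text.split(".")[0];
}

async function main() {
  const sections = [
    "GenSX composes workflows from components. More detail follows.",
    "Components are plain async functions. They return promises.",
  ];
  // Run the component over all sections in parallel, exactly as in any
  // other TypeScript program: no special fan-out API required.
  const summaries = await Promise.all(
    sections.map((text) => SummarizeSection({ text })),
  );
  console.log(summaries);
}

main();
```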
## Visual clarity
Workflow composition reads naturally from top to bottom like a standard programming language:
```ts
const AnalyzeHackerNewsTrends = gensx.Workflow(
"AnalyzeHackerNewsTrends",
async ({ postCount }: { postCount: number }) => {
const stories = await FetchHNPosts({ limit: postCount });
const { analyses } = await AnalyzeHNPosts({ stories });
const report = await GenerateReport({ analyses });
const editedReport = await EditReport({ content: report });
const tweet = await WriteTweet({
context: editedReport,
prompt: "Summarize the HN trends in a tweet",
});
return { report: editedReport, tweet };
},
);
```
Contrast this with graph APIs, where you need to build a mental model of the workflow from a node/edge builder:
```ts
const graph = new Graph()
.addNode("fetchHNPosts", fetchHNPosts)
.addNode("analyzeHNPosts", analyzePosts)
.addNode("generateReport", generateReport)
.addNode("editReport", editReport)
.addNode("writeTweet", writeTweet);
graph
.addEdge(START, "fetchHNPosts")
.addEdge("fetchHNPosts", "analyzeHNPosts")
.addEdge("analyzeHNPosts", "generateReport")
.addEdge("generateReport", "editReport")
.addEdge("editReport", "writeTweet")
.addEdge("writeTweet", END);
```
Using components to compose workflows makes dependencies explicit, and it is easy to see the data flow between steps. No graph DSL required.
## Designed for velocity
The GenSX programming model is optimized for speed of iteration over the long run.
The typical journey building an LLM application looks like:
1. Ship a prototype that uses a single LLM prompt.
2. Add in evals to measure progress.
3. Add in external context via RAG.
4. Break tasks down into smaller discrete LLM calls chained together to improve quality.
5. Add advanced patterns like memory and self-reflection.
Experimentation speed depends on your ability to refactor, rearrange, and inject new steps. In our experience, frameworks built around global state and a graph model slow this down.
The functional component model in GenSX supports an iterative loop that is fast on day one and on day 1000.
# Human-in-the-Loop
Some decisions require a human. GenSX lets you pause a workflow mid-execution and wait for input (approval, edits, review, anything) before continuing. No polling, no weird state machines, no extra infra.
## Basic usage
Use `requestInput` when you need to pause execution and resume later with human input. It generates a callback URL and passes it to your trigger function. You decide how to collect the input: email, Slack, custom UI, whatever.
```tsx
import { requestInput } from "@gensx/core";
const ApprovalWorkflow = gensx.Component(
"ApprovalWorkflow",
async ({ requestDetails }: { requestDetails: string }) => {
const userInput = await requestInput<{ approved: boolean; comment?: string }>(
async (callbackUrl) => {
// Your custom trigger logic here
console.log("Please provide input at:", callbackUrl);
// Example: Send to your approval system
await fetch("/api/approval-request", {
method: "POST",
body: JSON.stringify({ callbackUrl, requestDetails }),
});
}
);
if (userInput.approved) {
return `Approved! ${userInput.comment || ""}`;
} else {
return "Request was rejected";
}
}
);
```
## Slack integration
You can wire this into Slack with interactive buttons. Here's an example using `@slack/web-api`:
```tsx
import { requestInput } from "@gensx/core";
import { WebClient } from "@slack/web-api";
const slack = new WebClient(process.env.SLACK_TOKEN);
const SlackApprovalWorkflow = gensx.Component(
"SlackApprovalWorkflow",
async ({ requestDetails }: { requestDetails: string }) => {
const decision = await requestInput<{ approved: boolean; reason?: string }>(
async (callbackUrl) => {
await slack.chat.postMessage({
channel: "#approvals",
text: `New approval request: ${requestDetails}`,
blocks: [
{
type: "section",
text: {
type: "mrkdwn",
text: `*Approval Request*\n${requestDetails}`
}
},
{
type: "actions",
elements: [
{
type: "button",
text: { type: "plain_text", text: "Approve" },
style: "primary",
url: `${callbackUrl}?approved=true`
},
{
type: "button",
text: { type: "plain_text", text: "Reject" },
style: "danger",
url: `${callbackUrl}?approved=false`
}
]
}
]
});
}
);
return decision;
}
);
```
## Web interface integration
For apps with a UI, just store the callback and surface it wherever makes sense:
```tsx
import { requestInput } from "@gensx/core";
const WebApprovalWorkflow = gensx.Component(
"WebApprovalWorkflow",
async ({ taskId }: { taskId: string }) => {
const approval = await requestInput<{ approved: boolean; notes: string }>(
async (callbackUrl) => {
// Store in database for web interface to display
await db.pendingApprovals.create({
data: {
taskId,
callbackUrl,
status: "pending",
createdAt: new Date(),
}
});
// Send notification
await sendNotification({
type: "approval_needed",
taskId,
message: `Task ${taskId} requires approval`
});
}
);
return approval;
}
);
```
### Performing callback from your system
Here's what calling back into GenSX looks like from your API:
```tsx
// app/api/approval/[taskId]/route.ts
import { NextRequest, NextResponse } from "next/server";
export async function POST(
request: NextRequest,
{ params }: { params: { taskId: string } }
) {
const { approved, notes } = await request.json();
// Get the stored callback URL
const approval = await db.pendingApprovals.findUnique({
where: { taskId: params.taskId }
});
if (!approval) {
return NextResponse.json({ error: "Approval not found" }, { status: 404 });
}
// Call the GenSX callback URL
const response = await fetch(approval.callbackUrl, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ approved, notes })
});
if (response.ok) {
await db.pendingApprovals.update({
where: { taskId: params.taskId },
data: { status: "completed" }
});
return NextResponse.json({ success: true });
} else {
return NextResponse.json({ error: "Failed to submit approval" }, { status: 500 });
}
}
```
## Error handling
If your trigger fails (e.g. Slack down, webhook times out), you're still in control:
```tsx
const RobustApprovalWorkflow = gensx.Component(
"RobustApprovalWorkflow",
async ({ request }: { request: string }) => {
try {
const result = await requestInput<{ approved: boolean }>(
async (callbackUrl) => {
// Handle errors in sending the approval request
try {
await sendApprovalRequest(callbackUrl, request);
} catch (error) {
console.error("Failed to send approval request:", error);
// You might want to store this for retry logic
throw error;
}
}
);
return result;
} catch (error) {
return { approved: false, error: "Failed to send approval request" };
}
}
);
```
## Type safety
Use Zod (or your favorite schema lib) to validate input:
```tsx
import { z } from "zod";
const ApprovalInputSchema = z.object({
approved: z.boolean(),
comment: z.string().optional(),
approver: z.string(),
timestamp: z.date()
});
type ApprovalInput = z.infer<typeof ApprovalInputSchema>;
const TypedApprovalWorkflow = gensx.Component(
"TypedApprovalWorkflow",
async () => {
const input = await requestInput<ApprovalInput>(
async (callbackUrl) => {
await sendTypedApprovalRequest(callbackUrl);
}
);
return `Approved by ${input.approver} at ${input.timestamp}`;
}
);
```
## How it works
Behind the scenes, `requestInput`:
1. Generates a callback URL tied to the current execution node
2. Passes it to your trigger function
3. Pauses the workflow
4. Resumes once the callback receives input
The callback URL format is:
```
${process.env.GENSX_API_BASE_URL}/org/${process.env.GENSX_ORG}/workflowExecutions/${process.env.GENSX_EXECUTION_ID}/fulfill/${nodeId}
```
# Durable Execution
Build workflows that don't break. GenSX gives you first-class primitives for pause/resume, retries, and human input, without losing state or writing scaffolding code.
Durable execution is built in. State is preserved automatically, workflows resume after failure, and you can checkpoint anywhere in the flow to rewind with new input or retry logic.
---
## State preservation
GenSX automatically snapshots all workflow state between component steps. If the process crashes or restarts, it picks up right where it left off:
```ts
const StatefulWorkflow = gensx.Component(
"StatefulWorkflow",
async ({ userId }: { userId: string }) => {
const userData = await fetchUserData(userId);
const processedData = await processUserData(userData);
const approval = await requestInput<{ approved: boolean }>(
async (callbackUrl) => {
await sendApprovalRequest(callbackUrl, processedData);
}
);
if (approval.approved) {
// If the program crashes during the finalizeUserData component, it can recover and resume without needing to re-fetch human approval.
return await finalizeUserData(processedData);
}
return "User data processing cancelled";
}
);
```
---
## Human-in-the-loop workflows
Need to wait for someone to click a button, review a doc, or approve a change? GenSX makes it easy to pause execution until human input arrives, with zero polling or timers.
```ts
const LongRunningApproval = gensx.Component(
"LongRunningApproval",
async ({ requestId }: { requestId: string }) => {
const approval = await requestInput<{ approved: boolean; notes: string }>(
async (callbackUrl) => {
await scheduleApprovalRequest(callbackUrl, requestId);
}
);
if (approval.approved) {
return await processApprovedRequest(requestId, approval.notes);
}
return "Request denied";
}
);
```
Execution can wait hours, days, or weeks: no resource usage, no expiration.
---
## Error handling and recovery
Handle failures the same way you'd write robust async code: `try/catch` still works, but now it's durable:
```ts
const RobustWorkflow = gensx.Component(
"RobustWorkflow",
async ({ items }: { items: any[] }) => {
const results = [];
const errors = [];
for (const item of items) {
try {
const result = await ProcessItem({ item });
results.push(result);
} catch (error) {
errors.push({ item, error: error.message });
}
}
return { results, errors };
}
);
```
Failures are isolated and recoverable across runs. No work is lost unless you want it to be.
---
## Deterministic execution
For replay to work, everything outside a component must be deterministic. That means:
* **No `Date.now()` or `Math.random()`** outside components
* **No side effects** during render
* **Inputs must match** exactly between runs
```ts
// ❌ Breaks replay
const BadWorkflow = gensx.Component("Bad", async () => {
return await ProcessData({
timestamp: Date.now(),
randomId: Math.random()
});
});
// ✅ Safe for replay
const GoodWorkflow = gensx.Component(
"Good",
async ({ timestamp, randomId }) => {
return await ProcessData({ timestamp, randomId });
}
);
```
### Component isolation
Non-determinism is allowed inside components. That's where GenSX captures and preserves execution.
```ts
const SafeComponent = gensx.Component(
"SafeComponent",
async ({ userId }: { userId: string }) => {
const now = Date.now(); // ✅ Safe here
const requestId = Math.random().toString(36);
const response = await fetch(`/api/users/${userId}`, {
headers: { 'X-Request-ID': requestId }
});
return await response.json();
}
);
```
---
## Checkpoint restoration
Sometimes you need to go back and do something over, with feedback. Checkpoints let you do exactly that:
### How it works
1. Call `createCheckpoint()` to snapshot the current point
2. Resume the workflow later with `restore(feedback)`
3. Execution jumps back to the checkpoint line and continues
```ts
const CheckpointWorkflow = gensx.Component(
"CheckpointWorkflow",
async ({ data }) => {
const { restore, feedback } = createCheckpoint();
if (feedback) {
return await processDataWithFeedback(data, feedback);
}
const result = await processData(data);
if (result.needsReview) {
await restore({ message: "Needs review", result });
}
return result;
}
);
```
The line after `restore()` is never reached.
### Checkpoint limits
You can prevent infinite retries with `maxRestores`:
```ts
const LimitedCheckpoint = gensx.Component("Retry", async () => {
const { restore, feedback } = createCheckpoint(
{ label: "retry" },
{ maxRestores: 3 }
);
if ((feedback?.attempt ?? 0) >= 3) {
throw new Error("Maximum retries exceeded");
}
const result = await processWithRetry();
if (!result.success) {
await restore({
attempt: (feedback?.attempt || 0) + 1,
error: result.error
});
}
return result;
});
```
---
## Use cases
Checkpoint restoration is a powerful escape hatch for:
* Fixing agent dead ends
* Iterative improvement
* A/B testing
* Human-in-the-loop retries
* Structured recovery after failures
```ts
const HumanReviewWorkflow = gensx.Component(
"HumanReview",
async ({ document }) => {
const { restore, feedback } = createCheckpoint({ label: "review" });
if (feedback) {
return feedback.approved
? await finalizeDocument(document, feedback.changes)
: await reviseDocument(document, feedback.changes);
}
const draft = await generateDocument(document);
const review = await requestInput<{ approved: boolean; changes: any }>(
async (callbackUrl) => {
await sendForReview(draft, callbackUrl);
}
);
await restore(review);
}
);
```
---
## Monitoring and observability
Every execution is tracked in the GenSX console:
* Timeline of each step
* Inputs and outputs for each component
* Checkpoint/restore history
---
## Best practices
* Keep all nondeterminism inside components
* Pass time and IDs as props
* Use checkpoints instead of custom rollback logic
* Don't fear retries; durability is cheap
---
## Related docs
* [Human-in-the-Loop](/human-in-the-loop)
* [Client-Side Tools](/client-side-tools)
* [GenSX Cloud](/cloud)
# Client-Side Tools
Client-side tools are for cases where a tool can't run on the same system that's calling the LLM API. Instead, the tool needs to execute in the user's environment: a browser, a desktop app, or any other remote client. Use this when the function needs to touch client-only APIs (e.g. geolocation), update UI state, search through code on a user's local machine, or interact with something the server can't access directly.
## Basic setup
Client-side tools work by having your workflow emit `external-tool` messages that the React `useWorkflow` hook intercepts and executes locally.
### 1. Define your tools
Start by defining your tools with schemas:
```ts
// tools/toolbox.ts
import { createToolBox } from "@gensx/core";
import { z } from "zod";
export const toolbox = createToolBox({
getUserLocation: {
description: "Get the user's current location using browser geolocation",
params: z.object({
enableHighAccuracy: z.boolean().optional(),
timeout: z.number().optional(),
}),
result: z.object({
latitude: z.number(),
longitude: z.number(),
accuracy: z.number(),
}),
},
moveMap: {
description: "Move the map to a specific location",
params: z.object({
latitude: z.number(),
longitude: z.number(),
zoom: z.number().optional(),
}),
result: z.object({
success: z.boolean(),
message: z.string(),
}),
},
});
```
### 2. Implement tool functions
Create React hooks that implement the tool logic:
```ts
// hooks/useMapTools.ts
import { useState } from "react";
import { ToolImplementations } from "@gensx/core";
import { toolbox } from "../tools/toolbox";
export function useMapTools() {
const [mapState, setMapState] = useState({
latitude: 37.7749,
longitude: -122.4194,
zoom: 12,
});
const toolImplementations: ToolImplementations<typeof toolbox> = {
getUserLocation: {
execute: async (params) => {
return new Promise((resolve, reject) => {
if (!navigator.geolocation) {
reject(new Error("Geolocation not supported"));
return;
}
navigator.geolocation.getCurrentPosition(
(position) => {
resolve({
latitude: position.coords.latitude,
longitude: position.coords.longitude,
accuracy: position.coords.accuracy,
});
},
(error) => reject(error),
{
enableHighAccuracy: params.enableHighAccuracy ?? false,
timeout: params.timeout ?? 10000,
}
);
});
},
},
moveMap: {
execute: async (params) => {
setMapState({
latitude: params.latitude,
longitude: params.longitude,
zoom: params.zoom ?? 12,
});
return { success: true, message: "Map moved" };
},
},
};
return { toolImplementations, mapState };
}
```
### 3. Connect tools to workflow
Use the `useWorkflow` hook with your tool implementations:
```tsx
// components/ChatInterface.tsx
import { useWorkflow } from "@gensx/react";
import { useMapTools } from "../hooks/useMapTools";
export function ChatInterface() {
const { toolImplementations } = useMapTools();
const workflow = useWorkflow({
config: {
baseUrl: "/api/gensx",
},
tools: toolImplementations,
});
const sendMessage = async (message: string) => {
await workflow.run({
inputs: { message },
});
};
return (
{/* Your chat interface */}
);
}
```
## Using with AI SDK
Client-side tools work seamlessly with LLM calls. Use `asToolSet` to convert your toolbox to AI SDK format:
```ts
// workflows/mapWorkflow.ts
import { Agent } from "./agent";
import { anthropic } from "@ai-sdk/anthropic";
import { asToolSet } from "@gensx/vercel-ai";
import { tool } from "ai";
import { z } from "zod";
import { toolbox } from "../tools/toolbox";
// Server-side tools
const geocodeTool = tool({
description: "Geocode a location from an address",
parameters: z.object({
address: z.string(),
}),
execute: async ({ address }) => {
const response = await fetch(`https://nominatim.openstreetmap.org/search?q=${address}&format=json`);
return await response.json();
},
});
const MapWorkflow = gensx.Component(
"MapWorkflow",
async ({ userMessage }: { userMessage: string }) => {
// Combine server-side and client-side tools
const tools = {
geocode: geocodeTool, // Server-side geocoding
...asToolSet(toolbox), // Client-side map tools
};
const model = anthropic("claude-3-5-sonnet-20240620");
const result = await Agent({
messages: [
{
role: "system",
content: `You are a geographic assistant with access to:
- geocode: Convert addresses to coordinates (server-side)
- getUserLocation: Get user's current location (client-side)
- moveMap: Move the map view (client-side)
When users ask about locations, use geocode to find coordinates,
then use moveMap to show them on the map.`,
},
{
role: "user",
content: userMessage,
},
],
tools,
model,
});
return result;
}
);
```
## Using with OpenAI SDK
You can also use client-side tools with OpenAI models:
```ts
// workflows/openaiMapWorkflow.ts
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { z } from "zod";
import { executeExternalTool } from "@gensx/core";
import { toolbox } from "../tools/toolbox";
const OpenAIMapWorkflow = gensx.Component(
"OpenAIMapWorkflow",
async ({ userMessage }: { userMessage: string }) => {
const result = await generateText({
model: openai("gpt-4o"),
messages: [
{
role: "system",
content: "You are a location-aware assistant that can move maps and get user locations.",
},
{
role: "user",
content: userMessage,
},
],
tools: {
moveMap: {
description: "Move the map to show a specific location",
parameters: z.object({
latitude: z.number(),
longitude: z.number(),
zoom: z.number().optional(),
}),
execute: async (params) => {
return await executeExternalTool(toolbox, "moveMap", params);
},
},
getUserLocation: {
description: "Get the user's current location",
parameters: z.object({
enableHighAccuracy: z.boolean().optional(),
}),
execute: async (params) => {
return await executeExternalTool(toolbox, "getUserLocation", params);
},
},
},
});
return result.text;
}
);
```
## Advanced patterns
### LLM-driven tool selection
Let the LLM decide which tools to use based on user queries:
```ts
const SmartMapWorkflow = gensx.Component(
"SmartMapWorkflow",
async ({ userMessage }: { userMessage: string }) => {
const tools = {
webSearch: webSearchTool,
...asToolSet(toolbox),
};
const model = anthropic("claude-3-5-sonnet-20240620");
const result = await Agent({
messages: [
{
role: "system",
content: `You are a smart geographic assistant. Based on user queries:
For location queries ("Where is X?"):
1. Use webSearch to find information about the location
2. Use moveMap to center the map on the location
For "near me" queries:
1. Use getUserLocation to get their current position
2. Use webSearch to find places near them
3. Use moveMap to show results`,
},
{
role: "user",
content: userMessage,
},
],
tools,
model,
});
return result;
}
);
```
### Tool result validation
Validate client-side tool results before using them:
```ts
const ValidatedToolWorkflow = gensx.Component(
"ValidatedToolWorkflow",
async ({ userMessage }: { userMessage: string }) => {
const LocationSchema = z.object({
latitude: z.number().min(-90).max(90),
longitude: z.number().min(-180).max(180),
accuracy: z.number().positive(),
});
const tools = {
getUserLocation: {
description: "Get user's current location",
parameters: z.object({
enableHighAccuracy: z.boolean().optional(),
}),
execute: async (params) => {
const result = await executeExternalTool(toolbox, "getUserLocation", params);
return LocationSchema.parse(result); // Validate before returning
},
},
};
const model = anthropic("claude-3-5-sonnet-20240620");
const result = await Agent({
messages: [
{
role: "system",
content: "You are a location-aware assistant with validated location data.",
},
{
role: "user",
content: userMessage,
},
],
tools,
model,
});
return result;
}
);
```
## Best practices
### Optimized tool descriptions
Write clear, specific descriptions to help LLMs use tools efficiently:
```ts
const optimizedToolbox = createToolBox({
moveMap: {
description: "Move the map to center on specific coordinates. Use this when showing locations to the user.",
params: z.object({
latitude: z.number().describe("Latitude coordinate (-90 to 90)"),
longitude: z.number().describe("Longitude coordinate (-180 to 180)"),
zoom: z.number().optional().describe("Zoom level (1-20, default 12)"),
}),
result: z.object({
success: z.boolean(),
message: z.string(),
}),
},
getUserLocation: {
description: "Get the user's current location using browser geolocation. Only call when you need their current position.",
params: z.object({
enableHighAccuracy: z.boolean().optional().describe("Request high accuracy (uses more battery)"),
}),
result: z.object({
latitude: z.number(),
longitude: z.number(),
accuracy: z.number().describe("Accuracy in meters"),
}),
},
});
```
## Complete example
Check out the full implementation in the [client-side-tools example](https://github.com/gensx-inc/gensx/tree/main/examples/client-side-tools), which demonstrates:
- Map-based chat interface
- Real-time tool execution
- Geolocation and geocoding tools
- Type-safe tool definitions
- Error handling and fallbacks
# Basic concepts
GenSX is a simple TypeScript framework for building complex LLM applications. It's built around functional, reusable components that are composed to create and orchestrate workflows.
## Components
Components are the building blocks of GenSX applications; they're pure TypeScript functions that:
- Accept an input and produce an output
- Don't depend on global state
- Are strongly typed using TypeScript
You can also think of components as a unit of [tracing](/docs/cloud/observability): the inputs and outputs of components are recorded and traced, making it easy to understand how data flows through your workflow.
Here's an example of a simple component:
```ts
interface GreetUserInput {
name: string;
}
const GreetUser = gensx.Component(
"GreetUser",
async ({ name }: GreetUserInput) => {
return `Hello, ${name}!`;
},
);
```
At first glance, this syntax may seem a bit complex, but in reality, you're simply passing a name and a function to `gensx.Component()`, a higher-order function that returns a component. Another way to write the code above is:
```ts
function greetUser({ name }: GreetUserInput) {
return `Hello, ${name}!`;
}
const GreetUser = gensx.Component("GreetUser", greetUser);
```
Components can be used like any other function in TypeScript. They can consume other components, return other components, and return any type of data. Here's an example of a component that uses `generateText` from the Vercel AI SDK to call an LLM:
```ts
import { generateText } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
const GreetUser = gensx.Component(
"GreetUser",
async ({ name }: GreetUserInput) => {
const result = await generateText({
model: openai("gpt-4.1-mini"),
messages: [
{
role: "system",
content: "You are a friendly assistant that greets people warmly.",
},
{ role: "user", content: `Write a greeting for ${name}.` },
],
});
return result.text;
},
);
```
To run a component, you just call it like a function:
```ts
const result = await GreetUser({ name: "John" });
```
## Workflows
Workflows are a special type of component in GenSX. While components are the re-usable building blocks of your application, workflows are the top-level components that handle the orchestration and serve as the entry point for your application.
Each workflow creates a top level [trace](/docs/cloud/observability) that shows all components that were executed along with their inputs and outputs.
Workflows are created in almost the same way as components:
```ts
const WriteBlog = gensx.Workflow(
"WriteBlog",
async ({ title, description }: WriteBlogInput) => {
const queries = await GenerateQueries({
title,
description,
});
const research = await ResearchBlog({ queries });
const draft = await WriteDraft({ title, context: research });
const final = await EditDraft({ title, content: draft });
return final;
},
);
```
Here you can see that components are executed sequentially, with the output of each component being passed as the input to the next component. Of course, you can also execute components in parallel using `Promise.all`:
```ts
const CreateContent = gensx.Workflow(
"CreateContent",
async ({ title, description }: CreateContentInput) => {
// Create the different assets in parallel
const context = await GatherContext({ title, description });
const [blog, tweet, email] = await Promise.all([
WriteBlog({ title, description, context }),
WriteTweet({ title, description, context }),
WriteEmail({ title, description, context }),
]);
return { blog, tweet, email };
},
);
```
Just like components, workflows are invoked like any other function:
```ts
const title = "How AI and agents broke modern infra";
const description = "...";
const result = await CreateContent({ title, description });
```
Another special property of workflows is that any workflow exported from your `workflows.ts` file is automatically turned into an API endpoint that can be called both synchronously and asynchronously. More details on that [here](/docs/cloud/serverless-deployments).
## Component Isolation
Because workflows are just components, you can run and evaluate them in isolation, making it easy to debug and verify individual steps of your workflow. This is particularly valuable when building complex LLM applications that need robust evaluation.
Rather than having to run an entire workflow to test a change to a single component, you can test just that component, dramatically speeding up your dev loop. This isolation also makes unit testing more manageable, as you can create specific test cases without having to worry about the rest of the workflow.
# Using tools with GenSX
Workflows often require LLMs to interact with external systems or perform specific actions, and giving tools to LLMs is a powerful way to accomplish that.
This guide will show examples of using tools with both [@gensx/openai](../component-reference/openai.mdx) and [@gensx/vercel-ai](../component-reference/vercel-ai.mdx). You can also find similar examples in [OpenAI examples](https://github.com/gensx-inc/gensx/tree/main/examples/openai-examples) and [Vercel AI examples](https://github.com/gensx-inc/gensx/tree/main/examples/vercel-ai) in the GitHub repo.
## Tools with the Vercel AI SDK
The [`@gensx/vercel-ai`](../component-reference/vercel-ai.mdx) package provides a simple way to define and use tools with LLMs.
### Defining a tool
Start by defining your tool using the `tool` helper from the Vercel AI SDK:
```ts
import { tool } from "ai";
import { z } from "zod";
const weatherTool = tool({
description: "Get the weather in a location",
parameters: z.object({
location: z.string().describe("The location to get the weather for"),
}),
execute: async ({ location }: { location: string }) => {
console.log("Executing weather tool with location:", location);
await new Promise((resolve) => setTimeout(resolve, 100));
return {
location,
temperature: 72 + Math.floor(Math.random() * 21) - 10,
};
},
});
```
### Using tools with `generateText`
You can use tools with the `generateText` function by passing them in the `tools` prop:
```ts
const WeatherAssistant = gensx.Component(
"WeatherAssistant",
async ({ prompt }: { prompt: string }) => {
const result = await generateText({
messages: [
{
role: "system",
content: "You're a helpful, friendly weather assistant.",
},
{
role: "user",
content: prompt,
},
],
model: openai("gpt-4o-mini"),
tools: { weather: weatherTool },
maxSteps: 5,
});
return result.text;
},
);
```
### Using tools with streaming
You can also use tools with streaming responses using `streamText`:
```ts
const StreamingWeatherAssistant = gensx.Component(
"StreamingWeatherAssistant",
({ prompt }: { prompt: string }) => {
const result = streamText({
messages: [
{
role: "system",
content: "You're a helpful, friendly weather assistant.",
},
{
role: "user",
content: prompt,
},
],
model: openai("gpt-4o-mini"),
tools: { weather: weatherTool },
maxSteps: 5,
});
const generator = async function* () {
for await (const chunk of result.textStream) {
yield chunk;
}
};
return generator();
},
);
```
## Tools with the OpenAI SDK
You can also use the [`@gensx/openai`](../component-reference/openai.mdx) package to work with tools using OpenAI's native tool-calling capabilities.
### Defining a tool
Define your tool:
```ts
const weatherTool = {
type: "function" as const,
function: {
name: "get_weather",
description: "get the weather for a given location",
parameters: {
type: "object",
properties: {
location: {
type: "string",
description: "The location to get the weather for",
},
},
required: ["location"],
},
parse: JSON.parse,
function: (args: { location: string }) => {
console.log("getting weather for", args.location);
const weather = ["sunny", "cloudy", "rainy", "snowy"];
return {
weather: weather[Math.floor(Math.random() * weather.length)],
};
},
},
};
```
### Using tools with `runTools`
The OpenAI SDK provides a `runTools` method that handles the tool calling process:
```ts
const WeatherAssistant = gensx.Component(
"WeatherAssistant",
async ({ prompt }: { prompt: string }) => {
const result = await openai.beta.chat.completions.runTools({
model: "gpt-4.1-mini",
messages: [
{
role: "system",
content: "You're a helpful, friendly weather assistant.",
},
{
role: "user",
content: prompt,
},
],
tools: [weatherTool],
});
return await result.finalContent();
},
);
```
### Using tools with streaming
You can also use tools with streaming responses:
```ts
const StreamingWeatherAssistant = gensx.Component(
"StreamingWeatherAssistant",
async ({ prompt }: { prompt: string }) => {
const result = await openai.beta.chat.completions.runTools({
model: "gpt-4.1-mini",
messages: [
{
role: "system",
content: "You're a helpful, friendly weather assistant.",
},
{
role: "user",
content: prompt,
},
],
tools: [weatherTool],
stream: true,
});
return result;
},
);
```
Then to consume the output of the component, you would do the following:
```ts
const streamToolsResult = await StreamingWeatherAssistant({
prompt,
});
for await (const chunk of streamToolsResult) {
process.stdout.write(chunk.choices[0].delta.content ?? "");
}
```
## Resources
For more examples of using tools with GenSX, see the following examples:
- [Vercel AI SDK tools example](https://github.com/gensx-inc/gensx/blob/main/examples/vercel-ai)
- [OpenAI tools example](https://github.com/gensx-inc/gensx/blob/main/examples/openai-examples)
# Structured outputs
Workflows regularly require getting structured outputs (JSON) from LLMs. This guide shows how to use structured outputs with both [@gensx/openai](/docs/components/openai) and [@gensx/vercel-ai](/docs/components/vercel-ai). You can also find similar examples in [OpenAI examples](https://github.com/gensx-inc/gensx/tree/main/examples/openai-examples) and [Vercel AI examples](https://github.com/gensx-inc/gensx/tree/main/examples/vercel-ai) in the GitHub repo.
## Structured outputs with the Vercel AI SDK
The [`@gensx/vercel-ai`](/docs/components/vercel-ai) package provides two ways to get structured outputs from LLMs: `generateObject` and `streamObject`. Two key benefits of using the [Vercel AI SDK](https://ai-sdk.dev/docs/ai-sdk-core/generating-structured-data) are that you can stream the output as it's generated, and you can use it with models that don't natively support structured outputs.
### Using `generateObject`
Start by defining the Zod schema for the output format you want:
```ts
import { z } from "zod";
const ExtractEntitiesSchema = z.object({
people: z.array(z.string()),
places: z.array(z.string()),
organizations: z.array(z.string()),
});
```
Then define the component and set the `schema` param on the `generateObject` function:
```ts
interface ExtractEntitiesInput {
text: string;
}
const ExtractEntities = gensx.Component(
"ExtractEntities",
  async ({ text }: ExtractEntitiesInput) => {
    const prompt = `Please review the following text and extract all the people, places, and organizations mentioned.
    ${text}
    Please return JSON with the following format:
    {
      "people": ["person1", "person2", "person3"],
      "places": ["place1", "place2", "place3"],
      "organizations": ["org1", "org2", "org3"]
    }`;
    const result = await generateObject({
model: openai("gpt-4o-mini"),
schema: ExtractEntitiesSchema,
messages: [{ role: "user", content: prompt }],
});
return result.object;
},
);
```
When you run this component, it will return the structured output directly matching the type of the `ExtractEntitiesSchema` with no extra parsing required.
### Using `streamObject`
The `streamObject` function is similar to `generateObject` but it streams the output as it's generated.
```ts
const ExtractEntitiesStreaming = gensx.Component(
"ExtractEntitiesStreaming",
({ text }: ExtractEntitiesInput) => {
const prompt = `Please review the following text and extract all the people, places, and organizations mentioned.
${text}
Please return JSON with the following format:
{
"people": ["person1", "person2", "person3"],
"places": ["place1", "place2", "place3"],
"organizations": ["org1", "org2", "org3"]
}`;
const result = streamObject({
model: openai("gpt-4o-mini"),
schema: ExtractEntitiesSchema,
messages: [{ role: "user", content: prompt }],
});
const generator = async function* () {
for await (const chunk of result.partialObjectStream) {
yield chunk;
}
};
return generator();
},
);
```
Then you can consume the output of the component like this:
```ts
const structuredStreamResult = await ExtractEntitiesStreaming({
text: "John Doe is a software engineer at Google.",
});
console.log("Response:");
for await (const chunk of structuredStreamResult) {
console.clear();
console.log(chunk);
}
```
## Structured outputs with the OpenAI SDK
You can also use the [`@gensx/openai`](/docs/components/openai) package to get structured outputs from OpenAI models and compatible APIs.
The OpenAI SDK provides a `parse` method that you can use to automatically parse the response of the structured output.
Here's an example of how to extract entities using the OpenAI SDK:
```ts
import { OpenAI } from "@gensx/openai";
import { zodResponseFormat } from "openai/helpers/zod.mjs";
import { z } from "zod";
const openai = new OpenAI();
const EntityExtractionSchema = z.object({
people: z.array(z.string()).describe("List of people mentioned in the text"),
places: z.array(z.string()).describe("List of places mentioned in the text"),
organizations: z
.array(z.string())
.describe("List of organizations mentioned in the text"),
});
const ExtractEntities = gensx.Component(
"ExtractEntities",
async ({ text }: { text: string }) => {
const result = await openai.beta.chat.completions.parse({
      model: "gpt-4o",
messages: [
{
role: "user",
content: `Please extract all people, places, and organizations from this text:\n\n${text}`,
},
],
response_format: zodResponseFormat(
EntityExtractionSchema,
"entityExtraction",
),
});
return result.choices[0].message.parsed!;
},
);
```
When you run this component, it will return a typed object matching the `EntityExtractionSchema` structure. The `parse` method ensures that the response is properly validated against your schema before returning it.
You can use it like this:
```ts
const result = await ExtractEntities({
text: "John works at Google in New York City.",
});
console.log(result);
// Output:
// {
// people: ["John"],
// places: ["New York City"],
// organizations: ["Google"]
// }
```
Alternatively, you can just call `openai.chat.completions.create` and then parse the JSON response yourself with Zod:
```ts
const parsed = EntityExtractionSchema.parse(
  JSON.parse(result.choices[0].message.content!),
);
```
# Self-reflection
Self-reflection is a common prompting technique used to improve the outputs of LLMs. With self-reflection, an LLM is used to evaluate its own output and then improve it, similar to how humans would review and edit their own work.
Self-reflection works well because it's easy for LLMs to make mistakes. LLMs predict tokens one after the next, so a single bad token choice can create a cascading effect. Self-reflection lets the model evaluate the output in its entirety, giving it a chance to catch and correct any mistakes.
## Self-reflection in GenSX
The nested approach to creating GenSX workflows might make it seem difficult to implement looping patterns like self-reflection. However, GenSX allows you to express dynamic, programmatic trees giving you all the flexibility you need.
The [reflection example](https://github.com/gensx-inc/gensx/tree/main/examples/reflection) implements a `Reflection` component that you can use to implement self-reflection in your GenSX workflows.
To implement self-reflection, you'll need:
1. **An evaluation component** that assesses the output and provides feedback
2. **An improvement component** that processes the input using the feedback to create a better output
The output you want to improve becomes the `input` to the reflection component itself. You can choose to run a single round of self-reflection or multiple rounds to iteratively refine the output, based on your scenario.
The `Reflection` component does the following:
1. It calls the evaluation component (`EvaluateFn`) to review the current output and determine if further improvements are needed.
2. If feedback suggests more changes, it runs the improvement component (`ImproveFn`) to revise the output based on that feedback.
3. This process repeats, evaluating and improving, until either the maximum number of iterations (`maxIterations`) is reached or the evaluation component decides no further changes are necessary.
Here's the implementation of the `Reflection` component:
```tsx
interface ReflectionProps<TInput> {
  // The initial input to process
  input: TInput;
  // Component to process the input and generate new output
  ImproveFn: (props: { input: TInput; feedback: string }) => Promise<TInput>;
  // Component to evaluate if we should continue processing and provide feedback
  EvaluateFn: (props: {
    input: TInput;
  }) => Promise<{ feedback: string; continueProcessing: boolean }>;
  // Maximum number of iterations allowed
  maxIterations?: number;
}

const Reflection = gensx.Component(
  "Reflection",
  async <TInput,>({
    input,
    ImproveFn,
    EvaluateFn,
    maxIterations = 3,
  }: ReflectionProps<TInput>): Promise<TInput> => {
let currentInput = input;
let iteration = 0;
while (iteration < maxIterations) {
// Check if we should continue processing
const { feedback, continueProcessing } = await EvaluateFn({
input: currentInput,
});
if (!continueProcessing) {
break;
}
// Process the input
currentInput = await ImproveFn({ input: currentInput, feedback });
iteration++;
}
// Return the final input when we're done processing
return currentInput;
},
);
```
## Implementing self-reflection
Now that you've seen the pattern and the helper component for doing self-reflection, let's implement it. The example below shows how to use the `Reflection` component to evaluate and improve text.
### Step 1: Define the evaluation component
First, you need to define the component that will be used to evaluate the text. The evaluation component needs to return a string, `feedback`, and a boolean, `continueProcessing`.
To get good results, you'll need to provide useful instructions on what feedback to provide. In this example, we focus on trying to make the text sound more authentic and less AI-generated.
```tsx
const EvaluateText = gensx.Component(
"EvaluateText",
  async ({ input }: { input: string }): Promise<{ feedback: string; continueProcessing: boolean }> => {
const systemPrompt = `You're a helpful assistant that evaluates text and suggests improvements if needed.
## Evaluation Criteria
- Check for genuine language: flag any buzzwords, corporate jargon, or empty phrases like "cutting-edge solutions"
- Look for clear, natural expression: mark instances of flowery language or clichéd openers like "In today's landscape..."
- Review word choice: highlight where simpler alternatives could replace complex or technical terms
- Assess authenticity: note when writing tries to "sell" rather than inform clearly and factually
- Evaluate tone: identify where the writing becomes overly formal instead of warm and conversational
- Consider flow and engagement - flag where transitions feel choppy or content becomes dry and predictable
## Output Format
Return your response as JSON with the following two properties:
- feedback: A string describing the improvements that can be made to the text. Return feedback as short bullet points. If no improvements are needed, return an empty string.
- continueProcessing: A boolean indicating whether the text should be improved further. If no improvements are needed, return false.
You will be given a piece of text. Your job is to evaluate the text and return a JSON object with the following format:
{
"feedback": "string",
"continueProcessing": "boolean"
}
`;
const result = await generateObject({
model: openaiModel,
messages: [
{ role: "system", content: systemPrompt },
{ role: "user", content: input },
],
schema: z.object({
feedback: z.string(),
continueProcessing: z.boolean(),
}),
});
return result.object;
},
);
```
### Step 2: Define the improvement component
Next, you need to define the component that will be used to improve the text. This component will take the `input` text and the `feedback` as input and return the improved text.
```tsx
const ImproveText = gensx.Component(
"ImproveText",
async ({
input,
feedback,
}: {
input: string;
feedback: string;
  }): Promise<string> => {
    console.log("\nCurrent draft:\n", input);
    console.log("\nFeedback:\n", feedback);
console.log("=".repeat(50));
const systemPrompt = `You're a helpful assistant that improves text by fixing typos, removing buzzwords, jargon, and making the writing sound more authentic.
You will be given a piece of text and feedback on the text. Your job is to improve the text based on the feedback. You should return the improved text and nothing else.`;
const prompt = `
${feedback}
${input}
`;
const result = await generateText({
model: openaiModel,
messages: [
{ role: "system", content: systemPrompt },
{ role: "user", content: prompt },
],
});
return result.text;
},
);
```
### Step 3: Create the reflection loop
Now that you have the evaluation and improvement components, you can create the reflection loop.
```tsx
export const ImproveTextWithReflection = gensx.Workflow(
"ImproveTextWithReflection",
  async ({ text }: { text: string }): Promise<string> => {
return Reflection({
input: text,
ImproveFn: ImproveText,
EvaluateFn: EvaluateText,
maxIterations: 3,
});
},
);
```
### Step 4: Run the example
You can run the text improvement example using the following code:
```tsx
const text = `We are a cutting-edge technology company leveraging bleeding-edge AI solutions to deliver best-in-class products to our customers. Our agile development methodology ensures we stay ahead of the curve with paradigm-shifting innovations.`;
const improvedText = await ImproveTextWithReflection({ text });
console.log("Final text:\n", improvedText);
```
You can find the complete example code in the [reflection example](https://github.com/gensx-inc/gensx/tree/main/examples/reflection).
# OpenAI Computer Use
The [OpenAI Computer Use](https://github.com/gensx-inc/gensx/tree/main/examples/openai-computer-use) example shows how to use OpenAI's computer-use model with GenSX to control a web browser with natural language.
## Workflow
The OpenAI Computer Use workflow consists of the following steps:
1. Launch a browser session with Playwright (`BrowserProvider`)
2. Send an initial user prompt to the OpenAI computer-use model
3. Process any computer actions requested by the model (`ProcessComputerCalls`)
   - Execute browser actions like clicking, scrolling, typing, etc. (`UseBrowser`)
   - Take a screenshot after each action and send it back to the model
4. Optionally collect human feedback and continue the conversation (`HumanFeedback`)
5. Process subsequent model responses and browser actions until completion
An example trace of the workflow shows the actions taken at each step.
## Running the example
From the root of the [GenSX GitHub repo](https://github.com/gensx-inc/gensx), run the following commands:
```bash
# Navigate to the example directory
cd examples/openai-computer-use
# Install dependencies
pnpm install
# Install playwright
npx playwright install
# Run the example
OPENAI_API_KEY= pnpm run start
```
The default prompt is `how long does it take to drive from seattle to portland? use google maps` but you can change this by editing the `index.tsx` file. You can also control whether the example is multi-turn by toggling the `allowHumanFeedback` prop. This is set to `false` by default, but you may want to set it to `true` so you can continue the conversation with the model in the terminal.
When you run the example, you'll see an output like the following:
```bash
Starting the computer use example
PROMPT: how long does it take to drive from seattle to portland? use google maps
Action: screenshot
Action: click at (188, 180) with button 'left'
Action: type text 'Google Maps'
Action: keypress 'ENTER'
Action: wait
Action: click at (233, 230) with button 'left'
New tab opened
Action: wait
Action: click at (389, 38) with button 'left'
Action: type text 'Seattle to Portland'
Action: keypress 'ENTER'
Action: wait
Computer use complete
Final response: The estimated driving time from Seattle to Portland on Google Maps is approximately 2 hours and 58 minutes via I-5 S, covering a distance of 174 miles. Would you like any more assistance with your route?
```
## Key patterns
### Browser automation
The example uses Playwright to control a web browser, creating a context that's shared throughout the workflow. The `BrowserProvider` component initializes a browser session and makes it available to child components:
```tsx
const BrowserProvider = gensx.Component(
"BrowserProvider",
async ({ initialUrl }) => {
const browser = await chromium.launch({
headless: false,
chromiumSandbox: true,
env: {},
args: ["--disable-extensions", "--disable-file-system"],
});
const page = await browser.newPage();
await page.setViewportSize({ width: 1024, height: 768 });
await page.goto(initialUrl);
    return page; // make the page available to child components
},
);
```
### Processing model actions
The `ProcessComputerCalls` component handles the computer actions returned by the model. For each action, it:
1. Extracts the action from the model response
2. Executes the action on the browser using the `UseBrowser` component
3. Takes a screenshot of the result
4. Sends the screenshot back to the model
5. Processes the next model response
```tsx
const ProcessComputerCalls = gensx.Component<
ProcessComputerCallsProps,
ProcessComputerCallsResult
>("ProcessComputerCalls", async ({ response }) => {
let currentResponse = response;
let computerCalls = currentResponse.output.filter(
(item) => item.type === "computer_call",
);
while (computerCalls.length > 0) {
// Execute browser action and take screenshot
// ...
// Send screenshot back to model
// ...
// Get updated response
// ...
}
return { updatedResponse: currentResponse };
});
```
### Interactive feedback loop
The example supports an interactive conversation with the model, allowing you to provide feedback or additional instructions once the model finishes an initial turn:
```tsx
// Start conversation loop with human feedback
let currentResponse = updatedResponse;
let continueConversation = true;
while (continueConversation) {
// Get human feedback
const { userMessage, shouldExit } = await HumanFeedback.run({
assistantMessage: currentResponse.output_text,
});
// Exit if requested
if (shouldExit) {
continueConversation = false;
continue;
}
// Send user message to model
// ...
// Process any computer calls in the response
// ...
}
```
## Additional resources
Check out the other examples in the [GenSX Github Repo](https://github.com/gensx-inc/gensx/tree/main/examples).
# Hacker News analyzer
The [Hacker News Analyzer](https://github.com/gensx-inc/gensx/tree/main/examples/hacker-news-analyzer) example uses GenSX to fetch, analyze, and extract trends from the top Hacker News posts. It shows how to combine data fetching, parallel analysis, and content generation in a single workflow to create two outputs: a detailed report and a tweet of the trends.
## Workflow
The Hacker News Analyzer workflow is composed of the following steps:
1. Fetch the top 500 Hacker News posts and filter down to `text` posts (`FetchHNPosts`)
2. Process each post in parallel (`AnalyzeHNPosts`)
- Summarize the content (`SummarizePost`)
- Analyze the comments (`AnalyzeComments`)
3. Write a detailed report identifying the key trends across all posts (`GenerateReport`)
4. Edit the report into the style of Paul Graham (`EditReport`)
5. Generate a tweet in the voice of Paul Graham (`WriteTweet`)
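The steps above chain together as ordinary async calls, reading top to bottom. A minimal sketch with stubbed components (the stub bodies are placeholders; the real components call the Hacker News API and an LLM):

```typescript
// Stub implementations; the real components call the HN API and an LLM.
const fetchHNPosts = async (postCount: number) =>
  Array.from({ length: postCount }, (_, i) => ({ id: i, text: `post ${i}` }));
const analyzeHNPosts = async (posts: { id: number; text: string }[]) =>
  posts.map((p) => ({ id: p.id, summary: `summary of ${p.text}` }));
const generateReport = async (analyses: { summary: string }[]) =>
  `Report over ${analyses.length} posts`;
const editReport = async (report: string) => `${report} (edited)`;
const writeTweet = async (report: string) => report.slice(0, 280);

// The workflow is one awaited step per line, each feeding the next.
async function analyzeHackerNewsTrends(postCount: number) {
  const posts = await fetchHNPosts(postCount);
  const analyses = await analyzeHNPosts(posts);
  const report = await generateReport(analyses);
  const edited = await editReport(report);
  const tweet = await writeTweet(edited);
  return { report: edited, tweet };
}
```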
## Running the example
```bash
# Navigate to the example directory
cd examples/hacker-news-analyzer
# Install dependencies
pnpm install
# Set your OpenAI API key
export OPENAI_API_KEY=
# Run the example
pnpm run dev
```
The workflow will create two files:
- `hn_analysis_report.md`: A detailed analysis report
- `hn_analysis_tweet.txt`: A tweet-sized summary of the analysis
## Key patterns
### Parallel processing
The `AnalyzeHNPosts` component processes all posts in parallel, and runs both analyses concurrently for each post. This is achieved using `Promise.all` to concurrently process multiple posts, with each post's analysis being handled by separate components.
```ts
const AnalyzeHNPosts = gensx.Component(
"AnalyzeHNPosts",
async ({ stories }: AnalyzeHNPostsProps) => {
const analyses = await Promise.all(
stories.map(async (story) => {
const [summary, commentAnalysis] = await Promise.all([
SummarizePost({ story }),
AnalyzeComments({
postId: story.id,
comments: story.comments,
}),
]);
return { summary, commentAnalysis };
}),
);
return { analyses };
},
);
```
The component returns an array of `analyses` that looks like this:
```ts
{
analyses: [
{ summary: "...", commentAnalysis: "..." },
{ summary: "...", commentAnalysis: "..." },
// ...
];
}
```
## Additional resources
Check out the other examples in the [GenSX Github Repo](https://github.com/gensx-inc/gensx/tree/main/examples).
# Blog writer
Breaking down complex tasks into smaller, discrete steps is one of the best ways to improve the quality of LLM outputs. The [blog writer workflow example](https://github.com/gensx-inc/gensx/tree/main/examples/blog-writer) does this by following the same approach a human would take to write a blog post: conducting research, creating an outline, writing a structured draft, and finally editing that draft.
## Workflow
The Blog Writer workflow consists of the following steps:
1. **Research phase**:
- Generate focused research topics using Claude (`GenerateTopics`)
- Conduct web research with citations via Perplexity API (`WebResearch`)
- Search internal documentation catalog using GenSX storage (`CatalogResearch`)
2. **Outline creation**: Structure the blog post with sections, key points, and research integration (`WriteOutline`)
3. **Draft writing**: Generate content section-by-section with expert SaaS company writer prompts (`WriteDraft`)
4. **Editorial enhancement**: Polish content for engagement, style, and readability (`Editorial`)
5. **Tone matching** (optional): Adapt writing style to match a reference URL (`MatchTone`)
## Running the example
```bash
# Navigate to the example directory
cd examples/blog-writer
# Install dependencies
pnpm install
# Set your API keys
export ANTHROPIC_API_KEY=
export PERPLEXITY_API_KEY=
# Optional: For catalog search
export GENSX_API_KEY=
export GENSX_PROJECT=
export GENSX_ENV=development
# Run the example
pnpm run start
```
## Key patterns
### Multi-step content generation
The workflow demonstrates how to break complex content generation into discrete, manageable steps. Each component has a specific role and produces structured output for the next step:
```ts
const WriteBlog = gensx.Workflow("WriteBlog", async (props: WriteBlogProps) => {
// Step 1: Conduct research
const research = await Research({
title: props.title,
prompt: props.prompt,
});
// Step 2: Create outline based on research
const outline = await WriteOutline({
title: props.title,
prompt: props.prompt,
research: research,
});
// Step 3: Write draft based on outline and research
const draft = await WriteDraft({
title: props.title,
prompt: props.prompt,
outline: outline.object,
research: research,
targetWordCount: props.wordCount ?? 1500,
});
// Step 4: Editorial pass to make it more engaging
const finalContent = await Editorial({
title: props.title,
prompt: props.prompt,
draft: draft,
targetWordCount: props.wordCount ?? 1500,
});
return { title: props.title, content: finalContent, metadata: { /* ... */ } };
});
```
### Parallel research processing
The `Research` component processes multiple research topics in parallel using `Promise.all`, combining web research with optional catalog search:
```ts
const Research = gensx.Component("Research", async (props: ResearchProps) => {
// Generate research topics
const topicsResult = await GenerateTopics({
title: props.title,
prompt: props.prompt,
});
// Conduct web research in parallel
const webResearchPromises = topicsResult.topics.map((topic) =>
WebResearch({ topic }),
);
const webResearch = await Promise.all(webResearchPromises);
return {
topics: topicsResult.topics,
webResearch: webResearch,
};
});
```
### Real-time web research with citations
The `WebResearch` component uses Perplexity's Sonar API to get current information with proper citations:
```ts
const WebResearch = gensx.Component(
"WebResearch",
async (props: { topic: string }) => {
const result = await generateText({
model: perplexity("sonar-pro"),
prompt: `Research the following topic comprehensively: ${props.topic}
Provide detailed, current information with proper citations.`,
});
return {
topic: props.topic,
content: result.text,
citations: result.response.citations || [],
source: "perplexity",
};
},
);
```
### Tool integration for dynamic research
Components can use tools to gather additional information during generation. The `WriteSection` component includes a web research tool for section-specific information:
```ts
const webResearchTool = tool({
description: "Conduct additional web research on a specific topic",
parameters: z.object({
topic: z.string().describe("The specific topic to research"),
}),
execute: async ({ topic }: { topic: string }) => {
const result = await WebResearch({ topic });
return {
topic: result.topic,
content: result.content,
citations: result.citations,
source: result.source,
};
},
});
```
## Additional resources
Check out the other examples in the [GenSX Github Repo](https://github.com/gensx-inc/gensx/tree/main/examples).
# Vercel AI SDK
The [@gensx/vercel-ai](https://www.npmjs.com/package/@gensx/vercel-ai) package provides [Vercel AI SDK](https://sdk.vercel.ai/docs/introduction) compatible components for GenSX, allowing you to use Vercel's AI SDK with GenSX's component model.
## Installation
To install the package, run the following command:
```bash
npm install @gensx/vercel-ai
```
You'll also need to install the relevant providers from the Vercel AI SDK:
```bash
npm install @ai-sdk/openai
```
Then import the components you need from the package:
```ts
import { generateText, generateObject } from "@gensx/vercel-ai";
```
## Supported components
| Component | Description |
| :-------------------------------------------- | :------------------------------------------------------------- |
| [`generateText`](#generatetext) | Generate complete text responses from language models |
| [`generateObject`](#generateobject) | Generate complete structured JSON objects from language models |
| [`streamText`](#streamtext) | Stream text responses from language models |
| [`streamObject`](#streamobject) | Stream structured JSON objects from language models |
| [`embed`](#embed) | Generate embeddings for a single text input |
| [`embedMany`](#embedmany) | Generate embeddings for multiple text inputs |
| [`generateImage`](#generateimage) | Generate images from text prompts |
## Component Reference
#### `generateText`
The [`generateText`](https://sdk.vercel.ai/docs/ai-sdk-core/generating-text#generatetext) component generates complete text responses from language models, waiting for the entire response before returning.
```ts
import { generateText } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
const result = await generateText({
prompt: "Write a poem about a cat",
model: openai("gpt-4.1-mini"),
});
console.log(result.text);
```
##### Props
The `generateText` component accepts all parameters from the Vercel AI SDK's `generateText` function:
- `prompt` (required): The text prompt to send to the model
- `model` (required): The language model to use (from Vercel AI SDK)
- Plus any other parameters supported by the Vercel AI SDK
##### Return Type
Returns a complete text string containing the model's response.
#### `generateObject`
The [`generateObject`](https://sdk.vercel.ai/docs/ai-sdk-core/generating-structured-data#generate-object) component generates complete structured JSON objects from language models, with type safety through Zod schemas.
```ts
import { generateObject } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
const userSchema = z.object({
user: z.object({
name: z.string(),
age: z.number(),
interests: z.array(z.string()),
contact: z.object({
email: z.string().email(),
phone: z.string().optional(),
}),
}),
});
const result = await generateObject({
prompt: "Generate a sample user profile",
schema: userSchema,
model: openai("gpt-4.1-mini"),
});
console.log(result.object);
```
##### Props
The `generateObject` component accepts all parameters from the Vercel AI SDK's `generateObject` function:
- `prompt` (required): The text prompt to send to the model
- `model` (required): The language model to use (from Vercel AI SDK)
- `schema`: A Zod schema defining the structure of the response
- `output`: The output format ("object", "array", or "no-schema")
- Plus any other optional parameters supported by the Vercel AI SDK
##### Return Type
Returns a structured object matching the provided schema.
#### `streamText`
The [`streamText`](https://sdk.vercel.ai/docs/ai-sdk-core/generating-text#streamtext) component streams text responses from language models, making it ideal for chat interfaces and other applications where you want to show responses as they're generated.
```ts
import { streamText } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
const result = streamText({
messages: [
{
role: "system",
content: "You are a helpful assistant",
},
{
role: "user",
content: "write a children's book about AGI",
},
],
model: openai("gpt-4.1-mini"),
});
for await (const chunk of result.textStream) {
console.log(chunk);
}
```
##### Props
The `streamText` component accepts all parameters from the Vercel AI SDK's `streamText` function:
- `prompt` (required): The text prompt to send to the model
- `model` (required): The language model to use (from Vercel AI SDK)
- Plus all other parameters supported by the Vercel AI SDK
##### Return Type
Returns a streaming response that can be consumed token by token.
#### `streamObject`
The [`streamObject`](https://sdk.vercel.ai/docs/ai-sdk-core/generating-structured-data#stream-object) component streams structured JSON objects from language models, allowing you to get structured data with type safety.
```ts
import { streamObject } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
// Define a schema for the response
const recipeSchema = z.object({
recipe: z.object({
name: z.string(),
ingredients: z.array(z.string()),
steps: z.array(z.string()),
}),
});
const result = streamObject({
prompt: "Generate a recipe for chocolate chip cookies",
schema: recipeSchema,
model: openai("gpt-4.1-mini"),
});
for await (const chunk of result.partialObjectStream) {
console.log(chunk);
}
```
##### Props
The `streamObject` component accepts all parameters from the Vercel AI SDK's `streamObject` function:
- `prompt` (required): The text prompt to send to the model
- `model` (required): The language model to use (from Vercel AI SDK)
- `schema`: A Zod schema defining the structure of the response
- `output`: The output format ("object", "array", or "no-schema")
- Plus all other parameters supported by the Vercel AI SDK
##### Return Type
Returns a structured object matching the provided schema.
#### `embed`
The [`embed`](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings) component generates embeddings for a single text input, which can be used for semantic search, clustering, and other NLP tasks.
```ts
import { embed } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
const result = await embed({
value: "the cat jumped over the dog",
model: openai.embedding("text-embedding-3-small"),
});
console.log(result.embedding);
```
##### Props
The `embed` component accepts all parameters from the Vercel AI SDK's `embed` function:
- `value` (required): The text to generate an embedding for
- `model` (required): The embedding model to use (from Vercel AI SDK)
- Plus any other optional parameters supported by the Vercel AI SDK
##### Return Type
Returns a vector representation (embedding) of the input text.
#### `embedMany`
The [`embedMany`](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings#embedding-many-values) component generates embeddings for multiple text inputs in a single call, which is more efficient than making separate calls for each text.
```ts
import { embedMany } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
const texts = [
"the cat jumped over the dog",
"the dog chased the cat",
"the cat ran away",
];
const result = await embedMany({
values: texts,
model: openai.embedding("text-embedding-3-small"),
});
console.log(result.embeddings);
```
##### Props
The `embedMany` component accepts all parameters from the Vercel AI SDK's `embedMany` function:
- `values` (required): Array of texts to generate embeddings for
- `model` (required): The embedding model to use (from Vercel AI SDK)
- Plus any other optional parameters supported by the Vercel AI SDK
##### Return Type
Returns an array of vector representations (embeddings) for the input texts.
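A common next step with `embedMany` results is semantic search via cosine similarity. A small helper that operates on the raw number arrays (the AI SDK also ships its own `cosineSimilarity` utility, which you may prefer in practice):

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate embeddings against a query embedding, best match first.
function rankBySimilarity(query: number[], candidates: number[][]): number[] {
  return candidates
    .map((c, i) => ({ i, score: cosineSimilarity(query, c) }))
    .sort((x, y) => y.score - x.score)
    .map((r) => r.i);
}
```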
#### `generateImage`
The [`generateImage`](https://sdk.vercel.ai/docs/ai-sdk-core/image-generation) component generates images from text prompts using image generation models.
```ts
import { generateImage } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
const result = await generateImage({
prompt: "a bear walking through a lush forest",
model: openai.image("dall-e-3"),
});
console.log(result);
```
##### Props
The `generateImage` component accepts all parameters from the Vercel AI SDK's `experimental_generateImage` function:
- `prompt` (required): The text description of the image to generate
- `model` (required): The image generation model to use (from Vercel AI SDK)
- Plus any other optional parameters supported by the Vercel AI SDK
##### Return Type
Returns an object containing information about the generated image, including its URL.
## Usage with Different Models
The Vercel AI SDK supports multiple model providers. Here's how to use different providers with GenSX components:
```ts
// OpenAI
import { openai } from "@ai-sdk/openai";
const openaiModel = openai("gpt-4.1");
// Anthropic
import { anthropic } from "@ai-sdk/anthropic";
const anthropicModel = anthropic("claude-sonnet-4-20250514");
// Gemini
import { google } from "@ai-sdk/google";
const googleModel = google("gemini-2.5-flash-preview-05-20");
```
For more information on the Vercel AI SDK, visit the [official documentation](https://sdk.vercel.ai/docs).
# OpenRouter
[OpenRouter](https://openrouter.ai) provides a unified API to access various AI models from different providers. You can use GenSX with OpenRouter by configuring the OpenAI client with OpenRouter's API endpoint.
## Installation
To use OpenRouter with GenSX, you need to install the [`@gensx/openai`](/docs/component-reference/openai) package:
```bash
npm install @gensx/openai
```
## Configuration
Configure the OpenAI client with your OpenRouter API key and the OpenRouter base URL:
```ts
import { OpenAI } from "@gensx/openai";
const client = new OpenAI({
apiKey: process.env.OPENROUTER_API_KEY,
baseURL: "https://openrouter.ai/api/v1",
});
```
## Example Usage
Here's a complete example of using OpenRouter with GenSX:
```ts
import * as gensx from "@gensx/core";
import { OpenAI } from "@gensx/openai";
const client = new OpenAI({
apiKey: process.env.OPENROUTER_API_KEY,
baseURL: "https://openrouter.ai/api/v1",
});
interface RespondProps {
userInput: string;
}
type RespondOutput = string;
const GenerateText = gensx.Component(
"GenerateText",
async ({ userInput }: RespondProps): Promise<RespondOutput> => {
const result = await client.chat.completions.create({
model: "anthropic/claude-sonnet-4",
messages: [
{
role: "system",
content: "You are a helpful assistant. Respond to the user's input.",
},
{ role: "user", content: userInput },
],
provider: {
ignore: ["Anthropic"],
},
});
return result.choices[0].message.content ?? "";
},
);
const OpenRouterWorkflow = gensx.Component(
"OpenRouter",
async ({ userInput }: { userInput: string }) => {
const result = await GenerateText({ userInput });
return result;
},
);
const result = await OpenRouterWorkflow.run({
userInput: "Hi there! Write me a short story about a cat that can fly.",
});
```
## Specifying Models
When using OpenRouter, you can specify models using their full identifiers:
- `anthropic/claude-sonnet-4`
- `openai/gpt-4.1`
- `google/gemini-2.5-pro-preview`
- `meta-llama/llama-3.3-70b-instruct`
Check the [OpenRouter documentation](https://openrouter.ai/docs) for a complete list of available models.
## Provider Options
You can use the `provider` property in the `openai.chat.completions.create` method to specify OpenRouter-specific options:
```tsx
openai.chat.completions.create({
model: "anthropic/claude-sonnet-4",
messages: [
/* your messages */
],
provider: {
ignore: ["Anthropic"], // Ignore specific providers
route: "fallback", // Use fallback routing strategy
},
});
```
## Learn More
- [OpenRouter Documentation](https://openrouter.ai/docs)
- [GenSX OpenAI Components](/docs/component-reference/openai)
# OpenAI
The [@gensx/openai](https://www.npmjs.com/package/@gensx/openai) package provides a pre-wrapped version of the OpenAI SDK for GenSX, making it easy to use OpenAI's API with GenSX functionality.
## Installation
To install the package, run the following command:
```bash
npm install @gensx/openai openai
```
## Usage
You can use this package in two ways:
### 1. Drop-in Replacement (Recommended)
Simply replace your OpenAI import with the GenSX version:
```ts
// Instead of:
// import { OpenAI } from 'openai';
// Use:
import { OpenAI } from "@gensx/openai";
// Create a client as usual
const client = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
// All methods are automatically wrapped with GenSX functionality
const completion = await client.chat.completions.create({
model: "gpt-4.1-mini",
messages: [{ role: "user", content: "Hello!" }],
});
// Use embeddings
const embedding = await client.embeddings.create({
model: "text-embedding-3-small",
input: "Hello world!",
});
// Use the Responses API
const response = await client.responses.create({
model: "gpt-4.1-mini",
input: "Hello!",
});
```
### 2. Wrap an Existing Instance
If you already have an OpenAI instance, you can wrap it with GenSX functionality:
```ts
import { OpenAI } from "openai";
import { wrapOpenAI } from "@gensx/openai";
// Create your OpenAI instance as usual
const client = wrapOpenAI(
new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
}),
);
// Now all methods are wrapped with GenSX functionality
const completion = await client.chat.completions.create({
model: "gpt-4.1-mini",
messages: [{ role: "user", content: "Hello!" }],
});
```
## API Reference
The package exports:
1. `OpenAI` - A drop-in replacement for the OpenAI client that automatically wraps all methods with GenSX functionality
2. `wrapOpenAI` - A function to manually wrap an OpenAI instance with GenSX functionality
All methods from the OpenAI SDK are supported and automatically wrapped with GenSX functionality, including:
- Chat Completions
- Embeddings
- Responses
- And all other OpenAI API endpoints
The wrapped methods maintain the same interface as the original OpenAI SDK, so you can use them exactly as you would with the standard OpenAI client.
# Anthropic
The [@gensx/anthropic](https://www.npmjs.com/package/@gensx/anthropic) package provides a pre-wrapped version of the Anthropic SDK for GenSX, making it easy to use Anthropic's API with GenSX functionality.
## Installation
To install the package, run the following command:
```bash
npm install @gensx/anthropic @anthropic-ai/sdk
```
## Usage
You can use this package in two ways:
### 1. Drop-in Replacement (Recommended)
Simply replace your Anthropic import with the GenSX version:
```ts
// Instead of:
// import { Anthropic } from '@anthropic-ai/sdk';
// Use:
import { Anthropic } from "@gensx/anthropic";
// Create a client as usual
const client = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
});
// All methods are automatically wrapped with GenSX functionality
const completion = await client.messages.create({
model: "claude-sonnet-4-20250514",
messages: [{ role: "user", content: "Hello!" }],
max_tokens: 1000,
});
// Use streaming
const stream = await client.messages.create({
model: "claude-sonnet-4-20250514",
messages: [{ role: "user", content: "Hello!" }],
max_tokens: 1000,
stream: true,
});
```
### 2. Wrap an Existing Instance
If you already have an Anthropic instance, you can wrap it with GenSX functionality:
```ts
import { Anthropic } from "@anthropic-ai/sdk";
import { wrapAnthropic } from "@gensx/anthropic";
// Create your Anthropic instance as usual
const client = wrapAnthropic(
new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
}),
);
// Now all methods are wrapped with GenSX functionality
const completion = await client.messages.create({
model: "claude-sonnet-4-20250514",
messages: [{ role: "user", content: "Hello!" }],
max_tokens: 1000,
});
```
## API Reference
The package exports:
1. `Anthropic` - A drop-in replacement for the Anthropic client that automatically wraps all methods with GenSX functionality
2. `wrapAnthropic` - A function to manually wrap an Anthropic instance with GenSX functionality
All methods from the Anthropic SDK are supported and automatically wrapped with GenSX functionality.
# Serverless deployments
> **Note**: GenSX Cloud is currently in developer preview.
Deploy your GenSX workflows as serverless APIs with support for both synchronous and asynchronous execution, as well as long-running operations.
## Deploy with the CLI
Projects are a collection of workflows and environment variables that deploy together into an `environment` that you configure.
Each project has a `gensx.yaml` file at the root and a `workflows.ts` file that exports all of your deployable workflows.
Run `gensx deploy` from the root of your project to deploy it:
```bash
# Deploy the workflow file with default settings
npx gensx deploy src/workflows.ts
# Deploy with environment variables
npx gensx deploy src/workflows.ts -e OPENAI_API_KEY
```
Environment variables are encrypted with per-project encryption keys.
### Deploying to different environments
GenSX supports multiple environments within a project (such as development, staging, and production) to help manage your deployment lifecycle.
```bash
# Deploy to a specific environment
npx gensx deploy src/workflows.ts --env production
# Deploy to staging with environment-specific variables
npx gensx deploy src/workflows.ts --env staging -e OPENAI_API_KEY -e LOG_LEVEL=debug
```
Each environment can have its own configuration and environment variables, allowing you to test in isolation before promoting changes to production.
When you deploy a workflow, GenSX:
1. Builds your TypeScript code for production
2. Bundles your dependencies
3. Uploads the package to GenSX Cloud
4. Configures serverless infrastructure
5. Creates API endpoints for each exported workflow
6. Encrypts and sets up environment variables
7. Activates the deployment
The entire process typically takes 15 seconds.
## Running workflows from the CLI
Once deployed, you can execute workflows directly from the CLI:
```bash
# Run a workflow synchronously with input data
npx gensx run MyWorkflow --input '{"prompt":"Generate a business name"}' --project my-app
# Run and save the output to a file
npx gensx run MyWorkflow --input '{"prompt":"Generate a business name"}' --output results.json
# Run asynchronously (start the workflow but don't wait for completion)
npx gensx run MyWorkflow --input '{"prompt":"Generate a business name"}' --no-wait --project my-app
```
### CLI run options
| Option | Description |
| ----------- | ---------------------------------- |
| `--input` | JSON string with input data |
| `--no-wait` | Do not wait for workflow to finish |
| `--output` | Save results to a file |
| `--project` | Specify the project name |
| `--env` | Specify the environment name |
## API endpoints
Each workflow is exposed as an API endpoint:
```
https://api.gensx.com/org/{org}/projects/{project}/environments/{environment}/workflows/{workflow}
```
- `{org}` - Your organization ID
- `{project}` - Your project name
- `{environment}` - The environment (defaults to "default")
- `{workflow}` - The name of your workflow
For example, if you have a workflow named `BlogWriter` in project `content-tools`, the endpoint would be:
```
https://api.gensx.com/org/your-org/projects/content-tools/environments/default/workflows/BlogWriter
```
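The endpoint pattern above is easy to mistype; a tiny helper that assembles it from its parts (the path template is the only assumption, taken directly from the pattern above):

```typescript
// Build a GenSX workflow endpoint URL from its address parts.
function workflowUrl(
  org: string,
  project: string,
  workflow: string,
  environment = "default",
): string {
  return (
    `https://api.gensx.com/org/${org}/projects/${project}` +
    `/environments/${environment}/workflows/${workflow}`
  );
}
```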
## Authentication
All GenSX Cloud API endpoints require authentication using your GenSX API key as a bearer token:
```bash
curl -X POST https://api.gensx.com/org/your-org/projects/your-project/environments/default/workflows/YourWorkflow \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"prompt": "Tell me about GenSX"}'
```
### Obtaining an API Key
To generate or manage API keys:
1. Log in to the GenSX Cloud console
2. Navigate to Settings > API Keys
3. Create a new key
## Execution modes
### Synchronous execution
By default, API calls execute synchronously, returning the result when the workflow completes:
```bash
curl -X POST https://api.gensx.com/org/your-org/projects/your-project/environments/default/workflows/YourWorkflow \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"prompt": "Tell me about GenSX"}'
```
### Asynchronous execution
For longer-running workflows, use asynchronous execution by calling the `/start` endpoint:
```bash
# Request asynchronous execution
curl -X POST https://api.gensx.com/org/your-org/projects/your-project/environments/default/workflows/YourWorkflow/start \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"prompt": "Tell me about GenSX"}'
# Response includes an execution ID
# { "executionId": "exec_123abc" }
# Check status later
curl -X GET https://api.gensx.com/executions/exec_123abc \
-H "Authorization: Bearer your-api-key"
```
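In application code, the start-then-poll pattern looks like the sketch below. The `/start` route and `executionId` response come from the snippet above; the `status` field name and polling cadence are assumptions, and `fetchFn` is injectable so the helper can be exercised without a network.

```typescript
// Start a workflow asynchronously, then poll its execution until it finishes.
async function runAsync(
  startUrl: string,
  apiKey: string,
  input: unknown,
  fetchFn: typeof fetch = fetch,
  pollMs = 2000,
): Promise<unknown> {
  const started = await fetchFn(startUrl, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(input),
  });
  const { executionId } = (await started.json()) as { executionId: string };
  for (;;) {
    const res = await fetchFn(
      `https://api.gensx.com/executions/${executionId}`,
      { headers: { Authorization: `Bearer ${apiKey}` } },
    );
    const body = (await res.json()) as { status?: string };
    // The "status" field name is an assumption for this sketch.
    if (body.status !== "running") return body;
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```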
### Streaming responses
For workflows that support streaming, you can receive tokens as they're generated:
```bash
curl -X POST https://api.gensx.com/org/your-org/projects/your-project/environments/default/workflows/YourWorkflow \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"prompt": "Tell me about GenSX", "stream": true }'
```
The response is delivered as a stream of server-sent events (SSE).
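Server-sent events arrive as `data:` lines separated by blank lines; a minimal parser for a buffered chunk of that format (the payload contents are whatever the workflow streams, which this sketch treats as opaque strings):

```typescript
// Extract the data payloads from a buffer of SSE-formatted text.
function parseSSE(buffer: string): string[] {
  return buffer
    .split("\n\n")
    .flatMap((event) => event.split("\n"))
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length));
}
```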
## Execution time limits
GenSX Cloud is optimized for long-running workflows and agents, with generous execution time limits:
| Plan | Maximum Execution Time |
| ---------- | ----------------------- |
| Free Tier | Up to 5 minutes |
| Pro Tier | Up to 60 minutes |
| Enterprise | Custom limits available |
These extended timeouts make GenSX ideal for complex AI workflows that might involve:
- Multiple LLM calls in sequence
- Real-time agent tool use
- Complex data processing
- Extensive RAG operations
## Cold starts and performance
The GenSX Cloud serverless architecture is designed to minimize cold starts:
- **Fast cold starts**: Cold starts typically take around 100ms
- **Warm execution**: Subsequent executions of recently used workflows start in 1-5ms
- **Auto-scaling**: Infrastructure automatically scales with workloads
## Managing deployments in the console
GenSX Cloud provides a console to run, debug, and view all of your workflows.
### Viewing workflows

1. Log in to the GenSX Cloud console
1. Navigate to your project and environment
1. The workflows tab shows all deployed workflows with status information
1. Click on a workflow to view its details, including schema, recent executions, and performance metrics
The workflow page includes API documentation and code snippets that you can copy/paste to run your workflow from within another app:

### Running workflows manually
You can test workflows directly from the console:
1. Navigate to the workflow detail page
2. Click the "Run" button
3. Enter JSON input in the provided editor
4. Choose execution mode (sync, async, or streaming)
5. View results directly in the console

### Viewing execution history
Each workflow execution generates a trace you can review:
1. Navigate to the "Executions" tab in your project
2. Browse the list of recent executions
3. Click on any execution to see detailed traces
4. Explore the component tree, inputs/outputs, and execution timeline
## Next steps
- [Learn about cloud storage options](/docs/cloud/storage)
- [Explore observability and tracing](/docs/cloud/observability)
# Projects and environments
GenSX organizes your workflows and deployments using a flexible structure of projects and environments, making it easy to match the rest of your application architecture and CI/CD topology. Projects are a top-level resource and environments are instances of a project that you deploy to.
## Project structure
A project in GenSX is a collection of related workflows that are deployed, managed, and monitored together:
- **Projects as logical units**: Group related workflows that serve a common purpose
- **Shared configuration**: Apply settings across all workflows in a project
- **Collective deployment**: Deploy all workflows within a project in one operation
- **Unified monitoring**: View traces and metrics for an entire project
Projects typically correspond to a codebase or application that contains multiple workflows.
## Environment separation
Within each project, you can have multiple environments. For example, you could create three environments for each project:
- **Development**: For building and testing new features
- **Staging**: For pre-production validation
- **Production**: For live, user-facing workflows
You have full control over your environments so you can organize them however you see fit.
Each environment maintains separate:
- Workflow deployments
- Configuration and environment variables
- Execution traces and monitoring data
## Configuring projects
### Project configuration file
Projects are defined using a `gensx.yaml` file at the root of your codebase:
```yaml
# gensx.yaml
projectName: customer-support-bot
description: AI assistant for customer support
```
This configuration applies to both local development and cloud deployments.
## Working with environments
### Deploying to different environments
Deploy your workflows to specific environments using the CLI:
```bash
# Deploy to the default environment
gensx deploy src/workflows.ts
# Deploy to a staging environment
gensx deploy src/workflows.ts --env staging
# Deploy to production
gensx deploy src/workflows.ts --env production
```
### Environment-specific configuration
Set environment-specific variables during deployment:
```bash
# Development-specific settings
gensx deploy src/workflows.ts --env development \
-e LOG_LEVEL=debug \
-e OPENAI_API_KEY
# Production-specific settings
gensx deploy src/workflows.ts --env production \
-e LOG_LEVEL=error \
-e OPENAI_API_KEY
```
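Inside your workflow code, these deploy-time variables are read from `process.env` at runtime. Here is a minimal sketch of reading the `LOG_LEVEL` variable set in the commands above, with a validated fallback (the helper name and default are illustrative, not a GenSX API):

```typescript
// Read the LOG_LEVEL deploy-time variable, falling back to "info"
// when it is unset or invalid. Illustrative helper, not part of GenSX.
type LogLevel = "debug" | "info" | "warn" | "error";

function getLogLevel(): LogLevel {
  const raw = process.env.LOG_LEVEL ?? "info";
  const valid: LogLevel[] = ["debug", "info", "warn", "error"];
  return valid.includes(raw as LogLevel) ? (raw as LogLevel) : "info";
}
```

Components can then branch on the result, for example to enable verbose logging only in development.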
## Projects in the GenSX Console
The GenSX Console organizes everything by project and environment:

Selecting an environment brings you to the workflows view:

When you click into a workflow, you can trigger it within the console if you've deployed it to GenSX Cloud:

You can also see API documentation and sample code for calling that workflow:

## Next steps
- [Configure serverless deployments](/docs/cloud/serverless-deployments) for your projects
- [Set up local development](/docs/cloud/local-development) for testing
- [Learn about observability](/docs/cloud/observability) across environments
# Observability & tracing
GenSX provides observability tools that make it easy to understand, debug, and optimize your workflows. Every component execution is automatically traced, giving you full visibility into what's happening inside your LLM workflows. You can view traces in real-time as workflows execute, and view historical traces to debug production issues like hallucinations.
## Viewing traces
When you run a workflow, GenSX automatically generates a trace that captures the entire execution flow, including all component inputs, outputs, and timing information.
### Accessing the trace viewer
The GenSX cloud console includes a trace viewer. You can access traces in several ways:
1. **From the Console**: Navigate to your project in the GenSX Console and select the "Executions" tab
2. **Trace URL**: When running a workflow with `printUrl: true`, a direct link to the trace is printed to the console
3. **API Response**: When running a workflow in the cloud, the execution ID from API responses can be used to view traces in the console
```ts
// Executing a workflow with trace URL printing
const result = await MyWorkflow({ input: "What is GenSX?" });
// Console output includes:
// [GenSX] View execution at: https://console/your_org/executions/your_execution_id
```
### Understanding the flame graph
The flame graph visualizes the entire execution tree including branches, all nested sub-components, and timing:

- **Component hierarchy**: See the nested structure of your components and their relationships
- **Execution timing**: The width of each bar represents the relative execution time
- **Status indicators**: Quickly spot errors or warnings with color coding
- **Component filtering**: Focus on specific components or component types
Click on any component in the flame graph to inspect its details, including inputs, outputs, and timing information.
### Viewing component inputs and outputs
For each component in your workflow, you can inspect:
1. **Input properties**: All props passed to the component
2. **Output values**: The data returned by the component
3. **Execution duration**: How long the component took to execute
4. **Metadata**: Additional information like token counts for LLM calls

This visualization is particularly valuable for debugging production and user-reported issues like hallucinations.
### Viewing historical traces
The GenSX Console maintains a history of all your workflow executions, allowing you to:
- **Compare executions**: See how behavior changes across different runs
- **Identify patterns**: Spot recurring issues or performance bottlenecks
- **Filter by status**: Focus on successful, failed, or in-progress executions
- **Search**: Find historical executions
Historical traces are automatically organized by project and environment, making it easy to find relevant executions.
## Configuring traces
GenSX provides flexible options for configuring and organizing traces across the GenSX Cloud serverless platform, local development, and other deployment platforms like Vercel, Cloudflare, and AWS.
### Tracing GenSX Cloud workflows
When running workflows deployed to GenSX Cloud, tracing is automatically configured:
- **Project context**: Traces are associated with the correct project
- **Environment segregation**: Development, staging, and production traces are kept separate
- **Authentication**: API keys and organization information are handled automatically
- **Retention**: Traces are stored according to your plan limits
No additional configuration is needed; everything works out of the box.
### Tracing on other deployment platforms
To enable tracing for workflows deployed outside of GenSX Cloud (like AWS Lambda, GCP Cloud Run, etc.), you need to set several environment variables:
```bash
# Required variables
GENSX_API_KEY=your_api_key_here
GENSX_ORG=your_gensx_org_name
GENSX_PROJECT=your_project_name
# Optional variables
GENSX_ENVIRONMENT=your_environment_name # Separate traces into specific environments
GENSX_CHECKPOINTS=false # Set to false to explicitly disable trace collection
```
### Configuring traces for local development
For local development, the tracing configuration is automatically inferred from:
1. The `gensx.yaml` file in your project root
2. Your local configuration managed by the `gensx` CLI in `~/.config/gensx/config`
3. Optionally, the `GENSX_ENVIRONMENT` environment variable, which separates local traces from other environments
The local development server started with `gensx start` uses this same configuration scheme.
### Organizing traces by environment
GenSX allows you to organize traces by environment (such as development, staging, production, etc.) to keep your debugging data well-structured:
```bash
# Deploy to a specific environment with its own traces
gensx deploy src/workflows.ts --env production
```
In the GenSX Console, you can filter traces by environment to focus on relevant executions. This separation also helps when:
- Debugging issues specific to an environment
- Comparing behavior between environments
- Isolating production traces from development noise
## Instrumenting additional code
Every GenSX component is automatically traced. If you want to trace additional sub-steps of a workflow, wrap that code in a `gensx.Component` and execute it via `myComponent(props)`.
```ts
import * as gensx from "@gensx/core";
const MyWorkflow = gensx.Component(
"MyWorkflow",
async ({ input }: MyWorkflowInput) => {
// Step 1: Process input
const processedData = await ProcessData({ data: input });
// Step 2: Generate response
const response = await GenerateResponse({ data: processedData });
return response;
},
);
// Create a component to trace a specific processing step
const ProcessData = gensx.Component(
"ProcessData",
async ({ data }: ProcessDataInput) => {
// This entire function execution will be captured in traces
const parsedData = JSON.parse(data);
const enrichedData = await fetchAdditionalInfo(parsedData);
return enrichedData;
},
);
// Create a component to trace response generation
const GenerateResponse = gensx.Component(
"GenerateResponse",
async ({ data }: GenerateResponseInput) => {
// This will appear as a separate node in the trace
return `Processed result: ${JSON.stringify(data)}`;
},
);
```
## Secrets scrubbing
GenSX enables you to configure which input props and outputs are marked as secrets and redacted from traces. Scrubbing happens locally before traces are sent to GenSX Cloud.
### How secrets scrubbing works
When a component executes, GenSX automatically:
1. Identifies secrets in component props and outputs
2. Replaces these secrets with `[secret]` in the trace data
3. Propagates secret detection across the entire component hierarchy
Even if a secret is passed down through multiple components, it remains scrubbed in all traces.
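The replacement step can be illustrated with a simplified sketch. This is a toy illustration of the idea, not GenSX's actual implementation: values at marked dot-notation paths are replaced with `[secret]` before trace data leaves the process.

```typescript
// Toy illustration of secrets scrubbing (not GenSX's actual code):
// replace values at dot-notation paths with "[secret]".
function scrub(
  props: Record<string, unknown>,
  secretPaths: string[],
): Record<string, unknown> {
  const clone = JSON.parse(JSON.stringify(props));
  for (const path of secretPaths) {
    const keys = path.split(".");
    let node: any = clone;
    for (const key of keys.slice(0, -1)) {
      // Stop descending if an intermediate segment is missing
      if (node == null || typeof node !== "object") break;
      node = node[key];
    }
    const last = keys[keys.length - 1];
    if (node != null && typeof node === "object" && last in node) {
      node[last] = "[secret]";
    }
  }
  return clone;
}
```

The real implementation also tracks the secret values themselves so they stay redacted when forwarded to child components.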
### Marking secrets in component props
To mark specific props as containing secrets:
```ts
import * as gensx from "@gensx/core";
const AuthenticatedClient = gensx.Component(
"AuthenticatedClient",
({ apiKey, endpoint, query, credentials }: AuthenticatedClientInput) => {
// Use apiKey securely, knowing it won't appear in traces
return fetchData(endpoint, query, apiKey, credentials);
},
{
// Mark these props as containing sensitive data
secretProps: ["apiKey", "credentials.privateKey"],
},
);
```
The `secretProps` option can specify both top-level props and nested paths using dot notation.
### Marking component outputs as secrets
For components that might return sensitive information, you can mark the entire output as sensitive:
```ts
const GenerateCredentials = gensx.Component(
"GenerateCredentials",
async ({ userId }: { userId: string }) => {
// This entire output will be marked as secret
return {
accessToken: "sk-1234567890abcdef",
refreshToken: "rt-0987654321fedcba",
expiresAt: Date.now() + 3600000,
};
},
{
secretOutputs: true,
},
);
```
When `secretOutputs` is set to `true`, the entire output object or value will be treated as sensitive and masked in traces.
## Limits
GenSX observability features have certain limits based on your subscription tier:
| Feature | Free Tier | Pro Tier | Enterprise |
| ------------------------- | -------------- | ------------------ | ---------- |
| Traced components | 100K per month | 1M per month | Custom |
| Overage cost | N/A | Per 10K components | Custom |
| Trace retention | 7 days | 30 days | Custom |
| Maximum input/output size | 4MB each | 4MB each | 4MB each |
A few important notes on these limits:
- **Component count**: Each component execution in your workflow counts as one traced component
- **Size limits**: Component inputs and outputs are limited to 4MB each; larger data is truncated
- **Secret scrubbing**: API keys and sensitive data are automatically redacted from traces
- **Retention**: After the retention period, traces are automatically deleted
For use cases requiring higher limits or longer retention, contact the GenSX team for enterprise options.
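One common way to stay under the 4MB input/output limit is to pass large payloads by reference rather than inline. The sketch below decides whether a value is small enough to pass directly; the `blobKey` field name and size check are illustrative assumptions, not a GenSX convention:

```typescript
// Sketch: pass large payloads by reference (e.g. a blob key) instead
// of inline, so component inputs/outputs stay under the trace limit.
const MAX_TRACE_BYTES = 4 * 1024 * 1024;

type Payload = { inline: unknown } | { blobKey: string };

function toTraceablePayload(data: unknown, key: string): Payload {
  const bytes = new TextEncoder().encode(JSON.stringify(data)).length;
  if (bytes <= MAX_TRACE_BYTES) {
    return { inline: data };
  }
  // In a real workflow you might persist the data first, e.g.:
  //   await useBlob(`payloads/${key}.json`).putJSON(data);
  return { blobKey: `payloads/${key}.json` };
}
```

Downstream components then load the payload from storage only when they receive a reference.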
## Next steps
- [Set up serverless deployments](/docs/cloud/serverless-deployments) to automatically trace cloud workflows
- [Learn about local development](/docs/cloud/local-development) for testing with traces
- [Explore project and environment organization](/docs/cloud/projects-environments) to structure your traces
# GenSX Cloud MCP server
`@gensx/gensx-cloud-mcp` is a Model Context Protocol server for [GenSX Cloud](/docs/cloud) workflows. It enables you to connect your GenSX Cloud workflows to MCP-compatible tools like Claude desktop, Cursor, and more.

## Usage
Once you have run [`gensx deploy`](/docs/cli-reference/deploy) to deploy your project to the [GenSX Cloud serverless runtime](/docs/cloud/serverless-deployments), you can consume those workflows via the `@gensx/gensx-cloud-mcp` server.
MCP-compatible tools use a standard JSON file to configure available MCP servers.
Update your MCP config file for your tool of choice to include the following:
```json
{
"mcpServers": {
"gensx": {
"command": "npx",
"args": [
"-y",
"@gensx/gensx-cloud-mcp",
"your_org_name",
"your_project_name",
"your_environment_name"
]
}
}
}
```
Your MCP client will run this command automatically at startup and handle downloading and running the GenSX Cloud MCP server on your behalf. See the [Claude desktop](https://modelcontextprotocol.io/quickstart/user) and [Cursor](https://docs.cursor.com/context/model-context-protocol) docs on configuring MCP servers for more details.
By default, the server reads your API credentials from the config saved by running the `gensx login` command. Alternatively, you can specify your GenSX API key as an environment variable in your MCP config:
```json
{
"mcpServers": {
"gensx": {
"command": "npx",
"args": [
"@gensx/gensx-cloud-mcp",
"your_org_name",
"your_project_name",
"your_environment_name"
],
"env": {
"GENSX_API_KEY": "my_api_key"
}
}
}
}
```
The GenSX build process automatically extracts input and output schemas from your TypeScript types, so no additional configuration or manual `zod` schema is required to consume your workflows from an MCP server.
# Local development server
GenSX provides a local development experience that mirrors the cloud environment, making it easy to build and test workflows on your machine before deploying them.
## Starting the dev server
The `gensx start` command launches a local development server with hot-reloading:
```bash
gensx start ./src/workflows.ts
```
```bash
🚀 Starting GenSX Dev Server...
ℹ Starting development server...
✔ Compilation completed
✔ Generating schema
Importing compiled JavaScript file: /Users/evan/code/gensx-console/samples/support-tools/dist/src/workflows.js
🚀 GenSX Dev Server running at http://localhost:1337
🧪 Swagger UI available at http://localhost:1337/swagger-ui
📋 Available workflows:
- RAGWorkflow: http://localhost:1337/workflows/RAGWorkflow
- AnalyzeDiscordWorkflow: http://localhost:1337/workflows/AnalyzeDiscordWorkflow
- TextToSQLWorkflow: http://localhost:1337/workflows/TextToSQLWorkflow
- ChatAgent: http://localhost:1337/workflows/ChatAgent
✅ Server is running. Press Ctrl+C to stop.
```
## Development server features
### Identical API shape
The local API endpoints match exactly what you'll get in production, making it easy to test your workflows before deploying them. The only difference is that the `/org/{org}/project/{project}/environments/{env}` path prefix is omitted from the URL for simplicity.
```
http://localhost:1337/workflows/{workflow}
```
Every workflow you export is automatically available as an API endpoint.
### Hot reloading
The development server watches your TypeScript files and automatically:
1. Recompiles when files change
2. Regenerates API schemas
3. Restarts the server with your updated code
This enables a fast development cycle without manual restarts.
### API documentation
The development server includes a built-in Swagger UI for exploring and testing your workflows:
```
http://localhost:1337/swagger-ui
```

The Swagger interface provides:
- Complete documentation of all your workflow APIs
- Interactive testing
- Request/response examples
- Schema information
## Running workflows locally
### Using the API
You can use any HTTP client to interact with your local API:
```bash
# Run a workflow synchronously
curl -X POST http://localhost:1337/workflows/ChatAgent \
-H "Content-Type: application/json" \
-d '{"input": {"prompt": "Tell me about GenSX"}}'
# Run asynchronously
curl -X POST http://localhost:1337/workflows/ChatAgent/start \
-H "Content-Type: application/json" \
-d '{"input": {"prompt": "Tell me about GenSX"}}'
```
The inputs and outputs of the APIs match exactly what you'll encounter in production.
### Using the Swagger UI
The built-in Swagger UI provides an easy way to inspect and test your workflows:
1. Navigate to `http://localhost:1337/swagger-ui`
2. Select the workflow you want to test
3. Click the "Try it out" button
4. Enter your input data
5. Execute the request and view the response

## Local storage options
GenSX provides local implementations for cloud storage services, enabling you to develop and test stateful workflows without deploying to the cloud.
### Blob storage
When using `useBlob` in local development, data is stored in your local file system:
```ts
import { useBlob } from "@gensx/storage";
const StoreData = gensx.Component(
"StoreData",
async ({ key, data }: StoreDataInput) => {
// Locally, this will write to .gensx/blobs directory
const blob = useBlob(`data/${key}.json`);
await blob.putJSON(data);
return { success: true };
},
);
```
Files are stored in the `.gensx/blobs` directory in your project, making it easy to inspect the stored data.
### SQL databases
When using `useDatabase` locally, GenSX uses [libSQL](https://github.com/libsql/libsql) to provide a SQLite-compatible database:
```ts
import { useDatabase } from "@gensx/storage";
const QueryData = gensx.Component(
"QueryData",
async ({ query }: QueryDataInput) => {
// Locally, this creates a SQLite database in .gensx/databases
const db = await useDatabase("my-database");
const result = await db.execute(query);
return result.rows;
},
);
```
Database files are stored in the `.gensx/databases` directory as SQLite files that you can inspect with any SQLite client.
### Vector search
For vector search operations with `useSearch`, your local environment connects to the cloud service:
```ts
import { useSearch } from "@gensx/storage";
const SearchDocs = gensx.Component(
"SearchDocs",
async ({ query }: SearchDocsInput) => {
// Uses cloud vector search even in local development
const namespace = await useSearch("documents");
const results = await namespace.query({
text: query,
topK: 5,
});
return results;
},
);
```
## Next steps
- [Deploying to production](/docs/cloud/serverless-deployments)
- [Working with cloud storage](/docs/cloud/storage)
- [Setting up observability and tracing](/docs/cloud/observability)
# GenSX Cloud
> **Note**: GenSX Cloud is currently in developer preview.
GenSX Cloud provides everything you need to ship production-grade agents and workflows:
- **Serverless runtime**: One command to deploy all of your workflows and agents as REST APIs running on serverless infrastructure optimized for long-running agents and workflows. Support for synchronous and background invocation, streaming, and intermediate status included.
- **Cloud storage**: build stateful agents and workflows with built-in blob storage, SQL databases, and full-text + vector search namespaces -- all provisioned at runtime.
- **Tracing and observability**: Real-time tracing of all component inputs and outputs, tool calls, and LLM calls within your agents and workflows. Tools to visualize and debug all historic executions.
- **Collaboration**: Organize agents, workflows, and traces into projects and environments. Search and view traces to debug historical executions.
Unlike traditional serverless offerings, GenSX Cloud is optimized for long-running workflows. Free tier workflows can run up to 5 minutes and Pro tier workflows can run for up to 60 minutes.
All of this is available on a free tier for individuals.
## Serverless deployments
Serverless deployments allow you to turn your GenSX workflows and agents into APIs with a single command:
- **Generated REST APIs**: `gensx deploy` generates a REST API complete with schema and validation for every workflow in your project.
- **Long-running**: GenSX Cloud is optimized for long-running LLM workloads. Workflows can run up to 5 minutes on the free tier and 60 minutes on the Pro tier.
- **Fast cold starts**: on the order of ~100ms.
Serverless deployments are billed per-second, with 50,000 seconds included per month in the free tier for individuals.
Projects are deployed with a single CLI command:
```bash
$ npx gensx deploy ./src/workflows.ts
```
```bash
✔ Building workflow using Docker
✔ Generating schema
✔ Successfully built project
ℹ Using project name from gensx.yaml: support-tools
✔ Deploying project to GenSX Cloud (Project: support-tools)
✔ Successfully deployed project to GenSX Cloud
Dashboard: console/support-tools/default/workflows
Available workflows:
- ChatAgent
- TextToSQLWorkflow
- RAGWorkflow
- AnalyzeDiscordWorkflow
Project: support-tools
```
Each workflow is available via both a synchronous and asynchronous API:
```
// For synchronous and streaming calls:
https://api.gensx.com/org/{orgName}/projects/{projectName}/environments/{environmentName}/workflows/{workflowName}
// For running workflows async in the background
https://api.gensx.com/org/{orgName}/projects/{projectName}/environments/{environmentName}/workflows/{workflowName}/start
```
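These endpoint URLs can be built programmatically when calling workflows from your own services. The sketch below constructs the synchronous and background URLs shown above; the `Authorization` bearer header in the commented usage is an assumption for illustration, so check the serverless deployments reference for the exact auth scheme:

```typescript
// Build the workflow endpoint URLs documented above.
function workflowUrl(
  org: string,
  project: string,
  environment: string,
  workflow: string,
  background = false,
): string {
  const base =
    `https://api.gensx.com/org/${org}/projects/${project}` +
    `/environments/${environment}/workflows/${workflow}`;
  // The /start suffix runs the workflow asynchronously in the background
  return background ? `${base}/start` : base;
}

// Example call (not executed here; auth header is an assumption):
// const res = await fetch(workflowUrl("my-org", "support-tools", "default", "ChatAgent"), {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${process.env.GENSX_API_KEY}`,
//   },
//   body: JSON.stringify({ input: { prompt: "Hello" } }),
// });
```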
For more details see the full [serverless deployments reference](/docs/cloud/serverless-deployments).
## Cloud storage
GenSX Cloud includes runtime-provisioned storage to build stateful agents and workflows:
- **Blob storage**: Store and retrieve JSON and binary data for things like conversation history, agent memory, and audio and image generation.
- **SQL databases**: Runtime provisioned databases for scenarios like text-to-SQL.
- **Full-text + vector search**: Store and query vector embeddings for semantic search and retrieval augmented generation (RAG).
State can be long-lived and shared across workflows and agents, or it can be provisioned ephemerally on a per-request basis.
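The long-lived versus ephemeral distinction usually comes down to how you name the storage resource. A minimal sketch, assuming a naming convention of our own invention (not a GenSX API): shared state uses a stable name, while per-request state gets a unique suffix.

```typescript
import { randomUUID } from "node:crypto";

// Illustrative convention: stable names for shared state,
// per-request suffixed names for ephemeral state.
function stateName(scope: "shared" | "ephemeral", base: string): string {
  return scope === "shared" ? base : `${base}-${randomUUID()}`;
}

// e.g. a per-request scratch database for parsing one uploaded CSV:
//   const db = await useDatabase(stateName("ephemeral", "csv-import"));
// versus durable chat history shared across runs:
//   const blob = useBlob(`chats/${stateName("shared", threadId)}.json`);
```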
### Blob storage
GenSX Cloud provides blob storage for persisting unstructured data like JSON, text, and binary files. With the `useBlob` hook, you can easily store and retrieve data across workflow executions.
Common scenarios enabled by blob storage include:
- Persistent chat thread history.
- Simple memory implementations.
- Storing generated audio, video, and photo files.
```ts
import { useBlob } from "@gensx/storage";
// Store and retrieve data with the useBlob hook
const UpdateConversation = gensx.Component(
"UpdateConversation",
async ({ userInput, threadId }: UpdateConversationInput) => {
// Get access to a blob at a specific path
const blob = useBlob(`chats/${threadId}.json`);
// Load existing data (returns null if it doesn't exist)
const history = (await blob.getJSON()) ?? [];
// Add new data
history.push({ role: "user", content: userInput });
// Save updated data
await blob.putJSON(history);
return "Data stored successfully";
},
);
```
Blob storage automatically adapts between local development (using filesystem) and cloud deployment with zero configuration changes.
For more details see the full [storage components reference](/docs/component-reference/storage-components/blob-reference).
### SQL databases
GenSX Cloud provides SQLite-compatible databases powered by [Turso](https://turso.tech), enabling structured data storage with several properties important to agentic workloads:
- **Millisecond provisioning**: Databases are created on-demand in milliseconds, making them perfect for ephemeral workloads like parsing and querying user-uploaded CSVs or creating per-agent structured data stores.
- **Strong consistency**: All operations are linearizable, maintaining an ordered history, with writes fully serialized and subsequent writes awaiting transaction completion.
- **Zero configuration**: Like all GenSX storage components, databases work identically in both development and production.
- **Local development**: Uses libsql locally to enable a fast, isolated development loop without external dependencies.
```ts
import { useDatabase } from "@gensx/storage";
// Access a database with the useDatabase hook
const QueryTeamStats = gensx.Component(
"QueryTeamStats",
async ({ team }: QueryTeamStatsInput) => {
// Get access to a database (created on first use)
const db = await useDatabase("baseball");
// Execute SQL queries directly
const result = await db.execute("SELECT * FROM players WHERE team = ?", [
team,
]);
return result.rows; // Returns the query results
},
);
```
For more details see the full [storage components reference](/docs/component-reference/storage-components/database-reference).
### Full-text and vector search
GenSX Cloud provides vector and full-text search capabilities powered by [turbopuffer](https://turbopuffer.com/), enabling semantic search and retrieval augmented generation (RAG) with minimal setup:
- **Vector search**: Store and query high-dimensional vectors for semantic similarity search with millisecond-level latency, perfect for RAG applications and finding content based on meaning rather than exact matches.
- **Full-text search**: Built-in BM25 search engine for string and string array fields, enabling traditional keyword search with low latency.
- **Hybrid search**: Combine vector similarity with full-text BM25 search to get both semantically relevant results and exact keyword matches in a single query.
- **Rich filtering**: Apply metadata filters to narrow down search results based on categories, timestamps, or any custom attributes, enhancing precision and relevance.
```ts
import { useNamespace } from "@gensx/storage";
import { embed } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
// Perform semantic search with the useNamespace hook
const SearchDocuments = gensx.Component(
"SearchDocuments",
async ({ query }) => {
// Get access to a vector search namespace
const namespace = await useNamespace("documents");
// Generate an embedding for the query
const { embedding } = await embed({
model: openai.embedding("text-embedding-3-small"),
value: query,
});
// Search for similar documents
const results = await namespace.query({
vector: embedding,
topK: 5,
});
return results.map((r) => r.attributes?.title);
},
);
```
> **Note**: Unlike blob storage and SQL databases, vector search doesn't have a local development implementation. When using `useSearch` locally, you'll connect to the cloud service.
For more details see the full [storage components reference](/docs/component-reference/storage-components/search-reference).
## Observability
GenSX Cloud provides comprehensive tracing and observability for all your workflows and agents.

- **Complete execution traces**: Every workflow execution generates a detailed trace that captures the entire flow from start to finish, allowing you to understand exactly what happened during execution.
- **Comprehensive component visibility**: Each component in your workflow automatically records its inputs and outputs, including:
- All LLM calls with full prompts, parameters, and responses
- Every tool invocation with input arguments and return values
- All intermediate steps and state changes in your agents and workflows
- **Real-time monitoring**: Watch your workflows execute step by step in real time, which is especially valuable for debugging long-running agents or complex multi-step workflows.
- **Historical execution data**: Access and search through all past executions to diagnose issues, analyze performance patterns, and understand user interactions over time.
- **Project and environment organization**: Traces are automatically organized by project (a collection of related workflows in a codebase) and environment (such as development, staging, or production), making it easy to find relevant executions.
```ts
// Traces are automatically captured when workflows are executed
// No additional instrumentation required
const result = await MyWorkflow({ input: "some query" });
```
The trace viewer provides multiple ways to analyze workflow execution:
- **Timeline view**: See how long each component took and their sequence of execution
- **Component tree**: Navigate the hierarchical structure of your workflow
- **Input/output inspector**: Examine the exact data flowing between components
- **Error highlighting**: Quickly identify where failures occurred

For more details see the full [observability reference](/docs/cloud/observability).
## Local development
GenSX provides a seamless development experience that mirrors the cloud environment, allowing you to build and test your workflows locally before deployment:
### Development server
The `gensx start` command launches a local development server that:
- Compiles your TypeScript workflows on the fly
- Automatically generates schemas for your workflows
- Creates local REST API endpoints identical to the cloud endpoints
- Hot-reloads your code when files change
- Provides the same API shape locally as in production
```bash
# Start the development server with a TypeScript file
npx gensx start ./src/workflows.ts
```
When you start the development server, you'll see something like this:
```bash
🚀 Starting GenSX Dev Server...
ℹ Starting development server...
✔ Compilation completed
✔ Generating schema
🚀 GenSX Dev Server running at http://localhost:1337
🧪 Swagger UI available at http://localhost:1337/swagger-ui
📋 Available workflows:
- MyWorkflow: http://localhost:1337/workflows/MyWorkflow
✅ Server is running. Press Ctrl+C to stop.
```
### Local storage providers
GenSX provides local implementations for most storage providers, enabling development without cloud dependencies:
- **BlobProvider**: Uses local filesystem storage (`.gensx/blobs`) for development
- **DatabaseProvider**: Uses local SQLite databases (`.gensx/databases`) for development
- **SearchProvider**: Connects to the cloud vector search service even in development mode
The local APIs mirror the cloud APIs exactly, so code that works locally will work identically when deployed:
```ts
// This component works the same locally and in the cloud
const SaveData = gensx.Component(
"SaveData",
async ({ key, data }: { key: string; data: any }) => {
// Blob storage works the same locally (filesystem) and in cloud
const blob = useBlob(`data/${key}.json`);
await blob.putJSON(data);
return null;
},
);
```
For more details see the full [local development reference](/docs/cloud/local-development).
## Projects & environments
GenSX Cloud organizes your workflows and deployments using a flexible structure of projects and environments:
**Projects** are a collection of related workflows that are deployed together, typically corresponding to a codebase or application. Projects help you organize and manage your AI components as cohesive units.
Projects are defined by the `projectName` field in your `gensx.yaml` configuration file at the root of your codebase:
```yaml
# gensx.yaml
projectName: my-chatbot-app
```
**Environments** are sub-groupings within a project that allow you to deploy multiple instances of the same workflows with different configuration. This supports the common development pattern of separating dev, staging, and production environments.
```bash
# Deploy to the default environment
npx gensx deploy ./src/workflows.ts
# Deploy to a specific environment
npx gensx deploy ./src/workflows.ts --env production
```
Each environment can have its own configuration and environment variables to match the rest of your deployed infrastructure.
Traces and observability data are also separated by project and environment, making it easier to:
- Distinguish between development testing and production traffic
- Isolate and debug issues specific to a particular environment
- Compare performance or behavior between environments
This organizational structure is designed to be flexible and adaptable, allowing you to customize it to fit with the rest of your development, testing, and deployment lifecycle.
For more details see the full [projects and environments reference](/docs/cloud/projects-environments).
For enterprise needs, [contact us](mailto:contact@gensx.com).
## Get started
Ready to build AI agents and workflows with GenSX Cloud? Follow our step-by-step [quickstart guide](/docs/quickstart) to create and deploy your first project in minutes:
1. Install the GenSX CLI: `npm install -g gensx`
2. Create a new project: `gensx new my-project`
3. Run it locally: `gensx start src/workflows.ts`
4. Deploy to the cloud: `gensx deploy src/workflows.ts`
# gensx start
The `gensx start` command starts a local development server that enables you to test and debug your GenSX workflows.
## Usage
```bash
gensx start <file> [options]
```
## Arguments
| Argument | Description |
| -------- | ------------------------------------------------------- |
| `<file>` | The workflow file to serve (e.g., `src/workflows.tsx`). |
## Options
| Option | Description |
| --------------- | ----------------------------- |
| `--port <port>` | Port to run the server on. |
| `-q, --quiet` | Suppress output. |
| `-h, --help` | Display help for the command. |
## Description
This command starts a local development server that:
- Watches your workflow file for changes and automatically reloads
- Provides a web interface to test and debug your workflows
- Simulates the cloud environment locally
- Runs your workflow components in a development mode
The development server includes:
- A web UI for testing your workflows
- Real-time logs and execution visibility
- Access to the GenSX development dashboard
## Examples
```bash
# Start server with a specific workflow file
gensx start src/workflows.ts
# Start server with minimal output
gensx start src/workflows.ts --quiet
# Start server on port 3000
gensx start src/workflows.ts --port 3000
```
## Notes
- The server runs on port 1337 by default
- You can access the development UI at `http://localhost:1337/swagger-ui`
- Environment variables from your local environment are available to the workflow
- For more complex environment variable setups, consider using a `.env` file in your project root
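A minimal `.env` file in the project root might look like this (the variable names are illustrative, not required by GenSX):

```bash
# .env — picked up from the project root during local development
OPENAI_API_KEY=your-api-key
LOG_LEVEL=debug
```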
# gensx run
The `gensx run` command executes a workflow that has been deployed to GenSX Cloud. By default it infers your project from the `gensx.yaml` file in the current working directory.
## Usage
```bash
gensx run <workflow> [options]
```
## Arguments
| Argument | Description |
| ------------ | ---------------------------- |
| `<workflow>` | Name of the workflow to run. |
## Options
| Option | Description |
| ---------------------- | ------------------------------------------------------------ |
| `-i, --input <input>` | Input to pass to the workflow (as JSON). |
| `--no-wait` | Do not wait for the workflow to finish (run asynchronously). |
| `-p, --project <name>` | Project name where the workflow is deployed. |
| `--env <name>` | Environment name where the workflow is deployed. |
| `-o, --output <file>` | Output file to write the workflow result to. |
| `-y, --yes` | Automatically answer yes to all prompts. |
| `-h, --help` | Display help for the command. |
## Description
This command triggers execution of a deployed workflow on GenSX Cloud with the specified input. By default, it waits for the workflow to complete and displays the result.
When running a workflow, you can:
- Provide input data as JSON
- Choose whether to wait for completion or run asynchronously
- Save the output to a file
## Examples
```bash
# Run a workflow with no input
gensx run MyWorkflow
# Run a workflow with JSON input
gensx run MyWorkflow --input '{"text": "Hello, world!"}'
# Run a workflow in a specific project and environment
gensx run MyWorkflow --project my-project --env prod
# Run a workflow asynchronously (don't wait for completion)
gensx run MyWorkflow --no-wait
# Run a workflow and save output to a file
gensx run MyWorkflow --output result.json
```
## Notes
- You must be logged in to GenSX Cloud to run workflows (`gensx login`)
- The workflow must have been previously deployed using `gensx deploy`
- When using `--input`, the input must be valid JSON
- When using `--no-wait`, the command returns immediately with a workflow ID that can be used to check status later
- Error handling: if the workflow fails, the command will return with a non-zero exit code and display the error
# gensx new
The `gensx new` command creates a new GenSX project with a predefined structure and dependencies.
## Usage
```bash
gensx new <project-directory> [options]
```
## Arguments
| Argument | Description |
| --------------------- | ---------------------------------------------------------------------------- |
| `<project-directory>` | Directory to create the project in. If it doesn't exist, it will be created. |
## Options
| Option | Description |
| -------------------------- | ----------------------------------------------------------------------------------------------- |
| `-t, --template <type>` | Template to use. Currently supports `ts` (TypeScript). |
| `-f, --force` | Overwrite existing files in the target directory. |
| `--skip-ide-rules` | Skip IDE rules selection. |
| `--ide-rules <rules>` | Comma-separated list of IDE rules to install. Options: `cline`, `windsurf`, `claude`, `cursor`. |
| `-d, --description <description>` | Optional project description. |
| `-h, --help` | Display help for the command. |
## Description
This command scaffolds a new GenSX project with the necessary files and folder structure. It sets up:
- Project configuration files (`package.json`, `tsconfig.json`)
- Basic project structure with example workflows
- Development dependencies
- IDE integrations based on selected rules
## Examples
```bash
# Create a basic project
gensx new my-gensx-app
# Create a project with a specific template and description
gensx new my-gensx-app --template ts --description "My AI workflow app"
# Create a project with specific IDE rules
gensx new my-gensx-app --ide-rules cursor,claude
# Force create even if directory has existing files
gensx new my-gensx-app --force
```
## Notes
- If no template is specified, `ts` (TypeScript) is used by default.
- The command will install all required dependencies, so make sure you have npm installed.
- After creation, you can navigate to the project directory and start the development server with `gensx start`.
# gensx login
The `gensx login` command authenticates you with GenSX Cloud, allowing you to deploy and run workflows remotely.
## Usage
```bash
gensx login
```
## Description
When you run this command, it will:
1. Open your default web browser to the GenSX authentication page
2. Prompt you to log in with your GenSX account or create a new one
3. Store your authentication credentials locally for future CLI commands
After successful login, you can use other commands that require authentication, such as `deploy` and `run`.
## Examples
```bash
# Log in to GenSX Cloud
gensx login
```
## Notes
- Your authentication token is stored in your user directory (typically `~/.gensx/config.json`)
- The token is valid until you log out or revoke it from the GenSX dashboard
- If you're behind a corporate firewall or using strict network policies, ensure that outbound connections to `api.gensx.com` are allowed
# GenSX CLI reference
The GenSX command-line interface (CLI) provides a set of commands to help you build, deploy, and manage your GenSX applications.
## Installation
The GenSX CLI is included when you install the main GenSX package:
```bash
npm install -g gensx
```
## Available commands
### Auth
| Command | Description |
| -------------------------------------- | --------------------- |
| [`gensx login`](./cli-reference/login) | Log in to GenSX Cloud |
### Development
| Command | Description |
| -------------------------------------- | -------------------------------- |
| [`gensx new`](./cli-reference/new) | Create a new GenSX project |
| [`gensx start`](./cli-reference/start) | Start a local development server |
| [`gensx build`](./cli-reference/build) | Build a workflow for deployment |
### Deployment & Execution
| Command | Description |
| ---------------------------------------- | -------------------------------- |
| [`gensx deploy`](./cli-reference/deploy) | Deploy a workflow to GenSX Cloud |
| [`gensx run`](./cli-reference/run) | Run a workflow on GenSX Cloud |
### Environment Management
| Command | Description |
| ---------------------------------------------------- | ------------------------------------ |
| [`gensx env`](./cli-reference/env/show) | Show the current environment details |
| [`gensx env create`](./cli-reference/env/create) | Create a new environment |
| [`gensx env ls`](./cli-reference/env/ls) | List all environments for a project |
| [`gensx env select`](./cli-reference/env/select) | Select an environment as active |
| [`gensx env unselect`](./cli-reference/env/unselect) | Unselect the current environment |
### Project Management
| Command | Description |
| -------------------------------------------------------- | -------------------- |
| [`gensx project`](./cli-reference/project/show) | Show project details |
| [`gensx project create`](./cli-reference/project/create) | Create a new project |
| [`gensx project ls`](./cli-reference/project/ls) | List all projects |
### Examples
| Command | Description |
| -------------------------------------------------------- | --------------------------- |
| [`gensx examples`](./cli-reference/examples) | List all available examples |
| [`gensx examples clone`](./cli-reference/examples/clone) | Clone an example project |
## Common Workflows
### Starting a New Project
You can start a new project either from scratch or by cloning an example:
#### Option 1: Use a starter project
```bash
# Log in to GenSX Cloud
gensx login
# Create a new project
gensx new my-project
cd my-project
# Start local development
gensx start src/workflows.ts
```
#### Option 2: Clone an example
```bash
# Log in to GenSX Cloud
gensx login
# See available examples
gensx examples
# Clone an example project
gensx examples clone chat-ux my-chat-app
cd my-chat-app
# Follow the README for setup instructions
```
### Managing Environments
```bash
# Create and switch to a development environment
gensx env create dev
gensx env select dev
# View current environment
gensx env
```
### Deploying and Running Workflows
```bash
# Build and deploy your workflow
gensx deploy src/workflows.ts
# Run a workflow
gensx run my-workflow --input '{"message": "Hello, world!"}'
```
For detailed information about each command, please refer to the corresponding documentation pages.
# gensx deploy
The `gensx deploy` command uploads and deploys a workflow to GenSX Cloud, making it available for remote execution.
## Usage
```bash
gensx deploy <file> [options]
```
## Arguments
| Argument | Description |
| -------- | ------------------------------------------------------------------------------- |
| `<file>` | File to deploy. This should be a TypeScript file that exports a GenSX workflow. |
## Options
| Option | Description |
| --------------------------- | ---------------------------------------------------------------------------- |
| `-e, --env-var <KEY=value>` | Environment variable to include with deployment. Can be used multiple times. |
| `-p, --project <name>` | Project name to deploy to. |
| `--env <name>` | Environment name to deploy to. |
| `-y, --yes` | Automatically answer yes to all prompts. |
| `-h, --help` | Display help for the command. |
## Description
This command:
1. Builds your workflow
2. Uploads it to GenSX Cloud
3. Creates or updates the deployment
4. Sets up any environment variables specified
After successful deployment, your workflow will be available for remote execution via the GenSX Cloud console or through the `gensx run` command.
## Examples
```bash
# Deploy a workflow
gensx deploy src/workflows.ts
# Deploy to a specific project and environment
gensx deploy src/workflows.ts --project my-project --env production
# Deploy with environment variables
gensx deploy src/workflows.ts -e API_KEY=abc123 -e DEBUG=true
# Deploy with an environment variable taken from your local environment
gensx deploy src/workflows.ts -e OPENAI_API_KEY
```
## Notes
- You must be logged in to GenSX Cloud to deploy (`gensx login`)
- `gensx deploy` requires Docker to be running
- If your workflow requires API keys or other secrets, provide them using the `-e` or `--env-var` option
- For environment variables without a specified value, the CLI will use the value from your local environment
- After deployment, you can manage your workflows from the GenSX Cloud console
- The deployment process automatically handles bundling dependencies
# gensx build
The `gensx build` command compiles and bundles a GenSX workflow for deployment to GenSX Cloud.
## Usage
```bash
gensx build <file> [options]
```
## Arguments
| Argument | Description |
| -------- | ---------------------------------------------------------------------------------------------------------------- |
| `<file>` | Workflow file to build (e.g., `src/workflows.ts`). This should export an object with one or more GenSX workflows. |
## Options
| Option | Description |
| ----------------------- | ----------------------------------------------- |
| `-o, --out-dir <dir>` | Output directory for the built files. |
| `-t, --tsconfig <file>` | Path to a custom TypeScript configuration file. |
| `-h, --help` | Display help for the command. |
## Description
This command builds your GenSX workflow into an optimized bundle that can be deployed to GenSX Cloud. It:
- Transpiles TypeScript to JavaScript
- Bundles all dependencies
- Optimizes the code for production
- Prepares the workflow for deployment
After building, the command outputs the path to the bundled file, which can be used with the [`gensx deploy`](/docs/cli-reference/deploy) command.
## Examples
```bash
# Build a workflow with default options
gensx build src/workflows.ts
# Build a workflow with a custom output directory
gensx build src/workflows.ts --out-dir ./dist
# Build a workflow with a custom TypeScript configuration
gensx build src/workflows.ts --tsconfig ./custom-tsconfig.json
```
## Notes
- The build process requires that your workflow file exports an object with one or more GenSX workflows.
- `gensx build` requires Docker to be running
- If no output directory is specified, the build files will be placed in a `.gensx` directory
- The build process does not include environment variables - these should be provided during deployment
# Search reference
API reference for GenSX Cloud search components. Search is powered by turbopuffer, and their documentation for [query](https://turbopuffer.com/docs/query) and [write](https://turbopuffer.com/docs/write) operations is a useful reference to augment this document.
## Installation
```bash
npm install @gensx/storage
```
## useSearch
Hook that provides access to vector search for a specific namespace.
### Import
```tsx
import { useSearch } from "@gensx/storage";
```
### Signature
```tsx
function useSearch(
name: string,
options?: SearchStorageOptions,
): Promise<Namespace>;
```
### Parameters
| Parameter | Type | Default | Description |
| --------- | ----------------------------------------------- | -------- | --------------------------------- |
| `name` | `string` | Required | The namespace name to access |
| `options` | [`SearchStorageOptions`](#searchstorageoptions) | `{}` | Optional configuration properties |
### Returns
Returns a namespace object with methods to interact with vector search.
### Example
```tsx
// Simple usage
const namespace = await useSearch("documents");
const results = await namespace.query({
vector: queryEmbedding,
includeAttributes: true,
});
// With explicit configuration
const configuredNamespace = await useSearch("documents", {
  project: "my-project",
  environment: "production",
});
```
## Namespace methods
The namespace object returned by `useSearch` provides these methods:
### write
Inserts, updates, or deletes vectors in the namespace.
```tsx
async write(options: WriteParams): Promise<{ message: string; rowsAffected: number }>
```
#### Parameters
| Parameter | Type | Default | Description |
| ---------------- | ---------------- | ------- | ------------------------------------------- |
| `upsertColumns` | `UpsertColumns` | none | Column-based format for upserting documents |
| `upsertRows` | `UpsertRows` | none | Row-based format for upserting documents |
| `patchColumns` | `PatchColumns` | none | Column-based format for patching documents |
| `patchRows` | `PatchRows` | none | Row-based format for patching documents |
| `deletes` | `ID[]` | none | Array of document IDs to delete |
| `deleteByFilter` | `Filter` | none | Filter to match documents for deletion |
| `distanceMetric` | `DistanceMetric` | none | Distance metric for similarity calculations |
| `schema` | `Schema` | none | Optional schema definition for attributes |
#### Example
```tsx
// Upsert documents in column-based format
const result = await namespace.write({
upsertColumns: {
id: ["doc-1", "doc-2"],
vector: [
[0.1, 0.2, 0.3],
[0.4, 0.5, 0.6],
],
text: ["Document 1", "Document 2"],
category: ["article", "blog"],
},
distanceMetric: "cosine_distance",
schema: {
text: { type: "string" },
category: { type: "string" },
},
});
console.log(result); // { message: "Successfully wrote 2 rows", rowsAffected: 2 }
// Upsert documents in row-based format
await namespace.write({
upsertRows: [
{
id: "doc-1",
vector: [0.1, 0.2, 0.3],
text: "Document 1",
category: "article",
},
{
id: "doc-2",
vector: [0.4, 0.5, 0.6],
text: "Document 2",
category: "blog",
},
],
distanceMetric: "cosine_distance",
});
// Delete documents by ID
await namespace.write({
deletes: ["doc-1", "doc-2"],
});
// Delete documents by filter
await namespace.write({
deleteByFilter: [
"And",
[
["category", "Eq", "article"],
["createdAt", "Lt", "2023-01-01"],
],
],
});
// Patch documents (update specific fields)
await namespace.write({
patchRows: [
{
id: "doc-1",
category: "updated-category",
},
],
});
```
#### Return value
Returns an object with a success message and the number of rows affected by the operation:
```tsx
{
message: "Successfully wrote 2 rows",
rowsAffected: 2
}
```
### query
Searches for similar vectors based on a query vector or other ranking criteria.
```tsx
async query(options: QueryOptions): Promise<QueryResults>
```
#### Parameters
| Parameter | Type | Default | Description |
| ------------------- | --------------------- | ------- | ---------------------------------------- |
| `rankBy` | `RankBy` | none | Vector, text, or attribute-based ranking |
| `topK` | `number` | none | Number of results to return |
| `includeAttributes` | `boolean \| string[]` | `['id']` | Include all attributes or specified ones |
| `filters` | `Filter` | none | Metadata filters |
| `aggregateBy` | `AggregateBy` | none | Aggregate results by specified fields |
| `consistency` | `Consistency` | none | Consistency level for reads |
#### Example
```tsx
const results = await namespace.query({
topK: 10, // Number of results to return
includeAttributes: true, // Include all attributes or specific ones
filters: [ // Optional metadata filters
"And",
[
["category", "Eq", "article"],
["createdAt", "Gte", "2023-01-01"]
]
],
rankBy: ["vector", "ANN", [0.1, 0.2, 0.3, ...]], // Vector similarity search
// OR
rankBy: ["text", "BM25", "search query"], // Text search
// OR
rankBy: ["importance", "desc"], // Attribute-based ranking
});
```
#### Return value
Returns a `QueryResults` object with an array of matched documents:
```tsx
{
rows: [
{
id: "doc-1", // Document ID
$dist: 0.13, // Distance score (lower is more similar for most metrics)
vector: number[], // If specified in includeAttributes
text: "Document content", // Other attributes specified in includeAttributes
category: "article",
createdAt: "2023-07-15"
},
// ...more results
],
aggregations: { // Aggregation results (if aggregateBy was specified)
"numberOfDocuments": 100,
}
}
```
### getSchema
Retrieves the current schema for the namespace.
```tsx
async getSchema(): Promise<Schema>
```
#### Example
```tsx
const schema = await namespace.getSchema();
console.log(schema);
// {
// text: "string",
// category: "string",
// createdAt: "string"
// }
```
### updateSchema
Updates the schema for the namespace.
```tsx
async updateSchema(options: { schema: Schema }): Promise<Schema>
```
#### Parameters
| Parameter | Type | Description |
| --------- | -------- | --------------------- |
| `schema` | `Schema` | New schema definition |
#### Example
```tsx
const updatedSchema = await namespace.updateSchema({
schema: {
text: "string",
category: "string",
createdAt: "string",
newField: "number", // Add new field
tags: "string[]", // Add array field
},
});
```
#### Return value
Returns the updated schema.
## SearchClient
The `SearchClient` class provides a way to interact with GenSX vector search capabilities outside of the GenSX workflow context, such as from regular Node.js applications or server endpoints.
### Import
```tsx
import { SearchClient } from "@gensx/storage";
```
### Constructor
```tsx
constructor(options?: SearchStorageOptions);
```
#### Parameters
| Parameter | Type | Default | Description |
| --------- | ----------------------------------------------- | ------- | --------------------------------- |
| `options` | [`SearchStorageOptions`](#searchstorageoptions) | `{}` | Optional configuration properties |
#### Example
```tsx
// Default client
const searchClient = new SearchClient();
// With configuration
const configuredClient = new SearchClient({
  project: "my-project",
  environment: "production",
});
```
### Methods
#### getNamespace
Get a namespace instance and ensure it exists first.
```tsx
async getNamespace(name: string): Promise<Namespace>
```
##### Example
```tsx
const namespace = await searchClient.getNamespace("products");
// Then use the namespace to upsert or query vectors
await namespace.write({
upsertRows: [
{
id: "product-1",
vector: [0.1, 0.2, 0.3, ...],
name: "Product 1",
category: "electronics"
}
],
distanceMetric: "cosine_distance"
});
```
#### ensureNamespace
Create a namespace if it doesn't exist.
```tsx
async ensureNamespace(name: string): Promise<{ created: boolean }>
```
##### Example
```tsx
const { created } = await searchClient.ensureNamespace("products");
if (created) {
console.log("Namespace was created");
}
```
#### listNamespaces
List all namespaces.
```tsx
async listNamespaces(options?: {
prefix?: string;
limit?: number;
cursor?: string;
}): Promise<{
namespaces: { name: string; createdAt: Date }[];
nextCursor?: string;
}>
```
##### Example
```tsx
const { namespaces, nextCursor } = await searchClient.listNamespaces();
console.log("Available namespaces:", namespaces.map(ns => ns.name)); // ["products", "customers", "orders"]
```
#### deleteNamespace
Delete a namespace.
```tsx
async deleteNamespace(name: string): Promise<{ deleted: boolean }>
```
##### Example
```tsx
const { deleted } = await searchClient.deleteNamespace("temp-namespace");
if (deleted) {
console.log("Namespace was removed");
}
```
#### namespaceExists
Check if a namespace exists.
```tsx
async namespaceExists(name: string): Promise<boolean>
```
##### Example
```tsx
if (await searchClient.namespaceExists("products")) {
console.log("Products namespace exists");
} else {
console.log("Products namespace doesn't exist yet");
}
```
### Usage in applications
The SearchClient is particularly useful when you need to access vector search functionality from:
- Regular Express.js or Next.js API routes
- Background jobs or workers
- Custom scripts or tools
- Any Node.js application outside the GenSX workflow context
```tsx
// Example: Using SearchClient in an Express handler
import express from "express";
import { SearchClient } from "@gensx/storage";
import { OpenAI } from "openai";
const app = express();
const searchClient = new SearchClient();
const openai = new OpenAI();
app.post("/api/search", async (req, res) => {
try {
const { query } = req.body;
// Generate embedding for the query
const embedding = await openai.embeddings.create({
model: "text-embedding-3-small",
input: query,
});
// Search for similar documents
const namespace = await searchClient.getNamespace("documents");
const results = await namespace.query({
rankBy: ["vector", "ANN", embedding.data[0].embedding],
topK: 5,
includeAttributes: true,
});
res.json(results);
} catch (error) {
console.error("Search error:", error);
res.status(500).json({ error: "Search error" });
}
});
app.listen(3000, () => {
console.log("Server running on port 3000");
});
```
## Filter operators
Filters use a structured array format with the following pattern:
```tsx
// Basic filter structure
[
"Operation", // And, Or, Not
[
// Array of conditions
["field", "Operator", value],
],
];
```
Available operators:
| Operator | Description | Example |
| ------------- | ---------------------- | -------------------------------------------- |
| `Eq` | Equals | `["field", "Eq", "value"]` |
| `Ne` | Not equals | `["field", "Ne", "value"]` |
| `Gt` | Greater than | `["field", "Gt", 10]` |
| `Gte` | Greater than or equal | `["field", "Gte", 10]` |
| `Lt` | Less than | `["field", "Lt", 10]` |
| `Lte` | Less than or equal | `["field", "Lte", 10]` |
| `In` | In array | `["field", "In", ["a", "b"]]` |
| `Nin` | Not in array | `["field", "Nin", ["a", "b"]]` |
| `Contains` | String contains | `["field", "Contains", "text"]` |
| `ContainsAny` | Contains any of values | `["tags", "ContainsAny", ["news", "tech"]]` |
| `ContainsAll` | Contains all values | `["tags", "ContainsAll", ["imp", "urgent"]]` |
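Conditions can also nest: the first element of a filter can be another `And`/`Or`/`Not` group. A sketch in plain TypeScript (the `Filter` and `Condition` type aliases are simplified here for illustration, not exported by `@gensx/storage`):

```typescript
// Simplified types mirroring the documented [operation, conditions] format.
type Condition = [string, string, unknown];
type Filter = [string, (Condition | Filter)[]];

// Match 2023 articles, or anything tagged "featured".
const filter: Filter = [
  "Or",
  [
    [
      "And",
      [
        ["category", "Eq", "article"],
        ["createdAt", "Gte", "2023-01-01"],
      ],
    ],
    ["tags", "ContainsAny", ["featured"]],
  ],
];

console.log(filter[0]); // prints "Or"
```

A value like this is what you pass as the `filters` parameter of `namespace.query`.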
## RankBy options
The `rankBy` parameter can be used in two primary ways:
### Attribute-based ranking
Sorts by a field in ascending or descending order:
```tsx
// Sort by the createdAt attribute in ascending order
rankBy: ["createdAt", "asc"];
// Sort by price in descending order (highest first)
rankBy: ["price", "desc"];
```
### Text-based ranking
For full-text search relevance scoring:
```tsx
// Basic BM25 text ranking
rankBy: ["text", "BM25", "search query"];
// BM25 with multiple search terms
rankBy: ["text", "BM25", ["term1", "term2"]];
// Combined text ranking strategies
rankBy: [
"Sum",
[
["text", "BM25", "search query"],
["text", "BM25", "another term"],
],
];
// Weighted text ranking (multiply BM25 score by 0.5)
rankBy: ["Product", [["text", "BM25", "search query"], 0.5]];
// Alternative syntax for weighted ranking
rankBy: ["Product", [0.5, ["text", "BM25", "search query"]]];
```
Use these options to fine-tune the relevance and ordering of your search results.
## SearchStorageOptions
Configuration properties for search operations.
| Prop | Type | Default | Description |
| ------------- | -------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `project` | `string` | Auto-detected | Project to use for cloud storage. If you don't set this, it'll first check your `GENSX_PROJECT` environment variable, then look for the project name in your local `gensx.yaml` file. |
| `environment` | `string` | Auto-detected | Environment to use for cloud storage. If you don't set this, it'll first check your `GENSX_ENV` environment variable, then use whatever environment you've selected in the CLI with `gensx env select`. |
# Database reference
API reference for GenSX Cloud SQL database components.
## Installation
```bash
npm install @gensx/storage
```
## useDatabase
Hook that provides access to a specific SQL database.
### Import
```tsx
import { useDatabase } from "@gensx/storage";
```
### Signature
```tsx
function useDatabase(
name: string,
options?: DatabaseStorageOptions,
): Promise<Database>;
```
### Parameters
| Parameter | Type | Description |
| --------- | --------------------------------------------------- | --------------------------------- |
| `name` | `string` | The database name to access |
| `options` | [`DatabaseStorageOptions`](#databasestorageoptions) | Optional configuration properties |
### Returns
Returns a database object with methods to interact with the SQL database.
### Example
```tsx
// Simple usage
const db = await useDatabase("users");
const result = await db.execute("SELECT * FROM users WHERE id = ?", [
"user-123",
]);
// With configuration
const cloudDb = await useDatabase("users", {
  kind: "cloud",
  project: "my-project",
  environment: "production",
});
```
## Database methods
The database object returned by `useDatabase` provides these methods:
### execute
Executes a single SQL statement with optional parameters.
```tsx
async execute(sql: string, params?: InArgs): Promise<DatabaseResult>
```
#### Parameters
| Parameter | Type | Description |
| --------- | -------- | ------------------------------------------ |
| `sql` | `string` | SQL statement to execute |
| `params` | `InArgs` | Optional parameters for prepared statement |
> `InArgs` can be provided as an array of values or as a record with named parameters. Values can be primitives (strings, numbers, booleans), `Uint8Array` instances, or `Date` objects.
#### Example
```tsx
// Query with parameters
const result = await db.execute("SELECT * FROM users WHERE email = ?", [
"user@example.com",
]);
// Insert data
await db.execute("INSERT INTO users (id, name, email) VALUES (?, ?, ?)", [
"user-123",
"John Doe",
"john@example.com",
]);
// Update data
await db.execute("UPDATE users SET last_login = ? WHERE id = ?", [
new Date().toISOString(),
"user-123",
]);
```
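To illustrate the note above, here are the two `InArgs` shapes side by side. The positional form is taken from the examples; the `:email`-style named placeholder is an assumption based on common SQLite client conventions, so verify it against your database before relying on it:

```typescript
// Positional parameters: an array bound to `?` placeholders in order.
const byPosition = {
  sql: "SELECT * FROM users WHERE email = ? AND active = ?",
  params: ["user@example.com", true],
};

// Named parameters: a record bound to named placeholders (syntax assumed).
const byName = {
  sql: "SELECT * FROM users WHERE email = :email AND active = :active",
  params: { email: "user@example.com", active: true },
};

console.log(Array.isArray(byPosition.params)); // true
console.log(Array.isArray(byName.params)); // false
```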
#### Return value
Returns a result object with the following properties:
```tsx
{
columns: string[]; // Column names from result set
rows: unknown[][]; // Array of result rows as arrays
rowsAffected: number; // Number of rows affected by statement
lastInsertId?: number; // ID of last inserted row (for INSERT statements)
}
```
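Because `rows` come back as positional arrays, pairing them with `columns` is a common step. This small helper is hypothetical (not part of `@gensx/storage`) and zips the two into keyed records:

```typescript
// Turn a { columns, rows } result into an array of keyed objects.
function rowsToObjects(
  columns: string[],
  rows: unknown[][],
): Record<string, unknown>[] {
  return rows.map((row) =>
    Object.fromEntries(columns.map((name, i) => [name, row[i]])),
  );
}

// Shaped like the result object documented above:
const result = {
  columns: ["id", "name"],
  rows: [["user-123", "John Doe"]],
  rowsAffected: 1,
};

console.log(rowsToObjects(result.columns, result.rows));
// [{ id: "user-123", name: "John Doe" }]
```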
### batch
Executes multiple SQL statements in a single transaction.
```tsx
async batch(statements: DatabaseStatement[]): Promise<DatabaseBatchResult>
```
#### Parameters
| Parameter | Type | Description |
| ------------ | --------------------- | ------------------------------------------------ |
| `statements` | `DatabaseStatement[]` | Array of SQL statements with optional parameters |
#### DatabaseStatement format
```tsx
{
sql: string; // SQL statement
params?: InArgs; // Optional parameters
}
```
#### Example
```tsx
const results = await db.batch([
{
sql: "INSERT INTO users (id, name) VALUES (?, ?)",
params: ["user-123", "John Doe"],
},
{
sql: "INSERT INTO user_preferences (user_id, theme) VALUES (?, ?)",
params: ["user-123", "dark"],
},
]);
```
#### Return value
Returns a result object containing an array of individual results:
```tsx
{
results: [
{
columns: [],
rows: [],
rowsAffected: 1,
lastInsertId: 42,
},
{
columns: [],
rows: [],
rowsAffected: 1,
lastInsertId: 43,
},
];
}
```
### executeMultiple
Executes multiple SQL statements as a script (without transaction semantics).
```tsx
async executeMultiple(sql: string): Promise<DatabaseBatchResult>
```
#### Parameters
| Parameter | Type | Description |
| --------- | -------- | ----------------------------------------------- |
| `sql` | `string` | Multiple SQL statements separated by semicolons |
#### Example
```tsx
const results = await db.executeMultiple(`
CREATE TABLE IF NOT EXISTS users (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_users_name ON users(name);
`);
```
#### Return value
Returns a result object containing an array of individual results, one for each statement executed.
### migrate
Executes SQL migration statements with foreign keys disabled.
```tsx
async migrate(sql: string): Promise<DatabaseBatchResult>
```
#### Parameters
| Parameter | Type | Description |
| --------- | -------- | ------------------------ |
| `sql` | `string` | SQL migration statements |
#### Example
```tsx
const results = await db.migrate(`
-- Migration v1: Initial schema
CREATE TABLE IF NOT EXISTS users (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
email TEXT UNIQUE,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
-- Migration v2: Add last_login field
ALTER TABLE users ADD COLUMN last_login TEXT;
`);
```
#### Return value
Returns a result object containing an array of individual results, one for each statement executed.
### getInfo
Retrieves metadata about the database.
```tsx
async getInfo(): Promise<DatabaseInfo>
```
#### Example
```tsx
const info = await db.getInfo();
console.log(info);
// {
// name: "users",
// size: 12800,
// lastModified: Date("2023-07-15T12:34:56Z"),
// tables: [
// {
// name: "users",
// columns: [
// {
// name: "id",
// type: "TEXT",
// notNull: true,
// primaryKey: true
// },
// {
// name: "name",
// type: "TEXT",
// notNull: true,
// primaryKey: false
// }
// ]
// }
// ]
// }
```
## DatabaseClient
The `DatabaseClient` class provides a way to interact with GenSX databases outside of the GenSX workflow context, such as from regular Node.js applications or server endpoints.
### Import
```tsx
import { DatabaseClient } from "@gensx/storage";
```
### Constructor
```tsx
constructor(options?: DatabaseStorageOptions)
```
#### Parameters
| Parameter | Type | Default | Description |
| --------- | --------------------------------------------------- | ------- | --------------------------------- |
| `options` | [`DatabaseStorageOptions`](#databasestorageoptions) | `{}` | Optional configuration properties |
#### Example
```tsx
// Default client (uses filesystem locally, cloud in production)
const dbClient = new DatabaseClient();
// Explicitly use filesystem storage
const localClient = new DatabaseClient({
kind: "filesystem",
rootDir: "./my-data",
});
// Explicitly use cloud storage
const cloudClient = new DatabaseClient({ kind: "cloud" });
```
### Methods
#### getDatabase
Get a database instance and ensure it exists first.
```tsx
async getDatabase(name: string): Promise<Database>
```
##### Example
```tsx
const db = await dbClient.getDatabase("users");
const results = await db.execute("SELECT * FROM users LIMIT 10");
```
#### ensureDatabase
Create a database if it doesn't exist.
```tsx
async ensureDatabase(name: string): Promise<{ created: boolean }>
```
##### Example
```tsx
const { created } = await dbClient.ensureDatabase("analytics");
if (created) {
console.log("Database was created");
}
```
#### listDatabases
List databases, with optional pagination.
```tsx
async listDatabases(options?: { limit?: number; cursor?: string }): Promise<{
databases: string[];
nextCursor?: string;
}>
```
##### Example
```tsx
const { databases, nextCursor } = await dbClient.listDatabases();
console.log("Available databases:", databases); // ["users", "products", "analytics"]
```
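When there are more databases than a single page returns, `nextCursor` lets you keep paging. A sketch of a drain loop (the `listAllDatabases` helper is illustrative, not part of the package; it only assumes the `listDatabases` signature shown above):

```typescript
// Page through a cursor-based list API, accumulating every database name.
// `client` is anything with a matching listDatabases method, such as a
// DatabaseClient instance.
async function listAllDatabases(client: {
  listDatabases(opts?: {
    cursor?: string;
  }): Promise<{ databases: string[]; nextCursor?: string }>;
}): Promise<string[]> {
  const all: string[] = [];
  let cursor: string | undefined;
  do {
    // Each call returns one page plus a cursor for the next page, if any
    const page = await client.listDatabases({ cursor });
    all.push(...page.databases);
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return all;
}
```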
#### deleteDatabase
Delete a database.
```tsx
async deleteDatabase(name: string): Promise<{ deleted: boolean }>
```
##### Example
```tsx
const { deleted } = await dbClient.deleteDatabase("temp-db");
if (deleted) {
console.log("Database was removed");
}
```
#### databaseExists
Check if a database exists.
```tsx
async databaseExists(name: string): Promise<boolean>
```
##### Example
```tsx
if (await dbClient.databaseExists("users")) {
console.log("Users database exists");
} else {
console.log("Users database doesn't exist yet");
}
```
### Usage in applications
The DatabaseClient is particularly useful when you need to access GenSX databases from:
- Regular Express.js or Next.js API routes
- Background jobs or workers
- Custom scripts or tools
- Any Node.js application outside the GenSX workflow context
```tsx
// Example: Using DatabaseClient in an Express handler
import express from "express";
import { DatabaseClient } from "@gensx/storage";
const app = express();
const dbClient = new DatabaseClient();
app.get("/api/users", async (req, res) => {
try {
const db = await dbClient.getDatabase("users");
const result = await db.execute("SELECT * FROM users");
res.json(result.rows);
} catch (error) {
console.error("Database error:", error);
res.status(500).json({ error: "Database error" });
}
});
app.listen(3000, () => {
console.log("Server running on port 3000");
});
```
## DatabaseStorageOptions
Configuration properties for database operations.
| Prop | Type | Default | Description |
| ------------- | ------------------------- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `kind` | `"filesystem" \| "cloud"` | Auto-detected | The storage backend to use. Defaults to filesystem when running locally and to cloud when deployed to the serverless runtime. |
| `rootDir` | `string` | `".gensx/databases"` | Root directory for storing database files (filesystem only) |
| `project` | `string` | Auto-detected | Project to use for cloud storage. If you don't set this, it'll first check your `GENSX_PROJECT` environment variable, then look for the project name in your local `gensx.yaml` file. |
| `environment` | `string` | Auto-detected | Environment to use for cloud storage. If you don't set this, it'll first check your `GENSX_ENV` environment variable, then use whatever environment you've selected in the CLI with `gensx env select`. |
# Blob storage reference
API reference for GenSX Cloud blob storage components.
## Installation
```bash
npm install @gensx/storage
```
## useBlob
Hook that provides access to blob storage for a specific key.
### Import
```tsx
import { useBlob } from "@gensx/storage";
```
### Signature
```tsx
function useBlob<T = unknown>(
  key: string,
  options?: BlobStorageOptions,
): Blob<T>;
```
### Parameters
| Parameter | Type | Description |
| --------- | ------------------------------------------- | --------------------------------- |
| `key` | `string` | The unique key for the blob |
| `options` | [`BlobStorageOptions`](#blobstorageoptions) | Optional configuration properties |
| `T` | Generic type | Type of the JSON data (optional) |
### Configuration Properties
| Prop | Type | Default | Description |
| --------------- | ------------------------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `kind` | `"filesystem" \| "cloud"` | Auto-detected | Storage backend to use. Defaults to filesystem when running locally and to cloud when deployed to the serverless runtime. |
| `rootDir` | `string` | `.gensx/blobs` | Root directory for filesystem storage |
| `defaultPrefix` | `string` | `undefined` | Optional prefix for all blob keys |
| `project` | `string` | Auto-detected | Project to use for cloud storage. If you don't set this, it'll first check your `GENSX_PROJECT` environment variable, then look for the project name in your local `gensx.yaml` file. |
| `environment` | `string` | Auto-detected | Environment to use for cloud storage. If you don't set this, it'll first check your `GENSX_ENV` environment variable, then use whatever environment you've selected in the CLI with `gensx env select`. |
### Returns
Returns a blob object with methods to interact with blob storage.
### Example
```tsx
// Simple usage
const blob = useBlob("users/123.json");
const profile = await blob.getJSON();
// With configuration
const blob = useBlob("users/123.json", {
kind: "cloud",
defaultPrefix: "app-data/",
});
```
## Blob methods
The blob object returned by `useBlob` provides these methods:
### JSON operations
```tsx
// Get JSON data
const data = await blob.getJSON(); // Returns null if not found
// Save JSON data
await blob.putJSON(data, options); // Returns { etag: string }
```
### String operations
```tsx
// Get string content
const text = await blob.getString(); // Returns null if not found
// Save string content
await blob.putString("Hello world", options); // Returns { etag: string }
```
### Binary operations
```tsx
// Get binary data with metadata
const result = await blob.getRaw(); // Returns null if not found
// Returns { content, contentType, etag, lastModified, size, metadata }
// Save binary data
await blob.putRaw(buffer, options); // Returns { etag: string }
```
### Stream operations
```tsx
// Get data as a stream
const stream = await blob.getStream();
// Save data from a stream
await blob.putStream(readableStream, options); // Returns { etag: string }
```
### Metadata operations
```tsx
// Check if blob exists
const exists = await blob.exists(); // Returns boolean
// Delete blob
await blob.delete();
// Get metadata
const metadata = await blob.getMetadata(); // Returns null if not found
// Update metadata
await blob.updateMetadata({
key1: "value1",
key2: "value2",
});
```
## Options object
Many methods accept an options object with these properties:
```tsx
{
contentType?: string, // MIME type of the content
etag?: string, // For optimistic concurrency control
metadata?: { // Custom metadata key-value pairs
[key: string]: string
}
}
```
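The `etag` is what enables optimistic concurrency: read a blob, modify it, and write back with the etag you read, so a concurrent writer's change isn't silently overwritten. A sketch of a read-modify-write retry loop (the `JsonBlobLike` interface and `updateJSON` helper are illustrative, not package exports; the sketch assumes a mismatched etag causes the put to reject, which you should verify against your version of `@gensx/storage`):

```typescript
// Minimal surface this helper needs; the object returned by useBlob provides
// getJSON/getRaw/putJSON with (roughly) these shapes.
interface JsonBlobLike<T> {
  getJSON(): Promise<T | null>;
  getRaw(): Promise<{ etag: string } | null>;
  putJSON(data: T, options?: { etag?: string }): Promise<{ etag: string }>;
}

// Read-modify-write with optimistic concurrency: send the etag we read so the
// write is rejected if another writer got in between, then retry from a fresh read.
async function updateJSON<T>(
  blob: JsonBlobLike<T>,
  update: (current: T | null) => T,
  maxAttempts = 3,
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await blob.getRaw(); // null (no etag) if the blob doesn't exist yet
    const current = await blob.getJSON();
    try {
      await blob.putJSON(update(current), { etag: raw?.etag });
      return;
    } catch (err) {
      if (attempt === maxAttempts) throw err; // conflict persisted; give up
    }
  }
}
```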
## BlobClient
The `BlobClient` class provides a way to interact with GenSX blob storage outside of the GenSX workflow context, such as from regular Node.js applications or server-side endpoints.
### Import
```tsx
import { BlobClient } from "@gensx/storage";
```
### Constructor
```tsx
constructor(options?: BlobStorageOptions)
```
#### Parameters
| Parameter | Type | Default | Description |
| --------- | ------------------------------------------- | ------- | --------------------------------- |
| `options` | [`BlobStorageOptions`](#blobstorageoptions) | `{}` | Optional configuration properties |
#### Example
```tsx
// Default client (uses filesystem locally, cloud in production)
const blobClient = new BlobClient();
// Explicitly use filesystem storage
const localClient = new BlobClient({
kind: "filesystem",
rootDir: "./my-data",
});
// Explicitly use cloud storage with a prefix
const cloudClient = new BlobClient({
kind: "cloud",
defaultPrefix: "app-data/",
});
```
### Methods
#### getBlob
Get a blob instance for a specific key.
```tsx
getBlob(key: string): Blob
```
##### Example
```tsx
const userBlob = blobClient.getBlob("users/123.json");
const profile = await userBlob.getJSON();
// Update the profile
profile.lastLogin = new Date().toISOString();
await userBlob.putJSON(profile);
```
#### listBlobs
List blobs, optionally filtered by key prefix.
```tsx
async listBlobs(options?: { prefix?: string; limit?: number; cursor?: string }): Promise<{
blobs: Array<{ key: string; lastModified: string; size: number }>;
nextCursor?: string;
}>
```
##### Example
```tsx
const { blobs, nextCursor } = await blobClient.listBlobs({
prefix: "chats",
});
console.log(
"Chat histories:",
blobs.map((blob) => blob.key),
); // ["chats/123.json", "chats/456.json"]
```
#### blobExists
Check if a blob exists.
```tsx
async blobExists(key: string): Promise<boolean>
```
##### Example
```tsx
if (await blobClient.blobExists("settings.json")) {
console.log("Settings file exists");
} else {
console.log("Need to create settings file");
}
```
#### deleteBlob
Delete a blob.
```tsx
async deleteBlob(key: string): Promise<{ deleted: boolean }>
```
##### Example
```tsx
const { deleted } = await blobClient.deleteBlob("temp/cache.json");
if (deleted) {
console.log("Cache file was deleted");
}
```
### Usage in applications
The BlobClient is particularly useful when you need to access blob storage from:
- Express.js or Next.js API routes
- Background jobs or workers
- Custom scripts or tools
- Any Node.js application outside the GenSX workflow context
```tsx
// Example: Using BlobClient in an Express handler
import express from "express";
import { BlobClient } from "@gensx/storage";
const app = express();
const blobClient = new BlobClient();
// Save user data endpoint
app.post("/api/users/:userId", async (req, res) => {
try {
const { userId } = req.params;
const userBlob = blobClient.getBlob(`users/${userId}.json`);
// Get existing profile or create new one
const existingProfile = (await userBlob.getJSON()) || {};
// Merge with updated data
const updatedProfile = {
...existingProfile,
...req.body,
updatedAt: new Date().toISOString(),
};
// Save the updated profile
await userBlob.putJSON(updatedProfile);
res.json({ success: true });
} catch (error) {
console.error("Error saving user data:", error);
res.status(500).json({ error: "Failed to save user data" });
}
});
app.listen(3000, () => {
console.log("Server running on port 3000");
});
```
## BlobStorageOptions
Configuration properties for blob storage operations.
| Prop | Type | Default | Description |
| --------------- | ------------------------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `kind` | `"filesystem" \| "cloud"` | Auto-detected | Storage backend to use. Defaults to filesystem when running locally and to cloud when deployed to the serverless runtime. |
| `rootDir` | `string` | `.gensx/blobs` | Root directory for filesystem storage |
| `defaultPrefix` | `string` | `undefined` | Optional prefix for all blob keys |
| `project` | `string` | Auto-detected | Project to use for cloud storage. If you don't set this, it'll first check your `GENSX_PROJECT` environment variable, then look for the project name in your local `gensx.yaml` file. |
| `environment` | `string` | Auto-detected | Environment to use for cloud storage. If you don't set this, it'll first check your `GENSX_ENV` environment variable, then use whatever environment you've selected in the CLI with `gensx env select`. |
# SQL database
GenSX's SQL database service provides zero-configuration SQLite databases. It enables you to create, query, and manage relational data without worrying about infrastructure or database administration. Because new databases can be provisioned in milliseconds, they are perfect for per-agent or per-workflow state.
Cloud databases are powered by [Turso](https://turso.tech), with several properties that make them ideal for AI agents and workflows:
- **Millisecond provisioning**: Databases are created on-demand in milliseconds, making them perfect for ephemeral workloads like parsing and querying user-uploaded CSVs or creating per-agent structured data stores.
- **Strong consistency**: All operations are linearizable, maintaining an ordered history, with writes fully serialized and subsequent writes awaiting transaction completion.
- **Zero configuration**: Like all GenSX storage components, databases work identically in both development and production environments with no setup required.
- **Local development**: Uses libsql locally to enable a fast, isolated development loop without external dependencies.
## Basic usage
To use SQL databases in your GenSX application:
1. Install the storage package:
```bash
npm install @gensx/storage
```
2. **Next.js Configuration** (if using Next.js): Add the following webpack configuration to your `next.config.ts` or `next.config.js` file:
```typescript
/** @type {import('next').NextConfig} */
const nextConfig = {
// ... other config options
webpack: (config: any) => {
// Ignore @libsql/client package for client-side builds
config.resolve.alias = {
...config.resolve.alias,
"@libsql/client": false,
};
return config;
},
// ... other config options
};
module.exports = nextConfig;
```
This configuration prevents bundling issues while allowing the storage hooks to work properly in server components and API routes. See the [client-side-tools example](https://github.com/gensx-inc/gensx/tree/main/examples/client-side-tools) for a complete implementation.
3. Access databases within your components using the `useDatabase` hook:
```ts
import { useDatabase } from "@gensx/storage";
const db = await useDatabase("my-database");
```
### Executing queries
The simplest way to interact with a database is by executing SQL queries:
```ts
import * as gensx from "@gensx/core";
import { useDatabase } from "@gensx/storage";
const QueryTeamStats = gensx.Component("QueryTeamStats", async ({ team }) => {
// Get access to a database (creates it if it doesn't exist)
const db = await useDatabase("baseball");
// Execute SQL queries with parameters
const result = await db.execute("SELECT * FROM players WHERE team = ?", [
team,
]);
// Access query results
console.log(result.columns); // Column names
console.log(result.rows); // Data rows
console.log(result.rowsAffected); // Number of rows affected
return result.rows;
});
```
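As the example above shows, `result.rows` comes back as positional arrays alongside `result.columns`. For LLM prompts or JSON API responses it's often handier to zip the two into plain objects. A small sketch (the `rowsToObjects` helper is illustrative, not part of the package):

```typescript
// Zip a positional query result (column names + array rows) into objects,
// e.g. (["id", "name"], [[1, "a"]]) -> [{ id: 1, name: "a" }].
function rowsToObjects(
  columns: string[],
  rows: unknown[][],
): Record<string, unknown>[] {
  return rows.map((row) =>
    Object.fromEntries(columns.map((col, i) => [col, row[i]])),
  );
}
```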
### Creating tables and initializing data
You can create database schema and populate it with data:
```ts
const InitializeDatabase = gensx.Component("InitializeDatabase", async () => {
const db = await useDatabase("baseball");
// Create table if it doesn't exist
await db.execute(`
CREATE TABLE IF NOT EXISTS baseball_stats (
player TEXT,
team TEXT,
position TEXT,
at_bats INTEGER,
hits INTEGER,
runs INTEGER,
home_runs INTEGER,
rbi INTEGER,
batting_avg REAL
)
`);
// Check if data already exists
const result = await db.execute("SELECT COUNT(*) FROM baseball_stats");
const count = result.rows[0][0] as number;
if (count === 0) {
// Insert sample data
await db.execute(`
INSERT INTO baseball_stats (player, team, position, at_bats, hits, runs, home_runs, rbi, batting_avg)
VALUES
('Marcus Bennett', 'Portland Pioneers', '1B', 550, 85, 25, 32, 98, 0.312),
('Ethan Carter', 'San Antonio Stallions', 'SS', 520, 92, 18, 24, 76, 0.298)
`);
}
return "Database initialized";
});
```
## Practical examples
### Text-to-SQL agent
One of the most powerful applications is building a natural language to SQL interface:
```ts
import * as gensx from "@gensx/core";
import { useDatabase } from "@gensx/storage";
import { generateText } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
import { tool } from "ai";
import { z } from "zod";
// Create a tool that executes SQL queries
const queryTool = tool({
description: "Execute a SQL query against the baseball database",
parameters: z.object({
query: z.string().describe("The SQL query to execute"),
}),
execute: async ({ query }) => {
const db = await useDatabase("baseball");
const result = await db.execute(query);
return JSON.stringify(result, null, 2);
},
});
// SQL Copilot component that answers questions using SQL
const SqlCopilot = gensx.Component("SqlCopilot", ({ question }) => {
return generateText({
messages: [
{
role: "system",
content: `You are a SQL assistant. The database has a baseball_stats table with
columns: player, team, position, at_bats, hits, runs, home_runs, rbi, batting_avg.
Use the execute_query tool to run SQL queries.`,
},
{ role: "user", content: question },
],
model: openai("gpt-4o-mini"),
tools: { execute_query: queryTool },
maxSteps: 10,
});
});
```
### Transactions with batch operations
For operations that need to be performed atomically, you can use batch operations:
```ts
const TransferFunds = gensx.Component(
"TransferFunds",
async ({ fromAccount, toAccount, amount }: TransferFundsInput) => {
const db = await useDatabase("banking");
try {
// Execute multiple statements as a transaction
const result = await db.batch([
{
sql: "UPDATE accounts SET balance = balance - ? WHERE account_id = ?",
params: [amount, fromAccount],
},
{
sql: "UPDATE accounts SET balance = balance + ? WHERE account_id = ?",
params: [amount, toAccount],
},
]);
return { success: true, rowsAffected: result.rowsAffected };
    } catch (error) {
      return {
        success: false,
        error: error instanceof Error ? error.message : String(error),
      };
    }
},
);
```
### Multi-statement scripts
For complex database changes, you can execute multiple statements at once:
```ts
const SetupUserSystem = gensx.Component("SetupUserSystem", async () => {
const db = await useDatabase("users");
// Execute a SQL script with multiple statements
await db.executeMultiple(`
CREATE TABLE IF NOT EXISTS users (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
email TEXT UNIQUE
);
CREATE TABLE IF NOT EXISTS user_preferences (
user_id TEXT PRIMARY KEY,
theme TEXT DEFAULT 'light',
notifications BOOLEAN DEFAULT 1,
FOREIGN KEY (user_id) REFERENCES users(id)
);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
`);
return "User system set up successfully";
});
```
### Database schema migrations
When you need to update your database schema, use migrations:
```ts
const MigrateDatabase = gensx.Component(
"MigrateDatabase",
async ({ version }) => {
const db = await useDatabase("app_data");
if (version === "v2") {
// Run migrations with foreign key checks disabled
await db.migrate(`
ALTER TABLE products ADD COLUMN category TEXT;
CREATE TABLE product_categories (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
description TEXT
);
`);
return "Database migrated to v2";
}
return "No migration needed";
},
);
```
## Development vs. production
GenSX SQL databases work identically in both local development and cloud environments:
- **Local development**: Databases are stored as SQLite files in the `.gensx/databases` directory by default
- **Cloud deployment**: Databases are automatically provisioned in the cloud
If you don't specify a `kind`, the framework auto-infers it for you based on the runtime environment.
No code changes are needed when moving from development to production.
## Use cases
### Data-backed agents
Create agents that can query and update structured data, using the components defined above:
```ts
const DataAnalyst = gensx.Component(
"DataAnalyst",
async ({ query }: DataAnalystInput) => {
// Initialize the database with the baseball stats
await InitializeDatabase();
// Use the SQL Copilot to answer the question
return await SqlCopilot({ question: query });
},
);
```
### User data storage
Store user data and preferences in a structured format:
```ts
const UserPreferences = gensx.Component(
"UserPreferences",
async ({ userId, action, data }: UserPreferencesInput) => {
const db = await useDatabase("user_data");
if (action === "get") {
const result = await db.execute(
"SELECT * FROM preferences WHERE user_id = ?",
[userId],
);
return result.rows.length > 0 ? result.rows[0] : null;
} else if (action === "set") {
await db.execute(
"INSERT OR REPLACE INTO preferences (user_id, settings) VALUES (?, ?)",
[userId, JSON.stringify(data)],
);
return { success: true };
}
},
);
```
### Collaborative workflows
Build workflows that share structured data between steps:
```ts
const DataCollector = gensx.Component(
"DataCollector",
async ({ source }: DataCollectorInput) => {
const db = await useDatabase("workflow_data");
// Collect data from source and store in database
// ...
return { success: true };
},
);
const DataAnalyzer = gensx.Component(
"DataAnalyzer",
async ({ query }: DataAnalyzerInput) => {
const db = await useDatabase("workflow_data");
// Analyze data from database
// ...
return { results: "..." };
},
);
```
## Reference
See the [database component reference](docs/component-reference/storage-components/database-reference) for full details.
# Search
GenSX's Cloud search service provides full-text and vector search for AI applications. It enables you to store, query, and manage vector embeddings for semantic search, retrieval-augmented generation (RAG), and other AI use cases.
Search is powered by [turbopuffer](https://turbopuffer.com/), and comes fully featured and ready for AI workloads:
- **Combined vector and keyword search**: Perform hybrid searches using both semantic similarity (vectors) and keyword matching (BM25).
- **Millisecond query latency**: Get results quickly, even with large vector collections.
- **Flexible filtering**: Apply metadata filters to narrow down search results based on categories, timestamps, or any custom attributes.
## Basic usage
To use search in your GenSX application:
1. Install the storage package:
```bash
npm install @gensx/storage
```
2. **Next.js Configuration** (if using Next.js): Add the following webpack configuration to your `next.config.ts` or `next.config.js` file:
```typescript
/** @type {import('next').NextConfig} */
const nextConfig = {
// ... other config options
webpack: (config: any) => {
// Ignore @libsql/client package for client-side builds
config.resolve.alias = {
...config.resolve.alias,
"@libsql/client": false,
};
return config;
},
// ... other config options
};
module.exports = nextConfig;
```
This configuration prevents bundling issues while allowing the storage hooks to work properly in server components and API routes. See the [client-side-tools example](https://github.com/gensx-inc/gensx/tree/main/examples/client-side-tools) for a complete implementation.
3. Access search namespaces within your components using the `useSearch` hook:
```ts
import { useSearch } from "@gensx/storage";
const search = await useSearch("documents");
```
### Storing vector embeddings
The first step in using search is to convert your data into vector embeddings and store them:
```ts
import * as gensx from "@gensx/core";
import { useSearch } from "@gensx/storage";
import { embed, embedMany, generateText } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
const IndexDocuments = gensx.Component(
"IndexDocuments",
async ({ documents }) => {
// Get access to a search namespace
const search = await useSearch("documents");
// Generate embeddings for the documents
const { embeddings } = await embedMany({
model: openai.embedding("text-embedding-3-small"),
values: documents.map((doc) => doc.text),
});
// Store the embeddings with original text as metadata
await search.write({
upsertRows: documents.map((doc, index) => ({
id: doc.id,
vector: embeddings[index],
text: doc.text,
category: doc.category,
createdAt: new Date().toISOString(),
})),
distanceMetric: "cosine_distance",
});
return { success: true, count: documents.length };
},
);
```
### Searching for similar documents
Once you've stored embeddings, you can search for semantically similar content:
```ts
const SearchDocuments = gensx.Component(
"SearchDocuments",
async ({ query, category }: SearchDocumentsInput) => {
// Get access to the search namespace
const search = await useSearch("documents");
// Generate an embedding for the query
const { embedding } = await embed({
model: openai.embedding("text-embedding-3-small"),
value: query,
});
    // Build query options, adding a filter only when a category is specified
    const results = await search.query({
      rankBy: ["vector", "ANN", embedding] as const,
      includeAttributes: true,
      topK: 5, // Return top 5 results
      filters: category ? ["category", "Eq", category] : undefined,
    });
// Process and return results from the rows array
return results.rows?.map((result) => ({
id: result.id,
text: result.text,
distance: result.$dist,
})) || [];
},
);
```
## Building a RAG application
Retrieval-Augmented Generation (RAG) is one of the most common use cases for vector search. Here's how to build a complete RAG workflow:
### Step 1: Index your documents
First, create a component to prepare and index your documents:
```ts
const PrepareDocuments = gensx.Component("PrepareDocuments", async () => {
// Sample baseball player data
const documents = [
{
id: "1",
text: "Marcus Bennett is a first baseman for the Portland Pioneers. He has 32 home runs this season.",
category: "player",
},
{
id: "2",
text: "Ethan Carter plays shortstop for the San Antonio Stallions with 24 home runs.",
category: "player",
},
{
id: "3",
text: "The Portland Pioneers are leading the Western Division with a 92-70 record.",
category: "team",
},
];
// Index the documents
return await IndexDocuments({ documents });
});
```
### Step 2: Create a query tool
Next, create a tool that can access your search index:
```ts
import { tool } from "ai";
import { z } from "zod";
// Define a tool to query the search index
const queryTool = tool({
description: "Query the baseball knowledge base",
parameters: z.object({
query: z.string().describe("The text query to search for"),
}),
execute: async ({ query }) => {
// Access search index
const search = await useSearch("baseball");
// Generate query embedding
const { embedding } = await embed({
model: openai.embedding("text-embedding-3-small"),
value: query,
});
// Search for relevant documents
const results = await search.query({
rankBy: ["vector", "ANN", embedding],
includeAttributes: true,
topK: 10,
});
// Return formatted results
return JSON.stringify(
results.rows?.map((r) => r.text) || [],
null,
2,
);
},
});
```
### Step 3: Create the RAG agent
Now, create an agent that uses the query tool to access relevant information:
```ts
const RagAgent = gensx.Component("RagAgent", ({ question }) => {
return generateText({
messages: [
{
role: "system",
content: `You are a baseball expert assistant. Use the query tool to
look up relevant information before answering questions.`,
},
{ role: "user", content: question },
],
model: openai("gpt-4.1-mini"),
tools: { query: queryTool },
maxSteps: 5,
});
});
```
### Step 4: Combine Everything in a Workflow
Finally, put it all together in a workflow:
```ts
const RagWorkflow = gensx.Component(
"RagWorkflow",
async ({ question, shouldReindex }: RagWorkflowInput) => {
// Optionally reindex documents
if (shouldReindex) {
await PrepareDocuments();
}
// Use the RAG agent to answer the question
return await RagAgent({ question });
},
);
```
## Practical examples
### Agent memory system
One powerful application of vector search is creating a long-term memory system for AI agents:
```ts
import * as gensx from "@gensx/core";
import { useSearch } from "@gensx/storage";
import { embed, embedMany, generateText } from "@gensx/vercel-ai";
import { openai } from "@ai-sdk/openai";
// Component to store a memory
const StoreMemory = gensx.Component(
"StoreMemory",
async ({ userId, memory, importance = "medium" }) => {
const search = await useSearch(`memories-${userId}`);
// Generate embedding for this memory
const { embedding } = await embed({
model: openai.embedding("text-embedding-3-small"),
value: memory,
});
// Store the memory with metadata
await search.write({
upsertRows: [
{
id: `memory-${Date.now()}`,
vector: embedding,
content: memory,
timestamp: new Date().toISOString(),
importance: importance, // "high", "medium", "low"
source: "user-interaction",
},
],
distanceMetric: "cosine_distance",
});
return { success: true };
},
);
// Component to recall relevant memories
const RecallMemories = gensx.Component(
"RecallMemories",
async ({ userId, context, maxResults = 5 }) => {
const search = await useSearch(`memories-${userId}`);
// Generate embedding for the context
const { embedding } = await embed({
model: openai.embedding("text-embedding-3-small"),
value: context,
});
// Query for relevant memories, prioritizing important ones
const results = await search.query({
rankBy: ["vector", "ANN", embedding],
topK: maxResults,
includeAttributes: true,
});
// Format memories for the agent from the rows array
return results.rows?.map((result) => ({
content: result.content,
timestamp: result.timestamp,
distance: result.$dist?.toFixed(3),
})) || [];
},
);
// Component that uses memories in a conversation
const MemoryAwareAgent = gensx.Component(
"MemoryAwareAgent",
async ({ userId, userMessage }) => {
// Recall relevant memories based on the current conversation
const memories = await RecallMemories({
userId,
context: userMessage,
maxResults: 3,
});
// Use memories to inform the response
const response = await generateText({
messages: [
{
role: "system",
content: `You are an assistant with memory. Consider these relevant memories about this user:
${memories.map((m) => `[${m.timestamp}] ${m.content} (distance: ${m.distance})`).join("\n")}`,
},
{ role: "user", content: userMessage },
],
model: openai("gpt-4.1-mini"),
});
// Store this interaction as a new memory
await StoreMemory({
userId,
memory: `User asked: "${userMessage}". I replied: "${response.text}"`,
importance: "medium",
});
return response.text;
},
);
```
### Knowledge base search
Another powerful application is a knowledge base with faceted search capabilities:
```ts
const SearchKnowledgeBase = gensx.Component(
"SearchKnowledgeBase",
async ({ query, filters = {} }) => {
const search = await useSearch("knowledge-base");
// Generate embedding for the query
const { embedding } = await embed({
model: openai.embedding("text-embedding-3-small"),
value: query,
});
    // Build filter conditions from user-provided filters
    const conditions: unknown[] = [];
    if (filters.category) {
      conditions.push(["category", "Eq", filters.category]);
    }
    if (filters.dateRange) {
      conditions.push(["publishedDate", "Gte", filters.dateRange.start]);
      conditions.push(["publishedDate", "Lte", filters.dateRange.end]);
    }
    if (filters.tags && filters.tags.length > 0) {
      conditions.push(["tags", "ContainsAny", filters.tags]);
    }
    // Perform hybrid search (vector + keyword) with filters
    const results = await search.query({
      rankBy: ["text", "BM25", query], // Text-based ranking for hybrid search
      includeAttributes: true,
      topK: 10,
      filters: conditions.length > 0 ? ["And", conditions] : undefined,
});
// Return formatted results from the rows array
return results.rows?.map((result) => ({
title: result.title,
snippet: result.snippet,
url: result.url,
category: result.category,
tags: result.tags,
score: result.$dist,
})) || [];
},
);
```
## Advanced usage
### Filtering by metadata
Use filters to narrow down search results:
```ts
const search = await useSearch("articles");
// Search with filters
const results = await search.query({
rankBy: ["vector", "ANN", queryEmbedding],
topK: 10,
filters: [
"And",
[
["category", "Eq", "sports"],
["publishDate", "Gte", "2023-01-01"],
["publishDate", "Lt", "2024-01-01"],
["author", "In", ["Alice", "Bob", "Carol"]],
],
],
});
```
### Updating schema
Manage your vector collection's schema:
```ts
const search = await useSearch("products");
// Get current schema
const currentSchema = await search.getSchema();
// Update schema to add new fields
await search.updateSchema({
schema: {
...currentSchema,
newField: { type: "int" },
anotherField: { type: "[]string" },
},
});
```
## Reference
See the [search component reference](docs/component-reference/storage-components/search-reference) for full details.
# Blob storage
Blob storage provides zero-configuration persistent storage for your GenSX applications. It enables you to store JSON, text, or binary data for your agents and workflows without worrying about managing infrastructure.
## Basic usage
To use blob storage in your GenSX application:
1. Install the storage package:
```bash
npm install @gensx/storage
```
2. **Next.js Configuration** (if using Next.js): Add the following webpack configuration to your `next.config.ts` or `next.config.js` file:
```typescript
/** @type {import('next').NextConfig} */
const nextConfig = {
  // ... other config options
  webpack: (config: any) => {
    // Ignore @libsql/client package for client-side builds
    config.resolve.alias = {
      ...config.resolve.alias,
      "@libsql/client": false,
    };
    return config;
  },
  // ... other config options
};

module.exports = nextConfig;
```
This configuration prevents bundling issues while allowing the storage hooks to work properly in server components and API routes. See the [client-side-tools example](https://github.com/gensx-inc/gensx/tree/main/examples/client-side-tools) for a complete implementation.
3. Access blobs within your components using the `useBlob` hook:
```ts
import { useBlob } from "@gensx/storage";
const blob = useBlob("your-key.json");
```
### Reading blobs
The `useBlob` hook provides simple methods to read different types of data:
```ts
import { useBlob } from "@gensx/storage";

// Read JSON data
const profileBlob = useBlob("users/profile.json");
const profile = await profileBlob.getJSON();
console.log(profile?.name);

// Read text data
const notesBlob = useBlob("notes/meeting.txt");
const notes = await notesBlob.getString();

// Read binary data
const imageBlob = useBlob("images/photo.jpg");
const image = await imageBlob.getRaw();
console.log(image?.contentType); // "image/jpeg"
```
### Writing blobs
You can write data in various formats:
```ts
import { useBlob } from "@gensx/storage";

// Write JSON data
const profileBlob = useBlob("users/profile.json");
await profileBlob.putJSON({ name: "Alice", preferences: { theme: "dark" } });

// Write text data
const notesBlob = useBlob("notes/meeting.txt");
await notesBlob.putString(
  "Meeting agenda:\n1. Project updates\n2. Action items",
);

// Write binary data
const imageBlob = useBlob("images/photo.jpg");
await imageBlob.putRaw(imageBuffer, {
  contentType: "image/jpeg",
  metadata: { originalName: "vacation.jpg" },
});
```
## Practical examples
### Persistent chat threads
One of the most common use cases for blob storage is maintaining conversation history across multiple interactions:
```ts
import * as gensx from "@gensx/core";
import { openai } from "@ai-sdk/openai";
import { useBlob } from "@gensx/storage";
import { generateText } from "@gensx/vercel-ai";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const ChatWithMemory = gensx.Component(
  "ChatWithMemory",
  async ({ userInput, threadId }: ChatInput) => {
    // Get a reference to the thread's storage
    const blob = useBlob(`chats/${threadId}.json`);

    // Load existing messages or start with a system prompt
    const messages: ChatMessage[] = (await blob.getJSON<ChatMessage[]>()) ?? [
      {
        role: "system",
        content: "You are a helpful assistant.",
      },
    ];

    // Add the new user message
    messages.push({ role: "user", content: userInput });

    // Generate a response using the full conversation history
    const result = await generateText({
      messages,
      model: openai("gpt-4.1-mini"),
    });

    // Save the assistant's response to the history
    messages.push({ role: "assistant", content: result.text });
    await blob.putJSON(messages);

    return result.text;
  },
);
```
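A stored thread grows without bound, and a long enough history will eventually exceed the model's context window. A common mitigation is to trim the history before sending it to the model while always preserving the system prompt. A sketch of that idea — the `trimHistory` helper is illustrative, not part of GenSX:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Keep every system message plus only the last `maxTurns`
// user/assistant messages, preserving their relative order.
function trimHistory(messages: ChatMessage[], maxTurns: number): ChatMessage[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  return [...system, ...rest.slice(-maxTurns)];
}
```

You could call this on `messages` right before `generateText`, while still persisting the full untrimmed history to the blob.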
### Memory for agents
For more complex agents, you can store structured memory:
```ts
interface AgentMemory {
  facts: string[];
  tasks: { description: string; completed: boolean }[];
  lastUpdated: string;
}

const AgentWithMemory = gensx.Component(
  "AgentWithMemory",
  async ({ input, agentId }: AgentInput) => {
    // Load agent memory
    const memoryBlob = useBlob(`agents/${agentId}/memory.json`);
    const memory: AgentMemory = (await memoryBlob.getJSON<AgentMemory>()) ?? {
      facts: [],
      tasks: [],
      lastUpdated: new Date().toISOString(),
    };

    // Process input using memory
    // ...

    // Update and save memory
    memory.facts.push("New fact learned from input");
    memory.tasks.push({ description: "Follow up on X", completed: false });
    memory.lastUpdated = new Date().toISOString();
    await memoryBlob.putJSON(memory);

    return "Response that uses memory context";
  },
);
```
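Because the memory shape is plain JSON, the update logic can live in small pure functions that are easy to unit test independently of blob storage. For example, a hypothetical `addFact` helper (reusing the `AgentMemory` shape above) that de-duplicates facts and stamps `lastUpdated`:

```typescript
interface AgentMemory {
  facts: string[];
  tasks: { description: string; completed: boolean }[];
  lastUpdated: string;
}

// Returns a new memory object with the fact appended, or the original
// object unchanged if the fact was already recorded.
function addFact(memory: AgentMemory, fact: string, now = new Date()): AgentMemory {
  if (memory.facts.includes(fact)) return memory;
  return {
    ...memory,
    facts: [...memory.facts, fact],
    lastUpdated: now.toISOString(),
  };
}
```

Inside the component, you would load memory with `getJSON`, apply helpers like this, and save the result with `putJSON`.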
### Saving files
You can use blob storage to save and retrieve binary files like images:
```ts
const StoreImage = gensx.Component(
  "StoreImage",
  async ({ imageBuffer, filename }: StoreImageInput) => {
    const imageBlob = useBlob(`images/${filename}`);

    // Save image with metadata
    await imageBlob.putRaw(imageBuffer, {
      contentType: "image/png",
      metadata: {
        uploadedAt: new Date().toISOString(),
        pixelSize: "800x600",
      },
    });

    return { success: true, path: `images/${filename}` };
  },
);

const GetImage = gensx.Component(
  "GetImage",
  async ({ filename }: GetImageInput) => {
    const imageBlob = useBlob(`images/${filename}`);

    // Check if image exists
    const exists = await imageBlob.exists();
    if (!exists) {
      return { found: false };
    }

    // Get the image with metadata
    const image = await imageBlob.getRaw();
    return {
      found: true,
      data: image?.content,
      contentType: image?.contentType,
      metadata: image?.metadata,
    };
  },
);
```
### Optimistic concurrency control
For scenarios where multiple processes might update the same data, you can use ETags to prevent conflicts:
```ts
const UpdateCounter = gensx.Component(
  "UpdateCounter",
  async ({ counterName }: UpdateCounterInput) => {
    const blob = useBlob(`counters/${counterName}.json`);

    // Get current value and metadata
    const counter = (await blob.getJSON<{ value: number }>()) ?? { value: 0 };
    const metadata = await blob.getMetadata();

    // Update counter
    counter.value += 1;

    try {
      // Save with ETag to prevent conflicts
      await blob.putJSON(counter, {
        etag: metadata?.etag,
      });
      return { success: true, value: counter.value };
    } catch (error) {
      if (error instanceof Error && error.name === "BlobConflictError") {
        return {
          success: false,
          message: "Counter was updated by another process",
        };
      }
      throw error;
    }
  },
);
```
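On a conflict, callers typically re-read the blob and retry rather than give up, since another writer has simply advanced the ETag. The retry loop below sketches that pattern against a minimal in-memory stand-in for the blob API so the logic is self-contained and testable; the `InMemoryBlob` class and `incrementWithRetry` helper are illustrative, and in a real component you would use `useBlob` as shown above:

```typescript
// Minimal interface matching the subset of the blob API used here.
interface BlobLike<T> {
  getJSON(): Promise<T | undefined>;
  getMetadata(): Promise<{ etag: string } | undefined>;
  putJSON(value: T, opts?: { etag?: string }): Promise<void>;
}

// In-memory stand-in: rejects writes whose ETag is stale, mimicking
// the BlobConflictError behavior described in the example above.
class InMemoryBlob<T> implements BlobLike<T> {
  private value?: T;
  private etag = "0";
  async getJSON() {
    return this.value;
  }
  async getMetadata() {
    return this.value === undefined ? undefined : { etag: this.etag };
  }
  async putJSON(value: T, opts?: { etag?: string }) {
    if (opts?.etag !== undefined && opts.etag !== this.etag) {
      const err = new Error("etag mismatch");
      err.name = "BlobConflictError";
      throw err;
    }
    this.value = value;
    this.etag = String(Number(this.etag) + 1);
  }
}

// Retry-on-conflict: re-read, re-apply the update, and try the
// conditional write again, up to maxAttempts times.
async function incrementWithRetry(
  blob: BlobLike<{ value: number }>,
  maxAttempts = 3,
): Promise<number> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const counter = (await blob.getJSON()) ?? { value: 0 };
    const metadata = await blob.getMetadata();
    counter.value += 1;
    try {
      await blob.putJSON(counter, { etag: metadata?.etag });
      return counter.value;
    } catch (error) {
      if (error instanceof Error && error.name === "BlobConflictError" && attempt < maxAttempts) {
        continue; // another writer won; re-read and try again
      }
      throw error;
    }
  }
  throw new Error("exceeded retry attempts");
}
```

The same loop works unchanged against a real blob, because only the `BlobLike` subset of the API is used.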
## Development vs. production
GenSX blob storage works identically in both local development and cloud environments:
- **Local development**: Blobs are stored in the `.gensx/blobs` directory by default
- **Cloud deployment**: Blobs are automatically stored in cloud storage
If you don't specify a storage `kind`, the framework automatically infers it for you based on the runtime environment.
No code changes are needed when moving from development to production.
## Reference
See the [blob storage component reference](docs/component-reference/storage-components/blob-reference) for full details.
# gensx project
The `gensx project` command displays detailed information about a GenSX project, including its environments and workflows.
## Usage
```bash
gensx project [options]
```
## Options
| Option | Description |
| ---------------------- | -------------------------------- |
| `-p, --project <name>` | Project name to show details for |
| `-h, --help` | Display help for the command |
## Examples
```bash
# Show current project details
gensx project
# Show specific project details
gensx project --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to show project details (`gensx login`)
- If no project is specified, the command uses the project from `gensx.yaml` in the current directory
- The command shows all of the project's environments, along with the workflows in the currently selected environment
- Use this command to verify your current project and environment setup
# gensx project ls
The `gensx project ls` command lists all projects in your GenSX organization.
## Usage
```bash
gensx project ls [options]
```
## Options
| Option | Description |
| ------------ | ----------------------------- |
| `-h, --help` | Display help for the command. |
## Examples
```bash
# List all projects in your organization
gensx project ls
```
## Notes
- You must be logged in to GenSX Cloud to list projects (`gensx login`)
- The command shows all projects you have access to in your organization
# gensx project create
The `gensx project create` command creates a new project in your GenSX organization. Projects are containers for environments, workflows, and deployments.
## Usage
```bash
gensx project create [name] [options]
```
## Arguments
| Argument | Description |
| -------- | --------------------------------------------------------- |
| `[name]` | Name of the project (optional if specified in gensx.yaml) |
## Options
| Option | Description |
| -------------------------- | --------------------------------------- |
| `-d, --description <desc>` | Optional project description            |
| `--env <name>`             | Initial environment name                |
| `-y, --yes` | Automatically answer yes to all prompts |
| `-h, --help` | Display help for the command |
## Examples
```bash
# Create from gensx.yaml configuration
gensx project create
# Create a project with prompts
gensx project create my-project
# Create a project with specific environment
gensx project create my-project --env staging
# Create a project with description
gensx project create my-project --description "My data processing pipeline"
# Create a project automatically (skip prompts)
gensx project create my-project --yes
```
## Notes
- You must be logged in to GenSX Cloud to create projects (`gensx login`)
- Project names must be unique within your organization
- If no project is specified, the command uses the project from `gensx.yaml` in the current directory
- Each project is created with an initial environment that becomes active
# gensx examples
The `gensx examples` command lists all available GenSX example projects that you can clone and use as starting points for your own projects.
## Usage
```bash
gensx examples
```
## Examples
```bash
# List all available examples
gensx examples
```
## Notes
- Examples include chat interfaces, research tools, writing platforms, and AI utilities
- Use the example name with `gensx examples clone <example-name>` to clone a project
- All examples are built with modern frameworks like Next.js
# gensx examples clone
The `gensx examples clone` command clones an existing GenSX example project to your local machine, providing a ready-to-use starting point for building AI applications.
## Usage
```bash
gensx examples clone <example-name> [options]
```
## Arguments
| Argument | Description |
| ---------------- | ----------------------------------------------------------------------------- |
| `<example-name>` | Name of the example to clone. Use `gensx examples` to see available examples. |
## Options
| Option | Description |
| ---------------------- | ------------------------------------------------------------------ |
| `-p, --project <name>` | Project name to clone to. If not specified, uses the example name. |
| `-y, --yes` | Automatically answer yes to all prompts. |
| `-h, --help` | Display help for the command. |
## Examples
```bash
# Clone the chat-ux example
gensx examples clone chat-ux
# Clone to a custom directory name
gensx examples clone chat-ux --project my-chat-app
```
## Notes
- Use `gensx examples` to see all available examples
- Dependencies are automatically installed after cloning
- Follow the README.md in the cloned project for setup instructions
# gensx env unselect
The `gensx env unselect` command deselects the currently selected environment for your project. Once unselected, subsequent commands will require you to specify an environment explicitly.
## Usage
```bash
gensx env unselect [options]
```
## Options
| Option | Description |
| ----------------------- | ---------------------------------------------------------------------------- |
| `-p, --project <name>` | Project name to unselect the environment in.                                 |
| `-h, --help` | Display help for the command. |
## Examples
```bash
# Unselect the current environment
gensx env unselect
# Unselect environment in a specific project
gensx env unselect --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to unselect environments (`gensx login`)
- Unselecting an environment does not delete it; it only removes the selection
- You can check if an environment is selected using `gensx env`
- To select a new environment, use `gensx env select`
- After unselecting, you'll need to specify the environment for each command that requires one
# gensx env
The `gensx env` command displays the name of the currently selected environment.
## Usage
```bash
gensx env [options]
```
## Options
| Option | Description |
| ----------------------- | ---------------------------------------------------------------------------- |
| `-p, --project <name>` | Project name to show environment details for.                                |
| `-h, --help` | Display help for the command. |
## Examples
```bash
# Show the current environment
gensx env
# Show the current environment for a specific project
gensx env --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to show environment details (`gensx login`)
- You can use this command to verify your current environment before running important operations
- If no environment is selected, the command will indicate this
# gensx env select
The `gensx env select` command sets a specific environment as the active environment for your current project. This environment will be used by default for subsequent commands like `deploy` and `run`.
## Usage
```bash
gensx env select <name> [options]
```
## Arguments
| Argument | Description |
| -------- | ------------------------------------------------------------------------------------------ |
| `<name>` | Name of the environment to select.                                                          |
## Options
| Option | Description |
| ----------------------- | ---------------------------------------------------------------------------- |
| `-p, --project <name>` | Project name to select the environment in.                                   |
| `-h, --help` | Display help for the command. |
## Description
This command:
1. Sets the specified environment as active for your current project
2. Updates your local configuration to remember this selection
3. Makes this environment the default target for subsequent commands
After selecting an environment:
- `gensx deploy` will deploy to this environment by default
- `gensx run` will run workflows in this environment by default
- You can still override the environment for specific commands using the `--env` option
## Examples
```bash
# Select the development environment
gensx env select dev
# Select a production environment in a specific project
gensx env select production --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to select environments (`gensx login`)
- The selected environment persists across CLI sessions
- You can check the currently selected environment using `gensx env`
- To unselect an environment, use `gensx env unselect`
# gensx env ls
The `gensx env ls` command lists all environments in your GenSX project.
## Usage
```bash
gensx env ls [options]
```
## Options
| Option | Description |
| ----------------------- | ---------------------------------------------------------------------------- |
| `-p, --project <name>` | Project name to list environments for.                                       |
| `-h, --help` | Display help for the command. |
## Examples
```bash
# List all environments in the current project
gensx env ls
# List environments in a specific project
gensx env ls --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to list environments (`gensx login`)
# gensx env create
The `gensx env create` command creates a new environment in your GenSX project. Environments allow you to manage different deployment configurations (like development, staging, and production) for your workflows.
## Usage
```bash
gensx env create <name> [options]
```
## Arguments
| Argument | Description |
| -------- | ------------------------------------------------------------------------- |
| `<name>` | Name of the environment to create (e.g., "dev", "staging", "production"). |
## Options
| Option | Description |
| ---------------------- | ------------------------------------------ |
| `-p, --project <name>` | Project name to create the environment in. |
| `-h, --help` | Display help for the command. |
## Examples
```bash
# Create a development environment
gensx env create dev
# Create a production environment in a specific project
gensx env create production --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to create environments (`gensx login`)
- Each project can have multiple environments
- Environment names should be descriptive and follow a consistent naming convention