# Why JSX?
GenSX uses JSX for workflow composition because it's a natural fit for the programming model. Most people think of React and frontend when JSX is mentioned, so choosing it for a backend workflow orchestration framework may seem surprising.
This page explains why JSX is a perfect fit for anyone building LLM applications, whether you're building simple linear workflows or complex cyclical agents. At the end of the day, building agents and workflows is all about constructing a dataflow graph. And agents in particular need to dynamically branch and execute conditionally at runtime. This is exactly what GenSX excels at.
Read the full blog post on [why a React-like model is perfect for building agents and workflows](/blog/why-react-is-the-best-backend-workflow-engine).
## Why not graphs?
Graph APIs are the standard for LLM frameworks. They provide APIs to define nodes, edges between those nodes, and a global state object that is passed around the workflow.
A workflow for writing a blog post might look like this:
```tsx
const graph = new Graph()
.addNode("hnCollector", collectHNStories)
.addNode("analyzeHNPosts", analyzePosts)
.addNode("trendAnalyzer", analyzeTrends)
.addNode("pgEditor", editAsPG)
.addNode("pgTweetWriter", writeTweet);
graph
.addEdge(START, "hnCollector")
.addEdge("hnCollector", "analyzeHNPosts")
.addEdge("analyzeHNPosts", "trendAnalyzer")
.addEdge("trendAnalyzer", "pgEditor")
.addEdge("pgEditor", "pgTweetWriter")
.addEdge("pgTweetWriter", END);
```
Can you easily read this code and visualize the workflow?
On the other hand, the same workflow with GenSX and JSX reads top to bottom like a normal programming language:
```tsx
// Component and prop names are illustrative
<HNCollector limit={30}>
  {(stories) => (
    <HNAnalyzer stories={stories}>
      {({ analyses }) => (
        <TrendReporter analyses={analyses}>
          {(report) => (
            <PGEditor content={report}>
              {(editedReport) => (
                <PGTweetWriter content={editedReport} />
              )}
            </PGEditor>
          )}
        </TrendReporter>
      )}
    </HNAnalyzer>
  )}
</HNCollector>
```
As you'll see in the next section, trees are just another kind of graph and you can express all of the same things.
## Graphs, DAGs, and trees
Most workflow frameworks use explicit graph construction with nodes and edges. This makes sense - workflows are fundamentally about connecting steps together, and graphs are a natural way to represent these connections.
Trees are just a special kind of graph - one where each node has a single parent. At first glance, this might seem more restrictive than a general graph. But JSX gives us something powerful: the ability to express _programmatic_ trees.
Consider a cycle in a workflow:
```tsx
const AgentWorkflow = gensx.Component<{}, AgentWorkflowOutput>(
  "AgentWorkflow",
  () => (
    // <AgentStep/> is an illustrative sub-component that does one unit of work
    <AgentStep>
      {(result) =>
        result.needsMoreWork ? (
          // Recursion creates AgentWorkflow -> AgentStep -> AgentWorkflow -> etc.
          <AgentWorkflow />
        ) : (
          result
        )
      }
    </AgentStep>
  ),
);
```
This tree structure visually represents the workflow, and programmatic JSX and TypeScript allow you to express cycles through normal programming constructs. This gives you the best of both worlds:
- Visual clarity of a tree structure
- Full expressiveness of a graph API
- Natural control flow through standard TypeScript
- No explicit edge definitions needed
JSX isn't limited to static trees. It gives you a way to express dynamic, programmatic trees that can represent any possible workflow.
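For example, a component can choose its subtree at runtime with an ordinary ternary. This is a minimal sketch; the `Summarize`, `Answer`, and `Route` components are illustrative:
```tsx
import * as gensx from "@gensx/core";

const Summarize = gensx.Component<{ text: string }, string>(
  "Summarize",
  ({ text }) => `Summary: ${text.slice(0, 100)}...`,
);
const Answer = gensx.Component<{ question: string }, string>(
  "Answer",
  ({ question }) => `Answer to: ${question}`,
);

// The tree is chosen at runtime with an ordinary ternary
const Route = gensx.Component<{ query: string }, string>(
  "Route",
  ({ query }) =>
    query.length > 280 ? <Summarize text={query} /> : <Answer question={query} />,
);
```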
## Pure functional components
GenSX uses JSX to encourage a functional component model, enabling you to compose your workflows from discrete, reusable steps.
Functional and reusable components can be published and shared on `npm`, and it's easy to test and evaluate them in isolation.
Writing robust evals is the difference between a prototype and a high-quality AI app. You usually start with end-to-end evals, but as workflows grow they become expensive and slow to run, and it becomes difficult to isolate and understand the impact of individual changes.
By breaking down your workflow into discrete components, you can write more focused evals that are easier to run, faster to complete, and test the impact of specific changes in your workflow.
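For example, a focused eval can execute a single component with a fixed input and score the result. This is a minimal sketch; `Summarizer` is a hypothetical component from your workflow and `scoreSummary` stands in for whatever scoring you use (assertions, LLM-as-judge, etc.):
```tsx
import * as gensx from "@gensx/core";
// Hypothetical component and scorer used for illustration
import { Summarizer } from "./components";
import { scoreSummary } from "./evals";

const article = "GenSX is a framework for building LLM workflows...";
const summary = await gensx.execute(<Summarizer article={article} />);
const score = await scoreSummary(article, summary);
console.assert(score > 0.8, "Summary eval score regressed");
```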
## Nesting via child functions
Standard JSX allows you to nest components to form a tree:
```tsx
// Illustrative component names
<ParentComponent>
  <ChildComponent />
</ParentComponent>
```
# Vercel AI SDK
The `@gensx/vercel-ai-sdk` package provides [Vercel AI SDK](https://sdk.vercel.ai/docs) compatible components for GenSX.
## Supported components
| Component | Description |
| :-------------------------------------------- | :------------------------------------------------------------- |
| [`StreamText`](#streamtext) | Stream text responses from language models |
| [`StreamObject`](#streamobject) | Stream structured JSON objects from language models |
| [`GenerateText`](#generatetext) | Generate complete text responses from language models |
| [`GenerateObject`](#generateobject) | Generate complete structured JSON objects from language models |
| [`Embed`](#embed) | Generate embeddings for a single text input |
| [`EmbedMany`](#embedmany) | Generate embeddings for multiple text inputs |
| [`GenerateImage`](#generateimage) | Generate images from text prompts |
## Component Reference
#### `<StreamText/>`
The [StreamText](https://sdk.vercel.ai/docs/ai-sdk-core/generating-text#streamtext) component streams text responses from language models, making it ideal for chat interfaces and other applications where you want to show responses as they're generated.
```tsx
import { StreamText } from "@gensx/vercel-ai-sdk";
import { openai } from "@ai-sdk/openai";
const languageModel = openai("gpt-4o");
```
```tsx
// Streams a text response when executed (prompt is illustrative)
<StreamText prompt="Write a short story about a robot" model={languageModel} />
```
##### Props
The `StreamText` component accepts all parameters from the Vercel AI SDK's `streamText` function:
- `prompt` (required): The text prompt to send to the model
- `model` (required): The language model to use (from Vercel AI SDK)
- Plus all other parameters supported by the Vercel AI SDK
##### Return Type
Returns a streaming response that can be consumed token by token.
#### `<StreamObject/>`
The `StreamObject` component streams structured JSON objects from language models, allowing you to get structured data with type safety.
```tsx
import * as gensx from "@gensx/core";
import { StreamObject } from "@gensx/vercel-ai-sdk";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
const languageModel = openai("gpt-4o");
// Define a schema for the response
const recipeSchema = z.object({
recipe: z.object({
name: z.string(),
ingredients: z.array(z.string()),
steps: z.array(z.string()),
}),
});
```
```tsx
// Streams a structured object when executed (prompt is illustrative)
<StreamObject
  prompt="Generate a recipe for chocolate chip cookies"
  model={languageModel}
  schema={recipeSchema}
/>
```
##### Props
The [StreamObject](https://sdk.vercel.ai/docs/ai-sdk-core/generating-structured-data#stream-object) component accepts all parameters from the Vercel AI SDK's `streamObject` function:
- `prompt` (required): The text prompt to send to the model
- `model` (required): The language model to use (from Vercel AI SDK)
- `schema`: A Zod schema defining the structure of the response
- `output`: The output format ("object", "array", or "no-schema")
- Plus all other parameters supported by the Vercel AI SDK
##### Return Type
Returns a structured object matching the provided schema.
#### `<GenerateText/>`
The [GenerateText](https://sdk.vercel.ai/docs/ai-sdk-core/generating-text#generatetext) component generates complete text responses from language models, waiting for the entire response before returning.
```tsx
import * as gensx from "@gensx/core";
import { GenerateText } from "@gensx/vercel-ai-sdk";
import { openai } from "@ai-sdk/openai";
const languageModel = openai("gpt-4o");
```
```tsx
// Generates a complete text response when executed (prompt is illustrative)
<GenerateText
  prompt="Explain how GenSX workflows are composed"
  model={languageModel}
/>
```
##### Props
The `GenerateText` component accepts all parameters from the Vercel AI SDK's `generateText` function:
- `prompt` (required): The text prompt to send to the model
- `model` (required): The language model to use (from Vercel AI SDK)
- Plus any other parameters supported by the Vercel AI SDK
##### Return Type
Returns a complete text string containing the model's response.
#### `<GenerateObject/>`
The [GenerateObject](https://sdk.vercel.ai/docs/ai-sdk-core/generating-structured-data#generate-object) component generates complete structured JSON objects from language models, with type safety through Zod schemas.
```tsx
import * as gensx from "@gensx/core";
import { GenerateObject } from "@gensx/vercel-ai-sdk";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
const languageModel = openai("gpt-4o");
// Define a schema for the response
const userSchema = z.object({
user: z.object({
name: z.string(),
age: z.number(),
interests: z.array(z.string()),
contact: z.object({
email: z.string().email(),
phone: z.string().optional(),
}),
}),
});
```
```tsx
// Generates a structured object when executed (prompt is illustrative)
<GenerateObject
  prompt="Generate a profile for a fictional user"
  model={languageModel}
  schema={userSchema}
/>
```
##### Props
The `GenerateObject` component accepts all parameters from the Vercel AI SDK's `generateObject` function:
- `prompt` (required): The text prompt to send to the model
- `model` (required): The language model to use (from Vercel AI SDK)
- `schema`: A Zod schema defining the structure of the response
- `output`: The output format ("object", "array", or "no-schema")
- Plus any other optional parameters supported by the Vercel AI SDK
##### Return Type
Returns a structured object matching the provided schema.
#### `<Embed/>`
The [Embed](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings) component generates embeddings for a single text input, which can be used for semantic search, clustering, and other NLP tasks.
```tsx
import * as gensx from "@gensx/core";
import { Embed } from "@gensx/vercel-ai-sdk";
import { openai } from "@ai-sdk/openai";
const embeddingModel = openai.embedding("text-embedding-3-small");
```
```tsx
// Generates an embedding when executed (value is illustrative)
<Embed value="What is GenSX?" model={embeddingModel} />
```
##### Props
The `Embed` component accepts all parameters from the Vercel AI SDK's `embed` function:
- `value` (required): The text to generate an embedding for
- `model` (required): The embedding model to use (from Vercel AI SDK)
- Plus any other optional parameters supported by the Vercel AI SDK
##### Return Type
Returns a vector representation (embedding) of the input text.
#### `<EmbedMany/>`
The [EmbedMany](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings#embedding-many-values) component generates embeddings for multiple text inputs in a single call, which is more efficient than making separate calls for each text.
```tsx
import * as gensx from "@gensx/core";
import { EmbedMany } from "@gensx/vercel-ai-sdk";
import { openai } from "@ai-sdk/openai";
const embeddingModel = openai.embedding("text-embedding-3-small");
```
```tsx
// Generates embeddings for multiple texts when executed (values are illustrative)
<EmbedMany
  values={["What is GenSX?", "How do I deploy a workflow?"]}
  model={embeddingModel}
/>
```
##### Props
The `EmbedMany` component accepts all parameters from the Vercel AI SDK's `embedMany` function:
- `values` (required): Array of texts to generate embeddings for
- `model` (required): The embedding model to use (from Vercel AI SDK)
- Plus any other optional parameters supported by the Vercel AI SDK
##### Return Type
Returns an array of vector representations (embeddings) for the input texts.
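The returned vectors can be compared directly, for example with cosine similarity. This is a sketch that reuses the imports and `embeddingModel` from the snippet above and assumes the component resolves to an array of numeric vectors as described:
```tsx
const embeddings = (await gensx.execute(
  <EmbedMany
    values={["What is GenSX?", "How do I deploy a workflow?"]}
    model={embeddingModel}
  />,
)) as number[][];

// Cosine similarity between the two returned vectors
const dot = (a: number[], b: number[]) => a.reduce((sum, x, i) => sum + x * b[i], 0);
const cosine = (a: number[], b: number[]) =>
  dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
console.log("similarity:", cosine(embeddings[0], embeddings[1]));
```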
#### `<GenerateImage/>`
The [GenerateImage](https://sdk.vercel.ai/docs/ai-sdk-core/image-generation) component generates images from text prompts using image generation models.
```tsx
import * as gensx from "@gensx/core";
import { GenerateImage } from "@gensx/vercel-ai-sdk";
import { openai } from "@ai-sdk/openai";
const imageModel = openai.image("dall-e-3");
```
```tsx
// Generates an image when executed (prompt is illustrative)
<GenerateImage
  prompt="A futuristic city skyline at sunset"
  model={imageModel}
/>
```
##### Props
The `GenerateImage` component accepts all parameters from the Vercel AI SDK's `experimental_generateImage` function:
- `prompt` (required): The text description of the image to generate
- `model` (required): The image generation model to use (from Vercel AI SDK)
- Plus any other optional parameters supported by the Vercel AI SDK
##### Return Type
Returns an object containing information about the generated image, including its URL.
## Usage with Different Models
The Vercel AI SDK supports multiple model providers. Here's how to use different providers with GenSX components:
```tsx
// OpenAI
import { openai } from "@ai-sdk/openai";
const openaiModel = openai("gpt-4o");
// Anthropic
import { anthropic } from "@ai-sdk/anthropic";
const anthropicModel = anthropic("claude-3-opus-20240229");
// Cohere
import { cohere } from "@ai-sdk/cohere";
const cohereModel = cohere("command-r-plus");
// Use with GenSX components
import * as gensx from "@gensx/core";
import { GenerateText } from "@gensx/vercel-ai-sdk";
// Prompts below are illustrative
const openaiResponse = await gensx.execute(
  <GenerateText prompt="Write a haiku about TypeScript" model={openaiModel} />,
);
const anthropicResponse = await gensx.execute(
  <GenerateText prompt="Write a haiku about TypeScript" model={anthropicModel} />,
);
```
For more information on the Vercel AI SDK, visit the [official documentation](https://sdk.vercel.ai/docs).
# OpenRouter
[OpenRouter](https://openrouter.ai) provides a unified API to access various AI models from different providers. You can use GenSX with OpenRouter by configuring the OpenAIProvider component with OpenRouter's API endpoint.
## Installation
To use OpenRouter with GenSX, you need to install the OpenAI package:
```bash
npm install @gensx/openai
```
## Configuration
Configure the `OpenAIProvider` with your OpenRouter API key and the OpenRouter base URL:
```tsx
import { OpenAIProvider } from "@gensx/openai";
<OpenAIProvider
  apiKey={process.env.OPENROUTER_API_KEY}
  baseURL="https://openrouter.ai/api/v1"
>
  {/* Your components here */}
</OpenAIProvider>;
```
## Example Usage
Here's a complete example of using OpenRouter with GenSX:
```tsx
import * as gensx from "@gensx/core";
import { ChatCompletion, OpenAIProvider } from "@gensx/openai";
interface RespondProps {
userInput: string;
}
type RespondOutput = string;
const GenerateText = gensx.Component<RespondProps, RespondOutput>(
  "GenerateText",
  ({ userInput }) => (
    // Model and messages are illustrative
    <ChatCompletion
      model="openai/gpt-4o"
      messages={[
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: userInput },
      ]}
    />
  ),
);
const OpenRouterExampleComponent = gensx.Component<
  { userInput: string },
  string
>("OpenRouter", ({ userInput }) => (
  <OpenAIProvider
    apiKey={process.env.OPENROUTER_API_KEY}
    baseURL="https://openrouter.ai/api/v1"
  >
    <GenerateText userInput={userInput} />
  </OpenAIProvider>
));
const workflow = gensx.Workflow(
"OpenRouterWorkflow",
OpenRouterExampleComponent,
);
const result = await workflow.run({
userInput: "Hi there! Write me a short story about a cat that can fly.",
});
```
## Specifying Models
When using OpenRouter, you can specify models using their full identifiers:
- `anthropic/claude-3.7-sonnet`
- `openai/gpt-4o`
- `google/gemini-1.5-pro`
- `mistral/mistral-large-latest`
Check the [OpenRouter documentation](https://openrouter.ai/docs) for a complete list of available models.
## Provider Options
You can use the `provider` property in the `ChatCompletion` component to specify OpenRouter-specific options:
```tsx
// Illustrative OpenRouter routing preferences; see the OpenRouter docs for supported fields
<ChatCompletion
  model="anthropic/claude-3.7-sonnet"
  messages={[{ role: "user", content: "Tell me about GenSX" }]}
  provider={{
    order: ["Anthropic"],
    allow_fallbacks: false,
  }}
/>
```
## Learn More
- [OpenRouter Documentation](https://openrouter.ai/docs)
- [GenSX OpenAI Components](/docs/component-reference/openai)
# OpenAI
The [@gensx/openai](https://www.npmjs.com/package/@gensx/openai) package provides OpenAI API compatible components for GenSX.
## Installation
To install the package, run the following command:
```bash
npm install @gensx/openai
```
Then import the components you need from the package:
```tsx
import { OpenAIProvider, GSXChatCompletion } from "@gensx/openai";
```
## Supported components
| Component | Description |
| :---------------------------------------------- | :---------------------------------------------------------------------------------------------- |
| [`OpenAIProvider`](#openaiprovider) | OpenAI Provider that handles configuration and authentication for child components |
| [`GSXChatCompletion`](#gsxchatcompletion) | Enhanced component with advanced features for OpenAI chat completions |
| [`ChatCompletion`](#chatcompletion) | Simplified component for chat completions with streamlined output interface |
| [`OpenAIChatCompletion`](#openaichatcompletion) | Low-level component that directly matches the OpenAI SDK interface for the Chat Completions API |
| [`OpenAIResponses`](#openairesponses) | Low-level component that directly matches the OpenAI SDK interface for the Responses API |
| [`OpenAIEmbedding`](#openaiembedding) | Low-level component that directly matches the OpenAI SDK interface for the Embeddings API |
## Component Comparison
The package provides three different chat completion components to suit different use cases:
- **OpenAIChatCompletion**: Direct mapping to the OpenAI API with identical inputs and outputs
- **GSXChatCompletion**: Enhanced component with additional features like structured output and automated tool calling
- **ChatCompletion**: Simplified interface that returns string responses or simple streams while maintaining identical inputs to the OpenAI API
## Reference
#### `<OpenAIProvider/>`
The `OpenAIProvider` component initializes and provides an OpenAI client instance to all child components. Any components that use OpenAI's API need to be wrapped in an `OpenAIProvider`.
```tsx
<OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
  {/* Your components here */}
</OpenAIProvider>
```
By configuring the baseURL, you can also use the `OpenAIProvider` with other OpenAI compatible APIs like [x.AI](https://docs.x.ai/docs/overview#featured-models) and [Groq](https://console.groq.com/docs/openai).
```tsx
// Example: Groq's OpenAI-compatible endpoint (values shown for illustration)
<OpenAIProvider
  apiKey={process.env.GROQ_API_KEY}
  baseURL="https://api.groq.com/openai/v1"
>
  {/* Your components here */}
</OpenAIProvider>
```
##### Props
The `OpenAIProvider` accepts all configuration options from the [OpenAI Node.js client library](https://github.com/openai/openai-node) including:
- `apiKey` (required): Your OpenAI API key
- `organization`: Optional organization ID
- `baseURL`: Optional API base URL
#### `<GSXChatCompletion/>`
The `GSXChatCompletion` component is an advanced chat completion component that provides enhanced features beyond the standard OpenAI API. It supports structured output, tool calling, and streaming, with automatic handling of tool execution.
To get a structured output, pass a [Zod schema](https://www.npmjs.com/package/zod) to the `outputSchema` prop.
```tsx
// Returns an object matching the outputSchema when executed
// (assumes `import { z } from "zod";` and an OpenAIProvider ancestor)
<GSXChatCompletion
  model="gpt-4o"
  messages={[
    { role: "user", content: "Get the user's name and age from: 'Alice is 30.'" },
  ]}
  outputSchema={z.object({ name: z.string(), age: z.number() })}
/>
```
To use tools, create a `GSXTool` object:
```tsx
const calculator = GSXTool.create({
name: "calculator",
description: "Perform mathematical calculations",
schema: z.object({
expression: z.string(),
}),
run: async ({ expression }) => {
return { result: eval(expression) };
},
});
```
Then pass the tool to the `tools` prop.
```tsx
<GSXChatCompletion
  model="gpt-4o"
  messages={[{ role: "user", content: "What is 1234 * 5678?" }]}
  tools={[calculator]}
/>
```
##### Props
The `GSXChatCompletion` component accepts all parameters from OpenAI's chat completion API plus additional options:
- `model` (required): ID of the model to use (e.g., `"gpt-4o"`, `"gpt-4o-mini"`)
- `messages` (required): Array of messages in the conversation
- `stream`: Whether to stream the response (when `true`, returns a `Stream`)
- `tools`: Array of `GSXTool` instances for function calling
- `outputSchema`: Zod schema for structured output (when provided, returns data matching the schema)
- `structuredOutputStrategy`: Strategy to use for structured output. Supported values are `default`, `tools`, and `response_format`.
- Plus all standard OpenAI chat completion parameters (temperature, maxTokens, etc.)
##### Return Types
The return type of `GSXChatCompletion` depends on the props:
- With `stream: true`: Returns `Stream` from OpenAI SDK
- With `outputSchema`: Returns data matching the provided Zod schema
- Default: Returns `GSXChatCompletionResult` (OpenAI response with message history)
#### `<ChatCompletion/>`
The `ChatCompletion` component provides a simplified interface for chat completions. It returns either a string or a simple stream of string tokens while having identical inputs to the OpenAI API.
```tsx
// Returns a string when executed
<ChatCompletion
  model="gpt-4o-mini"
  messages={[{ role: "user", content: "Tell me about GenSX" }]}
/>
// Returns an AsyncIterableIterator when executed
<ChatCompletion
  model="gpt-4o-mini"
  messages={[{ role: "user", content: "Tell me about GenSX" }]}
  stream={true}
/>
```
##### Props
The `ChatCompletion` component accepts all parameters from OpenAI's chat completion API:
- `model` (required): ID of the model to use (e.g., `"gpt-4o"`, `"gpt-4o-mini"`)
- `messages` (required): Array of messages in the conversation
- `temperature`: Sampling temperature (0-2)
- `stream`: Whether to stream the response
- `maxTokens`: Maximum number of tokens to generate
- `responseFormat`: Format of the response (example: `{ "type": "json_object" }`)
- `tools`: Array of `GSXTool` instances for function calling
##### Return Types
- With `stream: false` (default): Returns a string containing the model's response
- With `stream: true`: Returns an `AsyncIterableIterator` that yields tokens as they're generated
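The streaming variant can be consumed with `for await`. This is a sketch that assumes execution via `gensx.execute` inside an `OpenAIProvider`:
```tsx
import * as gensx from "@gensx/core";
import { OpenAIProvider, ChatCompletion } from "@gensx/openai";

const stream = (await gensx.execute(
  <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
    <ChatCompletion
      model="gpt-4o-mini"
      messages={[{ role: "user", content: "Tell me about GenSX" }]}
      stream={true}
    />
  </OpenAIProvider>,
)) as AsyncIterableIterator<string>;

// Print tokens as they arrive
for await (const token of stream) {
  process.stdout.write(token);
}
```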
#### `<OpenAIChatCompletion/>`
The `OpenAIChatCompletion` component is a low-level component that directly maps to the OpenAI SDK. It has identical inputs and outputs to the OpenAI API, making it suitable for advanced use cases where you need full control.
```tsx
<OpenAIChatCompletion
  model="gpt-4o"
  messages={[{ role: "user", content: "Tell me about GenSX" }]}
/>
```
##### Props
The `OpenAIChatCompletion` component accepts all parameters from the OpenAI SDK's `chat.completions.create` method:
- `model` (required): ID of the model to use
- `messages` (required): Array of messages in the conversation
- `temperature`: Sampling temperature
- `stream`: Whether to stream the response
- `maxTokens`: Maximum number of tokens to generate
- `tools`: Array of OpenAI tool definitions for function calling
- Plus all other OpenAI chat completion parameters
##### Return Types
- With `stream: false` (default): Returns the full `ChatCompletionOutput` object from OpenAI SDK
- With `stream: true`: Returns a `Stream` from OpenAI SDK
#### `<OpenAIResponses/>`
The `OpenAIResponses` component is a low-level component that directly maps to the [OpenAI Responses API](https://platform.openai.com/docs/api-reference/responses). It has identical inputs and outputs to the OpenAI Responses API, making it suitable for advanced use cases where you need full control.
```tsx
<OpenAIResponses model="gpt-4o" input="Tell me about GenSX" />
```
##### Props
The `OpenAIResponses` component accepts all parameters from the OpenAI SDK's `responses.create` method:
- `model` (required): ID of the model to use
- `input` (required): The input to the response
- Plus all other optional parameters from the OpenAI Responses API
##### Return Types
- With `stream: false` (default): Returns the full `Response` object from OpenAI SDK
- With `stream: true`: Returns a `Stream` from OpenAI SDK
#### `<OpenAIEmbedding/>`
The `OpenAIEmbedding` component is a low-level component that directly maps to the OpenAI SDK's `embeddings.create` method. It has identical inputs and outputs to the OpenAI Embeddings API, making it suitable for advanced use cases where you need full control.
```tsx
<OpenAIEmbedding
  model="text-embedding-3-small"
  input="GenSX is a framework for building LLM workflows."
/>
```
##### Props
The `OpenAIEmbedding` component accepts all parameters from the OpenAI SDK's `embeddings.create` method:
- `model` (required): ID of the model to use
- `input` (required): The input to the embedding (`string` or `string[]`)
- Plus all other optional parameters from the OpenAI Embeddings API
##### Return Types
- Returns the full `CreateEmbeddingResponse` object from OpenAI SDK
# MCP Components
The `@gensx/mcp` package provides Model Context Protocol (MCP) components for GenSX: a provider and a context helper that make it easy to call the tools, resources, and prompts exposed by an MCP server.
It accepts either a server command and arguments (and manages the lifecycle of that MCP process for you) or a pre-connected MCP client as the source of MCP resources.
See the example [here](../examples/mcp).
## Installation
To install the package, run the following command:
```bash
npm install @gensx/mcp
```
## Supported components and utilities
| Component/Utility | Description |
| :---------------------------------------------------- | :--------------------------------------------------------------------- |
| [`createMCPServerContext`](#createmcpservercontext) | Creates a context provider and hook for accessing MCP server resources |
| [`MCPTool`](#mcptool) | Wrapper used to call a tool resource provided by an MCP server |
| [`MCPResource`](#mcpresource) | Wrapper used to call a resource provided by an MCP server |
| [`MCPResourceTemplate`](#mcpresourcetemplate) | Wrapper used to call a resource template provided by an MCP server |
| [`MCPPrompt`](#mcpprompt) | Wrapper used to call a prompt resource provided by an MCP server |
## Reference
### `createMCPServerContext()`
The `createMCPServerContext` function creates a context provider and hook for accessing MCP server resources. It returns an object containing a Provider component and a useContext hook.
If a server command is provided, it will be used to start the MCP server, and close the connection when the component is unmounted. Otherwise, the MCP client will be used to connect to an existing server.
```tsx
import { createMCPServerContext } from "@gensx/mcp";
const { Provider, useContext } = createMCPServerContext({
serverCommand: "your-server-command",
serverArgs: ["--arg1", "--arg2"],
// Or provide a client directly
client: yourMCPClient,
});
// Use the Provider to wrap your application
<Provider>
  <MyComponent />
</Provider>;
// Use the context hook in your components
const MyComponent = () => {
const { tools, resources, resourceTemplates, prompts } = useContext();
// Use the MCP server context...
};
```
#### Parameters
The `createMCPServerContext` function accepts a server definition object with the following properties:
- Either:
- `serverCommand`: The command to start the MCP server
- `serverArgs`: Array of arguments for the server command
- Or:
- `client`: A pre-configured MCP client instance
#### Return Value
Returns an object containing:
- `Provider`: A React component that provides the MCP server context
- `useContext`: A hook that returns the current MCP server context
### Types
#### MCPServerContext
The context object returned by `useContext` contains:
```tsx
interface MCPServerContext {
tools: MCPTool[]; // Available tools in the server
resources: MCPResource[]; // Available resources
resourceTemplates: MCPResourceTemplate[]; // Available resource templates
prompts: MCPPrompt[]; // Available prompts
}
```
#### MCPTool
Wrapper used to call a tool resource provided by an MCP server. This makes it easy to call any of the tools provided by an MCP server, with the correct arguments and parameters.
#### MCPResource
Wrapper used to call a resource provided by an MCP server. This makes it easy to access any of the resources provided by an MCP server.
#### MCPResourceTemplate
Wrapper used to call a resource template provided by an MCP server. This makes it easy to access any of the resource templates provided by an MCP server, with the correct arguments and parameters.
#### MCPPrompt
Wrapper used to call a prompt resource provided by an MCP server. This makes it easy to access any of the prompts provided by an MCP server, with the correct arguments and parameters.
## Example Usage
```tsx
import { createMCPServerContext } from "@gensx/mcp";
import { OpenAIProvider, ChatCompletion } from "@gensx/openai";
// Create the MCP server context
const { Provider, useContext: useMCPContext } = createMCPServerContext({
serverCommand: "npx",
  // Placeholder package name; replace with the MCP server package you want to run
  serverArgs: ["-y", "@your-org/your-mcp-server"],
});
// Wrap your application with the Provider
const App = () => (
  <Provider>
    <MCPComponent />
  </Provider>
);
// Use the context in your components
const MCPComponent = () => {
const { tools, resources } = useMCPContext();
  return (
    // Illustrative: expose the MCP tools to the model via the tools prop
    <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
      <ChatCompletion
        model="gpt-4o"
        messages={[{ role: "user", content: "What tools do you have access to?" }]}
        tools={tools.map((tool) => tool.asGSXTool())}
      />
    </OpenAIProvider>
  );
};
```
# Anthropic
The [@gensx/anthropic](https://www.npmjs.com/package/@gensx/anthropic) package provides [Anthropic API](https://docs.anthropic.com/en/api/getting-started) compatible components for GenSX.
## Installation
To install the package, run the following command:
```bash
npm install @gensx/anthropic
```
Then import the components you need from the package:
```tsx
import { AnthropicProvider, GSXChatCompletion } from "@gensx/anthropic";
```
## Supported components
| Component | Description |
| :---------------------------------------------------- | :------------------------------------------------------------------------------------ |
| [`AnthropicProvider`](#anthropicprovider) | Anthropic Provider that handles configuration and authentication for child components |
| [`GSXChatCompletion`](#gsxchatcompletion) | Enhanced component with advanced features for Anthropic chat completions |
| [`ChatCompletion`](#chatcompletion) | Simplified component for chat completions with streamlined output interface |
| [`AnthropicChatCompletion`](#anthropicchatcompletion) | Low-level component that directly matches the Anthropic SDK interface |
## Component Comparison
The package provides three different chat completion components to suit different use cases:
- **AnthropicChatCompletion**: Direct mapping to the Anthropic API with identical inputs and outputs
- **GSXChatCompletion**: Enhanced component with additional features like structured output and automated tool calling
- **ChatCompletion**: Simplified interface that returns string responses or simple streams while maintaining identical inputs to the Anthropic API
## Reference
#### `<AnthropicProvider/>`
The `AnthropicProvider` component initializes and provides an Anthropic client instance to all child components. Any components that use Anthropic's API need to be wrapped in an `AnthropicProvider`.
```tsx
<AnthropicProvider apiKey={process.env.ANTHROPIC_API_KEY}>
  {/* Your components here */}
</AnthropicProvider>
```
##### Props
The `AnthropicProvider` accepts all configuration options from the [Anthropic Node.js client library](https://github.com/anthropics/anthropic-sdk-typescript) including:
- `apiKey` (required): Your Anthropic API key
- Plus all other Anthropic client configuration options
#### `<GSXChatCompletion/>`
The `GSXChatCompletion` component is an advanced chat completion component that provides enhanced features beyond the standard Anthropic API. It supports structured output, tool calling, and streaming, with automatic handling of tool execution.
To get a structured output, pass a [Zod schema](https://www.npmjs.com/package/zod) to the `outputSchema` prop.
```tsx
// Returns an object matching the outputSchema when executed
// (assumes `import { z } from "zod";` and an AnthropicProvider ancestor)
<GSXChatCompletion
  model="claude-3-7-sonnet-latest"
  max_tokens={1024}
  messages={[
    { role: "user", content: "Get the user's name and age from: 'Alice is 30.'" },
  ]}
  outputSchema={z.object({ name: z.string(), age: z.number() })}
/>
```
To use tools, create a `GSXTool` object:
```tsx
const weatherTool = GSXTool.create({
name: "get_weather",
description: "Get the weather for a given location",
schema: z.object({
location: z.string(),
}),
run: async ({ location }) => {
return { weather: "sunny" };
},
});
```
Then pass the tool to the `tools` prop.
```tsx
<GSXChatCompletion
  model="claude-3-7-sonnet-latest"
  max_tokens={1024}
  messages={[{ role: "user", content: "What's the weather in Seattle?" }]}
  tools={[weatherTool]}
/>
```
##### Props
The `GSXChatCompletion` component accepts all parameters from Anthropic's messages API plus additional options:
- `model` (required): ID of the model to use (e.g., `"claude-3-7-sonnet-latest"`, `"claude-3-5-haiku-latest"`)
- `messages` (required): Array of messages in the conversation
- `max_tokens` (required): Maximum number of tokens to generate
- `system`: System prompt to set the behavior of the assistant
- `stream`: Whether to stream the response (when `true`, returns a `Stream`)
- `tools`: Array of `GSXTool` instances for function calling
- `outputSchema`: Zod schema for structured output (when provided, returns data matching the schema)
- `temperature`: Sampling temperature
- Plus all standard Anthropic message parameters
##### Return Types
The return type of `GSXChatCompletion` depends on the props:
- With `stream: true`: Returns `Stream` from Anthropic SDK
- With `outputSchema`: Returns data matching the provided Zod schema
- Default: Returns `GSXChatCompletionResult` (Anthropic response with message history)
#### `<ChatCompletion/>`
The `ChatCompletion` component provides a simplified interface for chat completions. It returns either a string or a simple stream of string tokens while having identical inputs to the Anthropic API.
```tsx
// Returns a string when executed
<ChatCompletion
  model="claude-3-5-sonnet-latest"
  max_tokens={1024}
  messages={[{ role: "user", content: "Tell me about GenSX" }]}
/>
// Returns an AsyncIterableIterator when executed
<ChatCompletion
  model="claude-3-5-sonnet-latest"
  max_tokens={1024}
  messages={[{ role: "user", content: "Tell me about GenSX" }]}
  stream={true}
/>
```
##### Props
The `ChatCompletion` component accepts all parameters from Anthropic's messages API:
- `model` (required): ID of the model to use (e.g., `"claude-3-5-sonnet-latest"`, `"claude-3-haiku-latest"`)
- `messages` (required): Array of messages in the conversation
- `max_tokens` (required): Maximum number of tokens to generate
- `system`: System prompt to set the behavior of the assistant
- `temperature`: Sampling temperature
- `stream`: Whether to stream the response
- `tools`: Array of `GSXTool` instances for function calling (not compatible with streaming)
##### Return Types
- With `stream: false` (default): Returns a string containing the model's response
- With `stream: true`: Returns an `AsyncIterableIterator` that yields tokens as they're generated
#### `<AnthropicChatCompletion/>`
The `AnthropicChatCompletion` component is a low-level component that directly maps to the Anthropic SDK. It has identical inputs and outputs to the Anthropic API, making it suitable for advanced use cases where you need full control.
```tsx
<AnthropicChatCompletion
  model="claude-3-7-sonnet-latest"
  max_tokens={1024}
  messages={[{ role: "user", content: "Tell me about GenSX" }]}
/>
```
##### Props
The `AnthropicChatCompletion` component accepts all parameters from the Anthropic SDK's `messages.create` method:
- `model` (required): ID of the model to use
- `messages` (required): Array of messages in the conversation
- `max_tokens` (required): Maximum number of tokens to generate
- `system`: System prompt to set the behavior of the assistant
- `temperature`: Sampling temperature
- `stream`: Whether to stream the response
- `tools`: Array of Anthropic tool definitions for function calling
- Plus all other Anthropic message parameters
##### Return Types
- With `stream: false` (default): Returns the full `Message` object from Anthropic SDK
- With `stream: true`: Returns a `Stream` from Anthropic SDK
# Serverless deployments
Deploy your GenSX workflows as serverless APIs with support for both synchronous and asynchronous execution, as well as long-running operations.
## Deploy with the CLI
Projects are a collection of workflows and environment variables that deploy together into an `environment` that you configure.
Each project has a `gensx.yaml` file at the root and a `workflows.tsx` file that exports all of your deployable workflows.
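A minimal `workflows.tsx` might look like the sketch below; the component and workflow names are illustrative:
```tsx
// src/workflows.tsx
import * as gensx from "@gensx/core";

const Greeter = gensx.Component<{ name: string }, string>(
  "Greeter",
  ({ name }) => `Hello, ${name}!`,
);

// Every exported workflow becomes a deployable API endpoint
export const GreeterWorkflow = gensx.Workflow("GreeterWorkflow", Greeter);
```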
Run `gensx deploy` from the root of your project to deploy it:
```bash
# Deploy the workflow file with default settings
npx gensx deploy src/workflows.tsx
# Deploy with environment variables
npx gensx deploy src/workflows.tsx -ev OPENAI_API_KEY
```
Environment variables are encrypted with per-project encryption keys.
### Deploying to different environments
GenSX supports multiple environments within a project (such as development, staging, and production) to help manage your deployment lifecycle.
```bash
# Deploy to a specific environment
npx gensx deploy src/workflows.tsx --env production
# Deploy to staging with environment-specific variables
npx gensx deploy src/workflows.tsx --env staging -ev OPENAI_API_KEY -ev LOG_LEVEL=debug
```
Each environment can have its own configuration and environment variables, allowing you to test in isolation before promoting changes to production.
When you deploy a workflow, GenSX:
1. Builds your TypeScript code for production
2. Bundles your dependencies
3. Uploads the package to GenSX Cloud
4. Configures serverless infrastructure
5. Creates API endpoints for each exported workflow
6. Encrypts and sets up environment variables
7. Activates the deployment
The entire process typically takes 15 seconds.
## Running workflows from the CLI
Once deployed, you can execute workflows directly from the CLI:
```bash
# Run a workflow synchronously with input data
npx gensx run MyWorkflow --input '{"prompt":"Generate a business name"}' --project my-app
# Run and save the output to a file
npx gensx run MyWorkflow --input '{"prompt":"Generate a business name"}' --output results.json
# Run asynchronously (start the workflow but don't wait for completion)
npx gensx run MyWorkflow --input '{"prompt":"Generate a business name"}' --project my-app --no-wait
```
### CLI run options
| Option | Description |
| ----------- | ----------------------------------------------- |
| `--input` | JSON string with input data |
| `--no-wait` | Do not wait for workflow to finish |
| `--output` | Save results to a file |
| `--project` | Specify the project name |
| `--env` | Specify the environment name |
## API endpoints
Each workflow is exposed as an API endpoint:
```
https://api.gensx.com/org/{org}/projects/{project}/environments/{environment}/workflows/{workflow}
```
- `{org}` - Your organization ID
- `{project}` - Your project name
- `{environment}` - The environment (defaults to "default")
- `{workflow}` - The name of your workflow
For example, if you have a workflow named `BlogWriter` in project `content-tools`, the endpoint would be:
```
https://api.gensx.com/org/your-org/projects/content-tools/environments/default/workflows/BlogWriter
```
## Authentication
All GenSX Cloud API endpoints require authentication using your GenSX API key as a bearer token:
```bash
curl -X POST https://api.gensx.com/org/your-org/projects/your-project/environments/default/workflows/YourWorkflow \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"prompt": "Tell me about GenSX"}'
```
### Obtaining an API Key
To generate or manage API keys:
1. Log in to the [GenSX Cloud console](https://app.gensx.com)
2. Navigate to Settings > API Keys
3. Create a new key
## Execution modes
### Synchronous Execution
By default, API calls execute synchronously, returning the result when the workflow completes:
```bash
curl -X POST https://api.gensx.com/org/your-org/projects/your-project/environments/default/workflows/YourWorkflow \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"prompt": "Tell me about GenSX"}'
```
### Asynchronous execution
For longer-running workflows, use asynchronous execution by calling the `/start` endpoint:
```bash
# Request asynchronous execution
curl -X POST https://api.gensx.com/org/your-org/projects/your-project/environments/default/workflows/YourWorkflow/start \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"prompt": "Tell me about GenSX"}'
# Response includes an execution ID
# {
# "status": "ok",
# "data": {
# "executionId": "exec_123abc"
# }
# }
# Check status later
curl -X GET https://api.gensx.com/executions/exec_123abc \
-H "Authorization: Bearer your-api-key"
```
### Streaming responses
For workflows that support streaming, you can receive tokens as they're generated:
```bash
curl -X POST https://api.gensx.com/org/your-org/projects/your-project/environments/default/workflows/YourWorkflow \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"prompt": "Tell me about GenSX", "stream": true }'
```
The response is delivered as a stream of server-sent events (SSE).
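You can read the SSE stream with any HTTP client. This is a minimal sketch using `fetch`; the exact event payload shape depends on your workflow's output:
```tsx
const response = await fetch(
  "https://api.gensx.com/org/your-org/projects/your-project/environments/default/workflows/YourWorkflow",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.GENSX_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ prompt: "Tell me about GenSX", stream: true }),
  },
);

// Print raw "data: ..." SSE lines as they arrive
const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  process.stdout.write(decoder.decode(value));
}
```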
## Execution time limits
GenSX Cloud is optimized for long-running workflows and agents, with generous execution time limits:
| Plan | Maximum Execution Time |
| ---------- | ----------------------- |
| Free Tier | Up to 5 minutes |
| Pro Tier | Up to 60 minutes |
| Enterprise | Custom limits available |
These extended timeouts make GenSX ideal for complex AI workflows that might involve:
- Multiple LLM calls in sequence
- Real-time agent tool use
- Complex data processing
- Extensive RAG operations
## Cold starts and performance
The GenSX Cloud serverless architecture is designed to minimize cold starts:
- **Millisecond-level cold starts**: Initial cold starts typically range from 10-30ms
- **Warm execution**: Subsequent executions of recently used workflows start in 1-5ms
- **Auto-scaling**: Infrastructure automatically scales with workloads
## Managing deployments in the console
GenSX Cloud provides a console to run, debug, and view all of your workflows.
### Viewing workflows

1. Log in to [app.gensx.com](https://app.gensx.com)
2. Navigate to your project and environment
3. The workflows tab shows all deployed workflows with status information
4. Click on a workflow to view its details, including schema, recent executions, and performance metrics
The workflow page includes API documentation and code snippets that you can copy/paste to run your workflow from within another app:

### Running workflows manually
You can test workflows directly from the console:
1. Navigate to the workflow detail page
2. Click the "Run" button
3. Enter JSON input in the provided editor
4. Choose execution mode (sync, async, or streaming)
5. View results directly in the console

### Viewing execution history
Each workflow execution generates a trace you can review:
1. Navigate to the "Executions" tab in your project
2. Browse the list of recent executions
3. Click on any execution to see detailed traces
4. Explore the component tree, inputs/outputs, and execution timeline
## Next steps
- [Learn about cloud storage options](/docs/cloud/storage)
- [Explore observability and tracing](/docs/cloud/observability)
# Projects and environments
GenSX organizes your workflows and deployments using a flexible structure of projects and environments, making it easy to match the rest of your application architecture and CI/CD topology. Projects are a top level resource and environments are instances of a project that you deploy to.
## Project structure
A project in GenSX is a collection of related workflows that are deployed, managed, and monitored together:
- **Projects as logical units**: Group related workflows that serve a common purpose
- **Shared configuration**: Apply settings across all workflows in a project
- **Collective deployment**: Deploy all workflows within a project in one operation
- **Unified monitoring**: View traces and metrics for an entire project
Projects typically correspond to a codebase or application that contains multiple workflows.
## Environment separation
Within each project, you can have multiple environments. For example, you could create three environments for each project:
- **Development**: For building and testing new features
- **Staging**: For pre-production validation
- **Production**: For live, user-facing workflows
You have full control over your environments so you can organize them however you see fit.
Each environment maintains separate:
- Workflow deployments
- Configuration and environment variables
- Execution traces and monitoring data
## Configuring projects
### Project configuration file
Projects are defined using a `gensx.yaml` file at the root of your codebase:
```yaml
# gensx.yaml
projectName: customer-support-bot
description: AI assistant for customer support
```
This configuration applies to both local development and cloud deployments.
## Working with environments
### Deploying to different environments
Deploy your workflows to specific environments using the CLI:
```bash
# Deploy to the default environment
gensx deploy src/workflows.tsx
# Deploy to a staging environment
gensx deploy src/workflows.tsx --env staging
# Deploy to production
gensx deploy src/workflows.tsx --env production
```
### Environment-specific configuration
Set environment-specific variables during deployment:
```bash
# Development-specific settings
gensx deploy src/workflows.tsx --env development \
-ev LOG_LEVEL=debug \
-ev OPENAI_API_KEY
# Production-specific settings
gensx deploy src/workflows.tsx --env production \
-ev LOG_LEVEL=error \
-ev OPENAI_API_KEY
```
## Projects in the GenSX Console
The GenSX Console organizes everything by project and environment:

Selecting an environment brings you to the workflows view:

When you click into a workflow, you can trigger it within the console if you've deployed it to GenSX Cloud:

You can also see API documentation and sample code for calling that workflow:

## Next steps
- [Configure serverless deployments](/docs/cloud/serverless-deployments) for your projects
- [Set up local development](/docs/cloud/local-development) for testing
- [Learn about observability](/docs/cloud/observability) across environments
# Pricing & limits
GenSX Cloud offers flexible pricing tiers designed to scale with your needs, including a free tier for individuals.
## Pricing tiers
GenSX Cloud offers three pricing tiers:
- **Free Tier**: Perfect for learning, experimentation, and small projects
- **Pro Tier** ($20/month per developer): For professional development and production workloads
- **Enterprise**: Custom pricing for large-scale deployments with additional features and support
Each plan includes monthly allowances for compute, tracing, and storage:
| Resource | Free Tier | Pro Tier ($20/month/dev) | Overage/Action |
| ------------------ | ---------------------- | ------------------------ | ---------------- |
| Serverless Compute | 50K sec | 500K sec | $0.00003/sec |
| Traces (events) | 100K events | 1M events | $0.20/10K |
| Blob Storage | 500MB | 5GB | $0.25/GB |
| SQLite Storage | 500MB | 1GB | $1.50/GB |
| Vector Storage | 250MB | 1GB | $1.00/GB |
| Execution time | Up to 5 minutes | Up to 60 minutes | Custom |
| Observability | 7 days trace retention | 30 days trace retention | Custom retention |
The free tier is only for individuals, and you will need to upgrade to the Pro tier before adding additional members to your org.
## Limits
When limits are reached, additional operations may be throttled or declined. Exceeding any limits on the free tier requires upgrading to the Pro tier.
### Serverless
- Free tier: Maximum workflow execution time of 5 minutes
- Pro tier: Maximum workflow execution time of 60 minutes
- Maximum payload size: 10MB per request
- Maximum response size: 10MB
### Blob storage
- Maximum blob size: 100MB
- Maximum number of blobs: Unlimited (subject to total storage limits)
- Rate limits: 100 operations/second on free tier, 1000 operations/second on pro tier
### Databases
- Maximum database size: Limited by your storage quota
- Maximum databases: unlimited
- SQLite Writes: 1M rows/month on free tier, 10M rows/month on pro tier
- SQLite Reads: 100M rows/month on free tier, 1B rows/month on pro tier
### Full-text & vector search
- Maximum documents per search namespace: 100M
- Maximum writes: 10k writes/second per namespace
- Vector Writes: 1GB written/month on free tier, 10GB written/month on pro tier
- Vector Reads: 10GB queried/month on free tier, 100GB queried/month on pro tier
- Maximum vector dimensions: 10,752
- Maximum attributes per document: 256
## Enterprise Plans
Enterprise plans include:
- **Higher resource limits** with customizable quotas
- **Volume discounts** on all resources
- **Advanced security features** including SSO, RBAC, and audit logs
- **Priority support** with dedicated account management
- **SLA guarantees** for uptime and performance
- **Custom integrations** with your existing infrastructure
- **Training and onboarding** for your team
- **SOC2** certification and custom data processing agreements.
To learn more about Enterprise plans, [contact our sales team](mailto:contact@gensx.com).
## Try GenSX Cloud
Start with our free tier today - no credit card required:
[Get Started for Free](https://signin.gensx.com/sign-up)
For any questions about pricing or custom plans, please [contact our sales team](mailto:contact@gensx.com).
# Observability & tracing
GenSX provides observability tools that make it easy to understand, debug, and optimize your workflows. Every component execution is automatically traced, giving you full visibility into what's happening inside your LLM workflows. You can view traces in realtime as workflows execute, and view historical traces to debug production issues like hallucinations.
## Viewing traces
When you run a workflow, GenSX automatically generates a trace that captures the entire execution flow, including all component inputs, outputs, and timing information.
### Accessing the trace viewer
The GenSX cloud console includes a trace viewer. You can access traces in several ways:
1. **From the Console**: Navigate to your project in the [GenSX Console](https://app.gensx.com) and select the "Executions" tab
2. **Trace URL**: When running a workflow with `printUrl: true`, a direct link to the trace is printed to the console
3. **API Response**: When running a workflow in the cloud, the execution ID from API responses can be used to view traces in the console
```tsx
// Executing a workflow with trace URL printing
const result = await MyWorkflow.run(
{ input: "What is GenSX?" },
{ printUrl: true },
);
// Console output includes:
// [GenSX] View execution at: https://app.gensx.com/your_org/executions/your_execution_id
```
### Understanding the flame graph
The flame graph visualizes the entire execution tree including branches, all nested sub-components, and timing:

- **Component hierarchy**: See the nested structure of your components and their relationships
- **Execution timing**: The width of each bar represents the relative execution time
- **Status indicators**: Quickly spot errors or warnings with color coding
- **Component filtering**: Focus on specific components or component types
Click on any component in the flame graph to inspect its details, including inputs, outputs, and timing information.
### Viewing component inputs and outputs
For each component in your workflow, you can inspect:
1. **Input properties**: All props passed to the component
2. **Output values**: The data returned by the component
3. **Execution duration**: How long the component took to execute
4. **Metadata**: Additional information like token counts for LLM calls

This visualization is particularly valuable for debugging production and user-reported issues like hallucinations.
### Viewing historical traces
The GenSX Console maintains a history of all your workflow executions, allowing you to:
- **Compare executions**: See how behavior changes across different runs
- **Identify patterns**: Spot recurring issues or performance bottlenecks
- **Filter by status**: Focus on successful, failed, or in-progress executions
- **Search**: Find historical executions
Historical traces are automatically organized by project and environment, making it easy to find relevant executions.
## Configuring traces
GenSX provides flexible options for configuring and organizing traces for the GenSX Cloud serverless platform, local development, and any other deployment platform like Vercel, Cloudflare, and AWS.
### Tracing GenSX Cloud workflows
When running workflows deployed to GenSX Cloud, tracing is automatically configured:
- **Project context**: Traces are associated with the correct project
- **Environment segregation**: Development, staging, and production traces are kept separate
- **Authentication**: API keys and organization information are handled automatically
- **Retention**: Traces are stored according to your plan limits
No additional configuration is needed; everything works out of the box.
### Tracing on other deployment platforms
To enable tracing for workflows deployed outside of GenSX Cloud (like AWS Lambda, GCP Cloud Run, etc.), you need to set several environment variables:
```bash
# Required variables
GENSX_API_KEY=your_api_key_here
GENSX_ORG=your_gensx_org_name
GENSX_PROJECT=your_project_name
# Optional variables
GENSX_ENVIRONMENT=your_environment_name # Separate traces into specific environments
GENSX_CHECKPOINTS=false # Set to false to explicitly disable trace collection
```
### Configuring traces for local development
For local development, the tracing configuration is automatically inferred from:
1. The `gensx.yaml` file in your project root
2. Your local configuration managed by the `gensx` CLI in `~/.config/gensx/config`
3. Optionally the `GENSX_ENVIRONMENT` environment variable can be set to separate local traces from other environments
The local development server started with `gensx start` uses this same configuration scheme as well.
### Organizing traces by environment
GenSX allows you to organize traces by environment (such as development, staging, production, etc.) to keep your debugging data well-structured:
```bash
# Deploy to a specific environment with its own traces
gensx deploy src/workflows.tsx --env production
```
In the GenSX Console, you can filter traces by environment to focus on relevant executions. This separation also helps when:
- Debugging issues specific to an environment
- Comparing behavior between environments
- Isolating production traces from development noise
## Instrumenting additional code
Every GenSX component is automatically traced. If you want to trace additional sub-steps of a workflow, wrap that code in a `gensx.Component` and execute it via `myComponent.run(props)`.
```tsx
import * as gensx from "@gensx/core";
const MyWorkflow = gensx.Component("MyWorkflow", async ({ input }) => {
// Step 1: Process input
const processedData = await ProcessData.run({ data: input });
// Step 2: Generate response
const response = await GenerateResponse.run({ data: processedData });
return response;
});
// Create a component to trace a specific processing step
const ProcessData = gensx.Component("ProcessData", async ({ data }) => {
// This entire function execution will be captured in traces
const parsedData = JSON.parse(data);
const enrichedData = await fetchAdditionalInfo(parsedData);
return enrichedData;
});
// Create a component to trace response generation
const GenerateResponse = gensx.Component(
"GenerateResponse",
async ({ data }) => {
// This will appear as a separate node in the trace
return `Processed result: ${JSON.stringify(data)}`;
},
);
```
## Secrets scrubbing
GenSX enables you to configure which input props and outputs are marked as secrets and redacted from traces. Scrubbing happens locally before traces are sent to GenSX Cloud.
### How secrets scrubbing works
When a component executes, GenSX automatically:
1. Identifies secrets in component props and outputs
2. Replaces these secrets with `[secret]` in the trace data
3. Propagates secret detection across the entire component hierarchy
Even if a secret is passed down through multiple components, it remains scrubbed in all traces.
### Marking secrets in component props
To mark specific props as containing secrets:
```tsx
import * as gensx from "@gensx/core";
const AuthenticatedClient = gensx.Component(
"AuthenticatedClient",
({ apiKey, endpoint, query, credentials }) => {
// Use apiKey securely, knowing it won't appear in traces
return fetchData(endpoint, query, apiKey, credentials);
},
{
// Mark these props as containing sensitive data
secretProps: ["apiKey", "credentials.privateKey"],
},
);
```
The `secretProps` option can specify both top-level props and nested paths using dot notation.
### Marking component outputs as secrets
For components that might return sensitive information, you can mark the entire output as sensitive:
```tsx
const GenerateCredentials = gensx.Component(
"GenerateCredentials",
async ({ userId }) => {
// This entire output will be marked as secret
return {
accessToken: "sk-1234567890abcdef",
refreshToken: "rt-0987654321fedcba",
expiresAt: Date.now() + 3600000,
};
},
{
secretOutputs: true,
},
);
```
When `secretOutputs` is set to `true`, the entire output object or value will be treated as sensitive and masked in traces.
## Limits
GenSX observability features have certain limits based on your subscription tier:
| Feature | Free Tier | Pro Tier ($20/month/dev) | Enterprise |
| ------------------------- | -------------- | ------------------------ | ---------- |
| Traced components | 100K per month | 1M per month | Custom |
| Overage cost | N/A | $0.20 per 10K components | Custom |
| Trace retention | 7 days | 30 days | Custom |
| Maximum input/output size | 4MB each | 4MB each | 4MB each |
A few important notes on these limits:
- **Component count**: Each component execution in your workflow counts as one traced component
- **Size limits**: Component inputs and outputs are limited to 4MB each; larger data is truncated
- **Secret scrubbing**: API keys and sensitive data are automatically redacted from traces
- **Retention**: After the retention period, traces are automatically deleted
For use cases requiring higher limits or longer retention, contact the GenSX team for enterprise options.
## Next steps
- [Set up serverless deployments](/docs/cloud/serverless-deployments) to automatically trace cloud workflows
- [Learn about local development](/docs/cloud/local-development) for testing with traces
- [Explore project and environment organization](/docs/cloud/projects-environments) to structure your traces
# GenSX Cloud MCP server
`@gensx/gensx-cloud-mcp` is a Model Context Protocol server for [GenSX Cloud](/docs/cloud) workflows. It enables you to connect your GenSX Cloud workflows to MCP-compatible tools like Claude desktop, Cursor, and more.

## Usage
Once you have run [`gensx deploy`](/docs/cli-reference/deploy) to deploy your project to the [GenSX Cloud serverless runtime](/docs/cloud/serverless-deployments), you can consume those workflows via the `@gensx/gensx-cloud-mcp` server.
MCP-compatible tools use a standard JSON file to configure available MCP servers.
Update your MCP config file for your tool of choice to include the following:
```json
{
"mcpServers": {
"gensx": {
"command": "npx",
"args": [
"-y",
"@gensx/gensx-cloud-mcp",
"you_org_name",
"your_project_name",
"your_environment_name"
]
}
}
}
```
Your MCP client will run this command automatically at startup and handle acquiring the GenSX Cloud MCP server on your behalf. See the [Claude desktop](https://modelcontextprotocol.io/quickstart/user), and [Cursor docs](https://docs.cursor.com/context/model-context-protocol) on configuring MCP servers for more details.
By default, the server reads your API credentials from the config saved by running the `gensx login` command. Alternatively, you can specify your GenSX API key as an environment variable in your MCP config:
```json
{
"mcpServers": {
"gensx": {
"command": "npx",
"args": [
"@gensx/gensx-cloud-mcp",
"you_org_name",
"your_project_name",
"your_environment_name"
],
"env": {
"GENSX_API_KEY": "my_api_key"
}
}
}
}
```
The GenSX build process automatically extracts input and output schemas from your typescript types, so no additional configuration or manual `zod` schema is required to consume your workflows from an MCP server.
# Local development server
GenSX provides a local development experience that mirrors the cloud environment, making it easy to build and test workflows on your machine before deploying them.
## Starting the dev server
The `gensx start` command launches a local development server with hot-reloading:
```bash
gensx start ./src/workflows.tsx
```
```bash
Starting GenSX Dev Server...
Starting development server...
Compilation completed
Generating schema
Importing compiled JavaScript file: /Users/evan/code/gensx-console/samples/support-tools/dist/src/workflows.js
GenSX Dev Server running at http://localhost:1337
Swagger UI available at http://localhost:1337/swagger-ui
Available workflows:
- RAGWorkflow: http://localhost:1337/workflows/RAGWorkflow
- AnalyzeDiscordWorkflow: http://localhost:1337/workflows/AnalyzeDiscordWorkflow
- TextToSQLWorkflow: http://localhost:1337/workflows/TextToSQLWorkflow
- ChatAgent: http://localhost:1337/workflows/ChatAgent
Server is running. Press Ctrl+C to stop.
```
## Development server features
### Identical API shape
The local API endpoints match exactly what you'll get in production, making it easy to test your workflows before deploying them. The only difference is that the `/org/{org}/projects/{project}/environments/{env}` prefix is omitted from the URL for simplicity.
```
http://localhost:1337/workflows/{workflow}
```
Every workflow you export is automatically available as an API endpoint.
### Hot reloading
The development server watches your TypeScript files and automatically:
1. Recompiles when files change
2. Regenerates API schemas
3. Restarts the server with your updated code
This enables a fast development cycle without manual restarts.
### API documentation
The development server includes a built-in Swagger UI for exploring and testing your workflows:
```
http://localhost:1337/swagger-ui
```

The Swagger interface provides:
- Complete documentation of all your workflow APIs
- Interactive testing
- Request/response examples
- Schema information
## Running workflows locally
### Using the API
You can use any HTTP client to interact with your local API:
```bash
# Run a workflow synchronously
curl -X POST http://localhost:1337/workflows/ChatAgent \
-H "Content-Type: application/json" \
-d '{"input": {"prompt": "Tell me about GenSX"}}'
# Run asynchronously
curl -X POST http://localhost:1337/workflows/ChatAgent/start \
-H "Content-Type: application/json" \
-d '{"input": {"prompt": "Tell me about GenSX"}}'
```
The inputs and outputs of the APIs match exactly what you'll encounter in production.
### Using the Swagger UI
The built-in Swagger UI provides an easy way to inspect and test your workflows:
1. Navigate to `http://localhost:1337/swagger-ui`
2. Select the workflow you want to test
3. Click the "Try it out" button
4. Enter your input data
5. Execute the request and view the response

## Local storage options
GenSX provides local implementations for cloud storage services, enabling you to develop and test stateful workflows without deploying to the cloud.
### Blob storage
When using `BlobProvider` in local development, data is stored in your local filesystem:
```tsx
import { BlobProvider, useBlob } from "@gensx/storage";
const StoreData = gensx.Component("StoreData", async ({ key, data }) => {
// Locally, this will write to .gensx/blobs directory
const blob = useBlob(`data/${key}.json`);
await blob.putJSON(data);
return { success: true };
});
```
Files are stored in the `.gensx/blobs` directory in your project, making it easy to inspect the stored data.
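For example, you can inspect the files written by the `StoreData` component above directly from your shell (the key below is hypothetical):
```bash
# Blob keys map to files under .gensx/blobs
ls .gensx/blobs/data/
cat .gensx/blobs/data/my-key.json   # hypothetical key
```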
### SQL databases
When using `DatabaseProvider` locally, GenSX uses [libSQL](https://github.com/libsql/libsql) to provide a SQLite-compatible database:
```tsx
import { DatabaseProvider, useDatabase } from "@gensx/storage";
const QueryData = gensx.Component("QueryData", async ({ query }) => {
// Locally, this creates a SQLite database in .gensx/databases
const db = await useDatabase("my-database");
const result = await db.execute(query);
return result.rows;
});
```
Database files are stored in the `.gensx/databases` directory as SQLite files that you can inspect with any SQLite client.
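For example, assuming a database named `my-database` was created by the component above, you could open it with the `sqlite3` CLI (the exact file name on disk may differ):
```bash
# List tables in the local development database
sqlite3 .gensx/databases/my-database.db ".tables"
# Run an ad-hoc query (table name is hypothetical)
sqlite3 .gensx/databases/my-database.db "SELECT * FROM my_table LIMIT 5;"
```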
### Vector search
For vector search operations with `SearchProvider`, your local environment connects to the cloud service:
```tsx
import { SearchProvider, useSearch } from "@gensx/storage";
const SearchDocs = gensx.Component("SearchDocs", async ({ query }) => {
// Uses cloud vector search even in local development
const namespace = await useSearch("documents");
const results = await namespace.query({
text: query,
topK: 5,
});
return results;
});
```
## Next steps
- [Deploying to production](/docs/cloud/serverless-deployments)
- [Working with cloud storage](/docs/cloud/storage)
- [Setting up observability and tracing](/docs/cloud/observability)
# GenSX Cloud
GenSX Cloud provides everything you need to ship production-grade agents and workflows:
- **Serverless runtime**: One command to deploy all of your workflows and agents as REST APIs running on serverless infrastructure optimized for long-running agents and workflows. Support for synchronous and background invocation, streaming, and intermediate status included.
- **Cloud storage**: Build stateful agents and workflows with built-in blob storage, SQL databases, and full-text + vector search indices -- all provisioned at runtime.
- **Tracing and observability**: Real-time tracing of all component inputs and outputs, tool calls, and LLM calls within your agents and workflows. Tools to visualize and debug all historic executions.
- **Collaboration**: Organize agents, workflows, and traces into projects and environments. Search and view traces to debug historical executions.
Unlike traditional serverless offerings, GenSX Cloud is optimized for long-running workflows. Free tier workflows can run up to 5 minutes and Pro tier workflows can run for up to 60 minutes.
All of this is available on a free tier for individuals and with $20/developer pricing for teams.
## Serverless deployments
Serverless deployments allow you to turn your GenSX workflows and agents into APIs with a single command:
- **Generated REST APIs**: `gensx deploy` generates a REST API complete with schema and validation for every workflow in your project.
- **Long-running**: GenSX Cloud is optimized for long running LLM workloads. Workflows can run up to 5 minutes on the free tier and 60 minutes on the Pro tier.
- **Millisecond-level cold starts**: initial cold starts are on the order of 10s of milliseconds -- an order of magnitude faster than other serverless providers.
Serverless deployments are billed per-second, with 50,000 seconds included per month in the free tier for individuals.
Projects are deployed with a single CLI command:
```bash
$ npx gensx deploy ./src/workflows.tsx
```
```bash
Building workflow using Docker
Generating schema
Successfully built project
Using project name from gensx.yaml: support-tools
Deploying project to GenSX Cloud (Project: support-tools)
Successfully deployed project to GenSX Cloud
Dashboard: https://app.gensx.com/gensx/support-tools/default/workflows
Available workflows:
- ChatAgent
- TextToSQLWorkflow
- RAGWorkflow
- AnalyzeDiscordWorkflow
Project: support-tools
```
Each workflow is available via both a synchronous and asynchronous API:
```
// For synchronous and streaming calls:
https://api.gensx.com/org/{orgName}/projects/{projectName}/environments/{environmentName}/workflows/{workflowName}
// For running workflows async in the background
https://api.gensx.com/org/{orgName}/projects/{projectName}/environments/{environmentName}/workflows/{workflowName}/start
```
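For example, a deployed workflow can be invoked with any HTTP client. The sketch below uses placeholder org/project/environment names and assumes your GenSX API key is accepted as a bearer token:
```bash
curl -X POST \
  "https://api.gensx.com/org/my-org/projects/support-tools/environments/default/workflows/ChatAgent" \
  -H "Authorization: Bearer $GENSX_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "Tell me about GenSX"}}'
```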
For more details see the full [serverless deployments reference](/docs/cloud/serverless-deployments).
## Cloud storage
GenSX Cloud includes runtime-provisioned storage to build stateful agents and workflows:
- **Blob storage**: Store and retrieve JSON and binary data for things like conversation history, agent memory, and audio and image generation.
- **SQL databases**: Runtime provisioned databases for scenarios like text-to-SQL.
- **Full-text + vector search**: Store and query vector embeddings for semantic search and retrieval augmented generation (RAG).
State can be long-lived and shared across workflows and agents, or it can be provisioned ephemerally on a per-request basis.
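As a sketch, a workflow could provision an ephemeral database scoped to a single request (the component and naming scheme below are illustrative, not part of the SDK):
```tsx
import { useDatabase } from "@gensx/storage";

// Illustrative component: provisions a throwaway database per request
const ParseCsvUpload = gensx.Component(
  "ParseCsvUpload",
  async ({ requestId, rows }) => {
    // The database name scopes the data to this request; it is created on first use
    const db = await useDatabase(`csv-${requestId}`);
    await db.execute(
      "CREATE TABLE IF NOT EXISTS data (line_number INTEGER, line TEXT)",
    );
    for (const [i, row] of rows.entries()) {
      await db.execute("INSERT INTO data (line_number, line) VALUES (?, ?)", [
        i,
        row,
      ]);
    }
    const count = await db.execute("SELECT COUNT(*) AS n FROM data");
    return count.rows;
  },
);
```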
### Blob storage
GenSX Cloud provides blob storage for persisting unstructured data like JSON, text, and binary files. With the `BlobProvider` component and `useBlob` hook, you can easily store and retrieve data across workflow executions.
Common scenarios enabled by blob storage include:
- Persistent chat thread history.
- Simple memory implementations.
- Storing generated audio, video, and photo files.
```tsx
import { BlobProvider, useBlob } from "@gensx/storage";
// Store and retrieve data with the useBlob hook
const ChatWithMemory = gensx.Component(
"ChatWithMemory",
async ({ userInput, threadId }) => {
// Get access to a blob at a specific path
const blob = useBlob(`chats/${threadId}.json`);
// Load existing data (returns null if it doesn't exist)
const history = (await blob.getJSON()) ?? [];
// Add new data
history.push({ role: "user", content: userInput });
// Save updated data
await blob.putJSON(history);
return "Data stored successfully";
},
);
// Just wrap your workflow with BlobProvider
const Workflow = gensx.Component("MyWorkflow", ({ userInput, threadId }) => (
));
```
Blob storage automatically adapts between local development (using filesystem) and cloud deployment with zero configuration changes.
For more details see the full [storage components reference](/docs/component-reference/storage-components/blob-reference).
### SQL databases
GenSX Cloud provides SQLite-compatible databases powered by [Turso](https://turso.tech), enabling structured data storage with several properties important to agentic workloads:
- **Millisecond provisioning**: Databases are created on-demand in milliseconds, making them perfect for ephemeral workloads like parsing and querying user-uploaded CSVs or creating per-agent structured data stores.
- **Strong consistency**: All operations are linearizable, maintaining an ordered history, with writes fully serialized and subsequent writes awaiting transaction completion.
- **Zero configuration**: Like all GenSX storage components, databases work identically in both development and production.
- **Local development**: Uses libSQL locally to enable a fast, isolated development loop without external dependencies.
```tsx
import { DatabaseProvider, useDatabase } from "@gensx/storage";
// Access a database with the useDatabase hook
const QueryTeamStats = gensx.Component("QueryTeamStats", async ({ team }) => {
// Get access to a database (created on first use)
const db = await useDatabase("baseball");
// Execute SQL queries directly
const result = await db.execute("SELECT * FROM players WHERE team = ?", [
team,
]);
return result.rows; // Returns the query results
});
// Just wrap your workflow with DatabaseProvider
const Workflow = ({ team }) => (
  <DatabaseProvider>
    <QueryTeamStats team={team} />
  </DatabaseProvider>
);
```
For more details see the full [storage components reference](/docs/component-reference/storage-components/database-reference).
### Full-text and vector search
GenSX Cloud provides vector and full-text search capabilities powered by [turbopuffer](https://turbopuffer.com/), enabling semantic search and retrieval augmented generation (RAG) with minimal setup:
- **Vector search**: Store and query high-dimensional vectors for semantic similarity search with millisecond-level latency, perfect for RAG applications and finding content based on meaning rather than exact matches.
- **Full-text search**: Built-in BM25 search engine for string and string array fields, enabling traditional keyword search with low latency.
- **Hybrid search**: Combine vector similarity with full-text BM25 search to get both semantically relevant results and exact keyword matches in a single query.
- **Rich filtering**: Apply metadata filters to narrow down search results based on categories, timestamps, or any custom attributes, enhancing precision and relevance.
```tsx
import { SearchProvider, useNamespace } from "@gensx/storage";
import { OpenAIEmbedding } from "@gensx/openai";
// Perform semantic search with the useNamespace hook
const SearchDocuments = gensx.Component(
"SearchDocuments",
async ({ query }) => {
// Get access to a vector search namespace
const namespace = await useNamespace("documents");
// Generate an embedding for the query
const embedding = await OpenAIEmbedding.run({
model: "text-embedding-3-small",
input: query,
});
// Search for similar documents
const results = await namespace.query({
vector: embedding.data[0].embedding,
topK: 5,
});
return results.map((r) => r.attributes?.title);
},
);
// Just wrap your workflow with SearchProvider
const Workflow = ({ query }) => (
  <SearchProvider>
    <SearchDocuments query={query} />
  </SearchProvider>
);
```
> **Note**: Unlike blob storage and SQL databases, vector search doesn't have a local development implementation. When using `SearchProvider` locally, you'll connect to the cloud service.
For more details see the full [storage components reference](/docs/component-reference/storage-components/search-reference).
## Observability
GenSX Cloud provides comprehensive tracing and observability for all your workflows and agents.

- **Complete execution traces**: Every workflow execution generates a detailed trace that captures the entire flow from start to finish, allowing you to understand exactly what happened during execution.
- **Comprehensive component visibility**: Each component in your workflow automatically records its inputs and outputs, including:
- All LLM calls with full prompts, parameters, and responses
- Every tool invocation with input arguments and return values
- All intermediate steps and state changes in your agents and workflows
- **Real-time monitoring**: Watch your workflows execute step by step in real time, which is especially valuable for debugging long-running agents or complex multi-step workflows.
- **Historical execution data**: Access and search through all past executions to diagnose issues, analyze performance patterns, and understand user interactions over time.
- **Project and environment organization**: Traces are automatically organized by project (a collection of related workflows in a codebase) and environment (such as development, staging, or production), making it easy to find relevant executions.
```tsx
// Traces are automatically captured when workflows are executed
// No additional instrumentation required
const result = await MyWorkflow.run(
{ input: "User query" },
{ printUrl: true }, // log the tracing URL
);
```
The trace viewer provides multiple ways to analyze workflow execution:
- **Timeline view**: See how long each component took and their sequence of execution
- **Component tree**: Navigate the hierarchical structure of your workflow
- **Input/output inspector**: Examine the exact data flowing between components
- **Error highlighting**: Quickly identify where failures occurred

For more details see the full [observability reference](/docs/cloud/observability).
## Local development
GenSX provides a seamless development experience that mirrors the cloud environment, allowing you to build and test your workflows locally before deployment:
### Development server
The `gensx start` command launches a local development server that:
- Compiles your TypeScript workflows on the fly
- Automatically generates schemas for your workflows
- Creates local REST API endpoints identical to the cloud endpoints
- Hot-reloads your code when files change
- Provides the same API shape locally as in production
```bash
# Start the development server with a TypeScript file
npx gensx start ./src/workflows.tsx
```
When you start the development server, you'll see something like this:
```bash
Starting GenSX Dev Server...
Starting development server...
Compilation completed
Generating schema
GenSX Dev Server running at http://localhost:1337
Swagger UI available at http://localhost:1337/swagger-ui
Available workflows:
- MyWorkflow: http://localhost:1337/workflows/MyWorkflow
Server is running. Press Ctrl+C to stop.
```
### Local storage providers
GenSX provides local implementations for most storage providers, enabling development without cloud dependencies:
- **BlobProvider**: Uses local filesystem storage (`.gensx/blobs`) for development
- **DatabaseProvider**: Uses local SQLite databases (`.gensx/databases`) for development
- **SearchProvider**: Connects to the cloud vector search service even in development mode
The local APIs mirror the cloud APIs exactly, so code that works locally will work identically when deployed:
```tsx
// This component works the same locally and in the cloud
const SaveData = gensx.Component<{ key: string; data: any }, null>(
"SaveData",
async ({ key, data }) => {
// Blob storage works the same locally (filesystem) and in cloud
const blob = useBlob(`data/${key}.json`);
await blob.putJSON(data);
return null;
},
);
```
For more details see the full [local development reference](/docs/cloud/local-development).
## Projects & environments
GenSX Cloud organizes your workflows and deployments using a flexible structure of projects and environments:
**Projects** are a collection of related workflows that are deployed together, typically corresponding to a codebase or application. Projects help you organize and manage your AI components as cohesive units.
Projects are defined by the `projectName` field in your `gensx.yaml` configuration file at the root of your codebase:
```yaml
# gensx.yaml
projectName: my-chatbot-app
```
**Environments** are sub-groupings within a project that allow you to deploy multiple instances of the same workflows with different configuration. This supports the common development pattern of separating dev, staging, and production environments.
```bash
# Deploy to the default environment
npx gensx deploy ./src/workflows.tsx
# Deploy to a specific environment
npx gensx deploy ./src/workflows.tsx --env production
```
Each environment can have its own configuration and environment variables to match the rest of your deployed infrastructure.
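For example, you might deploy the same workflows to separate environments with different variables (the `LOG_LEVEL` variable below is illustrative):
```bash
# Staging: pull OPENAI_API_KEY from your local shell, enable verbose logging
npx gensx deploy ./src/workflows.tsx --env staging -ev OPENAI_API_KEY -ev LOG_LEVEL=debug
# Production: same workflows, different configuration
npx gensx deploy ./src/workflows.tsx --env production -ev OPENAI_API_KEY -ev LOG_LEVEL=info
```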
Traces and observability data are also separated by project and environment, making it easier to:
- Distinguish between development testing and production traffic
- Isolate and debug issues specific to a particular environment
- Compare performance or behavior between environments
This organizational structure is designed to be flexible and adaptable, allowing you to customize it to fit with the rest of your development, testing, and deployment lifecycle.
For more details see the full [projects and environments reference](/docs/cloud/projects-environments).
## Pricing
GenSX Cloud offers simple, predictable pricing designed to scale with your needs, including a free tier for individuals:
| Resource | Free Tier | Pro Tier ($20/month/dev) | Overage/Action |
| ------------------ | ---------------------- | ------------------------ | ---------------- |
| Serverless Compute | 50K sec | 500K sec | $0.00003/sec |
| Traces (events) | 100K events | 1M events | $0.20/10K |
| Blob Storage | 500MB | 5GB | $0.25/GB |
| SQLite Storage | 500MB | 1GB | $1.50/GB |
| Vector Storage | 250MB | 1GB | $1.00/GB |
| Execution time | Up to 5 minutes | Up to 60 minutes | Custom |
| Observability | 7 days trace retention | 30 days trace retention | Custom retention |
For more details, visit our [pricing page](/docs/cloud/pricing) or [contact us](mailto:contact@gensx.com) for enterprise needs.
## Get started
Ready to build AI agents and workflows with GenSX Cloud? Follow our step-by-step [quickstart guide](/docs/quickstart) to create and deploy your first project in minutes:
1. Install the GenSX CLI: `npm install -g gensx`
2. Create a new project: `gensx new my-project`
3. Run it locally: `gensx start src/workflows.tsx`
4. Deploy to the cloud: `gensx deploy src/workflows.tsx`
# gensx start
The `gensx start` command starts a local development server that enables you to test and debug your GenSX workflows.
## Usage
```bash
gensx start <file> [options]
```
## Arguments
| Argument | Description |
| -------- | ------------------------------------------------------- |
| `<file>` | The workflow file to serve (e.g., `src/workflows.tsx`). |
## Options
| Option | Description |
| ---------------------- | ---------------------------------- |
| `--port <port>` | Port to run the server on. |
| `-q, --quiet` | Suppress output. |
| `-h, --help` | Display help for the command. |
## Description
This command starts a local development server that:
- Watches your workflow file for changes and automatically reloads
- Provides a web interface to test and debug your workflows
- Simulates the cloud environment locally
- Runs your workflow components in a development mode
The development server includes:
- A web UI for testing your workflows
- Real-time logs and execution visibility
- Access to the GenSX development dashboard
## Examples
```bash
# Start server with a specific workflow file
gensx start src/workflows.tsx
# Start server with minimal output
gensx start src/workflows.tsx --quiet
# Start server on port 3000
gensx start src/workflows.tsx --port 3000
```
## Notes
- The server runs on port 1337 by default
- You can access the development UI at `http://localhost:1337/swagger-ui`
- Environment variables from your local environment are available to the workflow
- For more complex environment variable setups, consider using a `.env` file in your project root
# gensx run
The `gensx run` command executes a workflow that has been deployed to GenSX Cloud. By default it infers your project from the `gensx.yaml` file in the current working directory.
## Usage
```bash
gensx run <workflow> [options]
```
## Arguments
| Argument | Description |
| ------------ | ---------------------------- |
| `<workflow>` | Name of the workflow to run. |
## Options
| Option | Description |
| ---------------------- | ------------------------------------------------------------ |
| `-i, --input <input>` | Input to pass to the workflow (as JSON). |
| `--no-wait` | Do not wait for the workflow to finish (run asynchronously). |
| `-p, --project <name>` | Project name where the workflow is deployed. |
| `-e, --env <name>` | Environment name where the workflow is deployed. |
| `-o, --output <file>` | Output file to write the workflow result to. |
| `-h, --help` | Display help for the command. |
## Description
This command triggers execution of a deployed workflow on GenSX Cloud with the specified input. By default, it waits for the workflow to complete and displays the result.
When running a workflow, you can:
- Provide input data as JSON
- Choose whether to wait for completion or run asynchronously
- Save the output to a file
## Examples
```bash
# Run a workflow with no input
gensx run MyWorkflow
# Run a workflow with JSON input
gensx run MyWorkflow --input '{"text": "Hello, world!"}'
# Run a workflow in a specific project and environment
gensx run MyWorkflow --project my-project --env prod
# Run a workflow asynchronously (don't wait for completion)
gensx run MyWorkflow --no-wait
# Run a workflow and save output to a file
gensx run MyWorkflow --output result.json
```
## Notes
- You must be logged in to GenSX Cloud to run workflows (`gensx login`)
- The workflow must have been previously deployed using `gensx deploy`
- When using `--input`, the input must be valid JSON
- When using `--no-wait`, the command returns immediately with a workflow ID that can be used to check status later
- Error handling: if the workflow fails, the command will return with a non-zero exit code and display the error
# gensx new
The `gensx new` command creates a new GenSX project with a predefined structure and dependencies.
## Usage
```bash
gensx new <project-directory> [options]
```
## Arguments
| Argument | Description |
| --------------------- | ---------------------------------------------------------------------------- |
| `<project-directory>` | Directory to create the project in. If it doesn't exist, it will be created. |
## Options
| Option | Description |
| -------------------------- | ----------------------------------------------------------------------------------------------- |
| `-t, --template <type>` | Template to use. Currently supports `ts` (TypeScript). |
| `-f, --force` | Overwrite existing files in the target directory. |
| `--skip-ide-rules` | Skip IDE rules selection. |
| `--ide-rules <rules>` | Comma-separated list of IDE rules to install. Options: `cline`, `windsurf`, `claude`, `cursor`. |
| `-d, --description <description>` | Optional project description. |
| `-h, --help` | Display help for the command. |
## Description
This command scaffolds a new GenSX project with the necessary files and folder structure. It sets up:
- Project configuration files (`package.json`, `tsconfig.json`)
- Basic project structure with example workflows
- Development dependencies
- IDE integrations based on selected rules
## Examples
```bash
# Create a basic project
gensx new my-gensx-app
# Create a project with a specific template and description
gensx new my-gensx-app --template ts --description "My AI workflow app"
# Create a project with specific IDE rules
gensx new my-gensx-app --ide-rules cursor,claude
# Force create even if directory has existing files
gensx new my-gensx-app --force
```
## Notes
- If no template is specified, `ts` (TypeScript) is used by default.
- The command will install all required dependencies, so make sure you have npm installed.
- After creation, you can navigate to the project directory and start the development server with `gensx start`.
# gensx login
The `gensx login` command authenticates you with GenSX Cloud, allowing you to deploy and run workflows remotely.
## Usage
```bash
gensx login
```
## Description
When you run this command, it will:
1. Open your default web browser to the GenSX authentication page
2. Prompt you to log in with your GenSX account or create a new one
3. Store your authentication credentials locally for future CLI commands
After successful login, you can use other commands that require authentication, such as `deploy` and `run`.
## Examples
```bash
# Log in to GenSX Cloud
gensx login
```
## Notes
- Your authentication token is stored in your user directory (typically `~/.gensx/config.json`)
- The token is valid until you log out or revoke it from the GenSX dashboard
- If you're behind a corporate firewall or using strict network policies, ensure that outbound connections to `api.gensx.com` are allowed
# GenSX CLI reference
The GenSX command-line interface (CLI) provides a set of commands to help you build, deploy, and manage your GenSX applications.
## Installation
The GenSX CLI is included when you install the main GenSX package:
```bash
npm install -g gensx
```
## Available commands
### Auth
| Command | Description |
| -------------------------- | -------------------------------- |
| [`gensx login`](./login) | Log in to GenSX Cloud |
### Development
| Command | Description |
| -------------------------- | -------------------------------- |
| [`gensx new`](./new) | Create a new GenSX project |
| [`gensx start`](./start) | Start a local development server |
| [`gensx build`](./build) | Build a workflow for deployment |
### Deployment & Execution
| Command | Description |
| -------------------------- | -------------------------------- |
| [`gensx deploy`](./deploy) | Deploy a workflow to GenSX Cloud |
| [`gensx run`](./run) | Run a workflow on GenSX Cloud |
### Environment Management
| Command | Description |
| -------------------------------- | -------------------------------------------- |
| [`gensx env`](./env/show) | Show the current environment details |
| [`gensx env create`](./env/create) | Create a new environment |
| [`gensx env ls`](./env/ls) | List all environments for a project |
| [`gensx env select`](./env/select) | Select an environment as active |
| [`gensx env unselect`](./env/unselect) | Unselect the current environment |
## Common Workflows
### Starting a New Project
```bash
# Log in to GenSX Cloud
gensx login
# Create a new project
gensx new my-project
cd my-project
# Start local development
gensx start src/workflows.tsx
```
### Managing Environments
```bash
# Create and switch to a development environment
gensx env create dev
gensx env select dev
# View current environment
gensx env
```
### Deploying and Running Workflows
```bash
# Build and deploy your workflow
gensx deploy src/workflows.tsx
# Run a workflow
gensx run my-workflow --input '{"message": "Hello, world!"}'
```
For detailed information about each command, please refer to the corresponding documentation pages.
# gensx deploy
The `gensx deploy` command uploads and deploys a workflow to GenSX Cloud, making it available for remote execution.
## Usage
```bash
gensx deploy <file> [options]
```
## Arguments
| Argument | Description |
| -------- | ------------------------------------------------------------------------------------------ |
| `<file>` | File to deploy. This should be a TypeScript file that exports a GenSX workflow. |
## Options
| Option | Description |
| ----------------------- | ---------------------------------------------------------------------------- |
| `-ev, --env-var <KEY=value>` | Environment variable to include with deployment. Can be used multiple times. |
| `-p, --project <name>` | Project name to deploy to. |
| `-e, --env <name>` | Environment name to deploy to. |
| `-h, --help` | Display help for the command. |
## Description
This command:
1. Builds your workflow
2. Uploads it to GenSX Cloud
3. Creates or updates the deployment
4. Sets up any environment variables specified
After successful deployment, your workflow will be available for remote execution via the [GenSX Cloud console](https://app.gensx.com) or through the `gensx run` command.
## Examples
```bash
# Deploy a workflow
gensx deploy src/workflows.tsx
# Deploy to a specific project and environment
gensx deploy src/workflows.tsx --project my-production-project --env dev
# Deploy with environment variables
gensx deploy src/workflows.tsx -ev API_KEY=abc123 -ev DEBUG=true
# Deploy with an environment variable taken from your local environment
gensx deploy src/workflows.tsx -ev OPENAI_API_KEY
```
## Notes
- You must be logged in to GenSX Cloud to deploy (`gensx login`)
- `gensx deploy` requires Docker to be running
- If your workflow requires API keys or other secrets, provide them using the `-ev` or `--env-var` option
- For environment variables without a specified value, the CLI will use the value from your local environment
- After deployment, you can manage your workflows from the GenSX Cloud console
- The deployment process automatically handles bundling dependencies
# gensx build
The `gensx build` command compiles and bundles a GenSX workflow for deployment to GenSX Cloud.
## Usage
```bash
gensx build <file> [options]
```
## Arguments
| Argument | Description |
| -------- | -------------------------------------------------------------------------------------------------------------- |
| `<file>` | Workflow file to build (e.g., `src/workflows.tsx`). This should export an object with one or more GenSX workflows. |
## Options
| Option | Description |
| ----------------------- | ----------------------------------------------- |
| `-o, --out-dir <dir>` | Output directory for the built files. |
| `-t, --tsconfig <file>` | Path to a custom TypeScript configuration file. |
| `-h, --help` | Display help for the command. |
## Description
This command builds your GenSX workflow into an optimized bundle that can be deployed to GenSX Cloud. It:
- Transpiles TypeScript to JavaScript
- Bundles all dependencies
- Optimizes the code for production
- Prepares the workflow for deployment
After building, the command outputs the path to the bundled file, which can be used with the [`gensx deploy`](/docs/cli-reference/deploy) command.
## Examples
```bash
# Build a workflow with default options
gensx build src/workflows.tsx
# Build a workflow with a custom output directory
gensx build src/workflows.tsx --out-dir ./dist
# Build a workflow with a custom TypeScript configuration
gensx build src/workflows.tsx --tsconfig ./custom-tsconfig.json
```
## Notes
- The build process requires that your workflow file exports an object with one or more GenSX workflows.
- `gensx build` requires Docker to be running
- If no output directory is specified, the build files will be placed in a `.gensx` directory
- The build process does not include environment variables - these should be provided during deployment
# Search reference
API reference for GenSX Cloud search components. Search is powered by turbopuffer, and their documentation for [query](https://turbopuffer.com/docs/query) and [upsert operations](https://turbopuffer.com/docs/write) is a useful reference to augment this document.
## Installation
```bash
npm install @gensx/storage
```
## SearchProvider
Provides vector search capabilities to its child components.
### Import
```tsx
import { SearchProvider } from "@gensx/storage";
```
### Example
```tsx
import { SearchProvider } from "@gensx/storage";
const Workflow = gensx.Component("SearchWorkflow", ({ input }) => (
));
```
## useSearch
Hook that provides access to vector search for a specific namespace.
### Import
```tsx
import { useSearch } from "@gensx/storage";
```
### Signature
```tsx
function useSearch(name: string): Namespace;
```
### Parameters
| Parameter | Type | Description |
| --------- | -------- | ---------------------------- |
| `name` | `string` | The namespace name to access |
### Returns
Returns a namespace object with methods to interact with vector search.
### Example
```tsx
const namespace = await useSearch("documents");
const results = await namespace.query({
vector: queryEmbedding,
includeAttributes: true,
});
```
## Namespace methods
The namespace object returned by `useSearch` provides these methods:
### write
Inserts, updates, or deletes vectors in the namespace.
```tsx
async write(options: WriteParams): Promise<number>
```
#### Parameters
| Parameter | Type | Default | Description |
| ----------------- | ---------------- | ----------- | ------------------------------------------- |
| `upsertColumns` | `UpsertColumns` | `undefined` | Column-based format for upserting documents |
| `upsertRows` | `UpsertRows` | `undefined` | Row-based format for upserting documents |
| `patchColumns` | `PatchColumns` | `undefined` | Column-based format for patching documents |
| `patchRows` | `PatchRows` | `undefined` | Row-based format for patching documents |
| `deletes` | `Id[]` | `undefined` | Array of document IDs to delete |
| `deleteByFilter` | `Filters` | `undefined` | Filter to match documents for deletion |
| `distanceMetric` | `DistanceMetric` | `undefined` | Distance metric for similarity calculations |
| `schema` | `Schema` | `undefined` | Optional schema definition for attributes |
#### Example
```tsx
// Upsert documents in column-based format
await namespace.write({
upsertColumns: {
id: ["doc-1", "doc-2"],
vector: [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
text: ["Document 1", "Document 2"],
category: ["article", "blog"]
},
distanceMetric: "cosine_distance",
schema: {
text: { type: "string" },
category: { type: "string" }
}
});
// Upsert documents in row-based format
await namespace.write({
upsertRows: [
{
id: "doc-1",
vector: [0.1, 0.2, 0.3],
text: "Document 1",
category: "article"
},
{
id: "doc-2",
vector: [0.4, 0.5, 0.6],
text: "Document 2",
category: "blog"
}
],
distanceMetric: "cosine_distance"
});
// Delete documents by ID
await namespace.write({
deletes: ["doc-1", "doc-2"]
});
// Delete documents by filter
await namespace.write({
deleteByFilter: [
"And",
[
["category", "Eq", "article"],
["createdAt", "Lt", "2023-01-01"]
]
]
});
// Patch documents (update specific fields)
await namespace.write({
patchRows: [
{
id: "doc-1",
category: "updated-category"
}
]
});
```
#### Return value
Returns the number of rows affected by the operation.
### query
Searches for similar vectors based on a query vector.
```tsx
async query(options: QueryOptions): Promise
```
#### Parameters
| Parameter | Type | Default | Description |
| ------------------- | ---------------------------------------- | ----------- | ---------------------------------------- |
| `vector` | `number[]` | Required | Query vector for similarity search |
| `topK` | `number` | `10` | Number of results to return |
| `includeVectors` | `boolean` | `false` | Whether to include vectors in results |
| `includeAttributes` | `boolean \| string[]` | `true` | Include all attributes or specified ones |
| `filters` | `Filters` | `undefined` | Metadata filters |
| `rankBy` | `RankBy` | `undefined` | Attribute-based ranking or text ranking |
| `consistency` | `string` | `undefined` | Consistency level for reads |
#### Example
```tsx
const results = await namespace.query({
vector: [0.1, 0.2, 0.3, ...], // Query vector
topK: 10, // Number of results to return
includeVectors: false, // Whether to include vectors in results
includeAttributes: true, // Include all attributes or specific ones
filters: [ // Optional metadata filters
"And",
[
["category", "Eq", "article"],
["createdAt", "Gte", "2023-01-01"]
]
],
rankBy: ["attributes.importance", "asc"], // Optional attribute-based ranking
});
```
#### Return value
Returns an array of matched documents with similarity scores:
```tsx
[
{
id: "doc-1", // Document ID
score: 0.87, // Similarity score (0-1)
vector?: number[], // Original vector (if includeVectors=true)
attributes?: { // Metadata (if includeAttributes=true)
text: "Document content",
category: "article",
createdAt: "2023-07-15"
}
},
// ...more results
]
```
### getSchema
Retrieves the current schema for the namespace.
```tsx
async getSchema(): Promise<Schema>
```
#### Example
```tsx
const schema = await namespace.getSchema();
console.log(schema);
// {
// text: "string",
// category: "string",
// createdAt: "string"
// }
```
### updateSchema
Updates the schema for the namespace.
```tsx
async updateSchema(options: { schema: Schema }): Promise<Schema>
```
#### Parameters
| Parameter | Type | Description |
| --------- | -------- | --------------------- |
| `schema` | `Schema` | New schema definition |
#### Example
```tsx
const updatedSchema = await namespace.updateSchema({
schema: {
text: "string",
category: "string",
createdAt: "string",
newField: "number", // Add new field
tags: "string[]", // Add array field
},
});
```
#### Return value
Returns the updated schema.
### getMetadata
Retrieves metadata about the namespace.
```tsx
async getMetadata(): Promise
```
#### Example
```tsx
const metadata = await namespace.getMetadata();
console.log(metadata);
// {
// vectorCount: 1250,
// dimensions: 1536,
// distanceMetric: "cosine_distance",
// created: "2023-07-15T12:34:56Z"
// }
```
## Namespace management
Higher-level operations for managing namespaces (these are accessed directly from the search object, not via `useSearch`):
```tsx
import { SearchClient } from "@gensx/storage";
const search = new SearchClient();
// List all namespaces
const namespaces = await search.listNamespaces({
prefix: "docs-", // Optional prefix filter
});
// Check if namespace exists
const exists = await search.namespaceExists("my-namespace");
// Create namespace if it doesn't exist
const { created } = await search.ensureNamespace("my-namespace");
// Delete a namespace
const { deleted } = await search.deleteNamespace("old-namespace");
// Get a namespace directly for vector operations
const namespace = search.getNamespace("products");
// Write vectors using the namespace
await namespace.write({
upsertRows: [
{
id: "product-1",
vector: [0.1, 0.3, 0.5, ...], // embedding vector
name: "Ergonomic Chair",
category: "furniture",
price: 299.99
},
{
id: "product-2",
vector: [0.2, 0.4, 0.6, ...],
name: "Standing Desk",
category: "furniture",
price: 499.99
}
],
distanceMetric: "cosine_distance",
schema: {
name: { type: "string" },
category: { type: "string" },
price: { type: "number" }
}
});
// Query vectors directly with the namespace
const searchResults = await namespace.query({
vector: [0.15, 0.35, 0.55, ...], // query vector
topK: 5,
includeAttributes: true,
filters: [
"And",
[
["category", "Eq", "furniture"],
["price", "Lt", 400]
]
]
});
```
The `SearchClient` is a standard TypeScript library and can be used outside of GenSX workflows in your normal application code as well.
## useSearchStorage
Hook that provides direct access to the search storage instance, which includes higher-level namespace management functions.
### Import
```tsx
import { useSearchStorage } from "@gensx/storage";
```
### Signature
```tsx
function useSearchStorage(): SearchStorage;
```
### Example
```tsx
const searchStorage = useSearchStorage();
```
The search storage object provides these management methods:
### getNamespace
Get a namespace object for direct interaction.
```tsx
// Get a namespace directly (without calling useSearch)
const searchStorage = useSearchStorage();
const namespace = searchStorage.getNamespace("documents");
// Usage example
await namespace.write({
upsertRows: [...],
distanceMetric: "cosine_distance"
});
```
### listNamespaces
List all namespaces in your project.
```tsx
const searchStorage = useSearchStorage();
const namespaces = await searchStorage.listNamespaces({
prefix: "docs-" // Optional prefix filter
});
console.log(namespaces); // ["docs-articles", "docs-products"]
```
### ensureNamespace
Create a namespace if it doesn't exist.
```tsx
const searchStorage = useSearchStorage();
const { created } = await searchStorage.ensureNamespace("documents");
if (created) {
console.log("Namespace was created");
} else {
console.log("Namespace already existed");
}
```
### deleteNamespace
Delete a namespace and all its data.
```tsx
const searchStorage = useSearchStorage();
const { deleted } = await searchStorage.deleteNamespace("old-namespace");
if (deleted) {
console.log("Namespace was deleted");
} else {
console.log("Namespace could not be deleted");
}
```
### namespaceExists
Check if a namespace exists.
```tsx
const searchStorage = useSearchStorage();
const exists = await searchStorage.namespaceExists("documents");
if (exists) {
console.log("Namespace exists");
} else {
console.log("Namespace does not exist");
}
```
### hasEnsuredNamespace
Check if a namespace has been ensured in the current session.
```tsx
const searchStorage = useSearchStorage();
const isEnsured = searchStorage.hasEnsuredNamespace("documents");
if (isEnsured) {
console.log("Namespace has been ensured in this session");
} else {
console.log("Namespace has not been ensured yet");
}
```
## SearchClient
The `SearchClient` class provides a way to interact with GenSX vector search capabilities outside of the GenSX workflow context, such as from regular Node.js applications or server endpoints.
### Import
```tsx
import { SearchClient } from "@gensx/storage";
```
### Constructor
```tsx
constructor()
```
#### Example
```tsx
const searchClient = new SearchClient();
```
### Methods
#### getNamespace
Get a namespace instance and ensure it exists first.
```tsx
async getNamespace(name: string): Promise<Namespace>
```
##### Example
```tsx
const namespace = await searchClient.getNamespace("products");
// Then use the namespace to upsert or query vectors
await namespace.write({
upsertRows: [
{
id: "product-1",
vector: [0.1, 0.2, 0.3, ...],
name: "Product 1",
category: "electronics"
}
],
distanceMetric: "cosine_distance"
});
```
#### ensureNamespace
Create a namespace if it doesn't exist.
```tsx
async ensureNamespace(name: string): Promise<{ created: boolean }>
```
##### Example
```tsx
const { created } = await searchClient.ensureNamespace("products");
if (created) {
console.log("Namespace was created");
}
```
#### listNamespaces
List all namespaces.
```tsx
async listNamespaces(options?: { prefix?: string }): Promise<string[]>
```
##### Example
```tsx
const namespaces = await searchClient.listNamespaces({
prefix: "customer-" // Optional prefix filter
});
console.log("Available namespaces:", namespaces);
```
#### deleteNamespace
Delete a namespace.
```tsx
async deleteNamespace(name: string): Promise<{ deleted: boolean }>
```
##### Example
```tsx
const { deleted } = await searchClient.deleteNamespace("temp-namespace");
if (deleted) {
console.log("Namespace was removed");
}
```
#### namespaceExists
Check if a namespace exists.
```tsx
async namespaceExists(name: string): Promise<boolean>
```
##### Example
```tsx
if (await searchClient.namespaceExists("products")) {
console.log("Products namespace exists");
} else {
console.log("Products namespace doesn't exist yet");
}
```
### Usage in applications
The SearchClient is particularly useful when you need to access vector search functionality from:
- Regular Express.js or Next.js API routes
- Background jobs or workers
- Custom scripts or tools
- Any Node.js application outside the GenSX workflow context
```tsx
// Example: Using SearchClient in an Express handler
import express from 'express';
import { SearchClient } from '@gensx/storage';
import { OpenAI } from 'openai';
const app = express();
const searchClient = new SearchClient();
const openai = new OpenAI();
app.post('/api/search', async (req, res) => {
try {
const { query } = req.body;
// Generate embedding for the query
const embedding = await openai.embeddings.create({
model: "text-embedding-3-small",
input: query
});
// Search for similar documents
const namespace = await searchClient.getNamespace('documents');
const results = await namespace.query({
vector: embedding.data[0].embedding,
topK: 5,
includeAttributes: true
});
res.json(results);
} catch (error) {
console.error('Search error:', error);
res.status(500).json({ error: 'Search error' });
}
});
app.listen(3000, () => {
console.log('Server running on port 3000');
});
```
## Filter operators
Filters use a structured array format with the following pattern:
```tsx
// Basic filter structure
[
"Operation", // And, Or, Not
[ // Array of conditions
["field", "Operator", value]
]
]
```
Available operators:
| Operator | Description | Example |
| ------------- | ---------------------- | -------------------------------------------- |
| `Eq` | Equals | `["field", "Eq", "value"]` |
| `Ne` | Not equals | `["field", "Ne", "value"]` |
| `Gt` | Greater than | `["field", "Gt", 10]` |
| `Gte` | Greater than or equal | `["field", "Gte", 10]` |
| `Lt` | Less than | `["field", "Lt", 10]` |
| `Lte` | Less than or equal | `["field", "Lte", 10]` |
| `In` | In array | `["field", "In", ["a", "b"]]` |
| `Nin` | Not in array | `["field", "Nin", ["a", "b"]]` |
| `Contains` | String contains | `["field", "Contains", "text"]` |
| `ContainsAny` | Contains any of values | `["tags", "ContainsAny", ["news", "tech"]]` |
| `ContainsAll` | Contains all values | `["tags", "ContainsAll", ["imp", "urgent"]]` |
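Conditions can also be nested to express more complex logic. A sketch with hypothetical field names (assuming `queryEmbedding` was computed earlier):
```tsx
const results = await namespace.query({
  vector: queryEmbedding,
  topK: 10,
  filters: [
    "And",
    [
      ["status", "Ne", "draft"],
      [
        "Or",
        [
          ["category", "Eq", "article"],
          ["tags", "ContainsAny", ["news", "tech"]],
        ],
      ],
    ],
  ],
});
```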
## RankBy options
The `rankBy` parameter can be used in two primary ways:
### Attribute-based ranking
Sorts by a field in ascending or descending order:
```tsx
// Sort by the createdAt attribute in ascending order
rankBy: ["createdAt", "asc"]
// Sort by price in descending order (highest first)
rankBy: ["price", "desc"]
```
### Text-based ranking
For full-text search relevance scoring:
```tsx
// Basic BM25 text ranking
rankBy: ["text", "BM25", "search query"]
// BM25 with multiple search terms
rankBy: ["text", "BM25", ["term1", "term2"]]
// Combined text ranking strategies
rankBy: ["Sum", [
["text", "BM25", "search query"],
["text", "BM25", "another term"]
]]
// Weighted text ranking (multiply BM25 score by 0.5)
rankBy: ["Product", [["text", "BM25", "search query"], 0.5]]
// Alternative syntax for weighted ranking
rankBy: ["Product", [0.5, ["text", "BM25", "search query"]]]
```
Use these options to fine-tune the relevance and ordering of your search results.
# Database reference
API reference for GenSX Cloud SQL database components.
## Installation
```bash
npm install @gensx/storage
```
## DatabaseProvider
Provides SQL database capabilities to its child components.
### Import
```tsx
import { DatabaseProvider } from "@gensx/storage";
```
### Props
| Prop | Type | Default | Description |
| --------------- | --------------------- | ----------- | ------------------------------------------------ |
| `kind` | `"filesystem" \| "cloud"` | Auto-detected | The storage backend to use. Defaults filesystem when running locally and cloud when deployed to the serverless runtime. |
| `rootDir` | `string` | `".gensx/databases"` | Root directory for storing database files (filesystem only) |
### Example
```tsx
import { DatabaseProvider } from "@gensx/storage";
// Cloud storage (production)
const Workflow = gensx.Component("DatabaseWorkflow", ({ input }) => (
  <DatabaseProvider kind="cloud">
    {/* Components that call useDatabase go here */}
  </DatabaseProvider>
));
// Local filesystem storage (development)
const DevWorkflow = gensx.Component("DevDatabaseWorkflow", ({ input }) => (
  <DatabaseProvider kind="filesystem" rootDir=".gensx/databases">
    {/* Components that call useDatabase go here */}
  </DatabaseProvider>
));
```
## useDatabase
Hook that provides access to a specific SQL database.
### Import
```tsx
import { useDatabase } from "@gensx/storage";
```
### Signature
```tsx
function useDatabase(name: string): Database;
```
### Parameters
| Parameter | Type | Description |
| --------- | -------- | --------------------------- |
| `name` | `string` | The database name to access |
### Returns
Returns a database object with methods to interact with SQL database.
### Example
```tsx
const db = await useDatabase("users");
const result = await db.execute("SELECT * FROM users WHERE id = ?", [
"user-123",
]);
```
## Database methods
The database object returned by `useDatabase` provides these methods:
### execute
Executes a single SQL statement with optional parameters.
```tsx
async execute(sql: string, params?: InArgs): Promise
```
#### Parameters
| Parameter | Type | Description |
| --------- | -------- | ------------------------------------------ |
| `sql` | `string` | SQL statement to execute |
| `params` | `InArgs` | Optional parameters for prepared statement |
> `InArgs` can be provided as an array of values or as a record with named parameters. Values can be primitives (strings, numbers, booleans, null), Uint8Array, or Date objects.
#### Example
```tsx
// Query with parameters
const result = await db.execute("SELECT * FROM users WHERE email = ?", [
"user@example.com",
]);
// Insert data
await db.execute("INSERT INTO users (id, name, email) VALUES (?, ?, ?)", [
"user-123",
"John Doe",
"john@example.com",
]);
// Update data
await db.execute("UPDATE users SET last_login = ? WHERE id = ?", [
new Date().toISOString(),
"user-123",
]);
```
#### Return value
Returns a result object with the following properties:
```tsx
{
columns: string[]; // Column names from result set
rows: unknown[][]; // Array of result rows as arrays
rowsAffected: number; // Number of rows affected by statement
lastInsertId?: number; // ID of last inserted row (for INSERT statements)
}
```
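Since `rows` come back as positional arrays, a small helper (not part of the library; shown here as a sketch) can zip each row with `columns` to produce keyed objects:
```tsx
// Pair each row value with its column name
function toObjects(result: { columns: string[]; rows: unknown[][] }) {
  return result.rows.map((row) =>
    Object.fromEntries(result.columns.map((col, i) => [col, row[i]])),
  );
}

const result = await db.execute("SELECT id, name FROM users");
const users = toObjects(result);
// e.g. [{ id: "user-123", name: "John Doe" }, ...]
```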
### batch
Executes multiple SQL statements in a single transaction.
```tsx
async batch(statements: DatabaseStatement[]): Promise
```
#### Parameters
| Parameter | Type | Description |
| ------------ | --------------------- | ------------------------------------------------ |
| `statements` | `DatabaseStatement[]` | Array of SQL statements with optional parameters |
#### DatabaseStatement format
```tsx
{
sql: string; // SQL statement
params?: InArgs; // Optional parameters
}
```
#### Example
```tsx
const results = await db.batch([
{
sql: "INSERT INTO users (id, name) VALUES (?, ?)",
params: ["user-123", "John Doe"],
},
{
sql: "INSERT INTO user_preferences (user_id, theme) VALUES (?, ?)",
params: ["user-123", "dark"],
},
]);
```
#### Return value
Returns a result object containing an array of individual results:
```tsx
{
results: [
{
columns: [],
rows: [],
rowsAffected: 1,
lastInsertId: 42
},
{
columns: [],
rows: [],
rowsAffected: 1,
lastInsertId: 43
}
]
}
```
### executeMultiple
Executes multiple SQL statements as a script (without transaction semantics).
```tsx
async executeMultiple(sql: string): Promise
```
#### Parameters
| Parameter | Type | Description |
| --------- | -------- | ----------------------------------------------- |
| `sql` | `string` | Multiple SQL statements separated by semicolons |
#### Example
```tsx
const results = await db.executeMultiple(`
CREATE TABLE IF NOT EXISTS users (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX IF NOT EXISTS idx_users_name ON users(name);
`);
```
#### Return value
Returns a result object containing an array of individual results, one for each statement executed.
### migrate
Executes SQL migration statements with foreign keys disabled.
```tsx
async migrate(sql: string): Promise
```
#### Parameters
| Parameter | Type | Description |
| --------- | -------- | ------------------------ |
| `sql` | `string` | SQL migration statements |
#### Example
```tsx
const results = await db.migrate(`
-- Migration v1: Initial schema
CREATE TABLE IF NOT EXISTS users (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
email TEXT UNIQUE,
created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
-- Migration v2: Add last_login field
ALTER TABLE users ADD COLUMN last_login TEXT;
`);
```
#### Return value
Returns a result object containing an array of individual results, one for each statement executed.
### getInfo
Retrieves metadata about the database.
```tsx
async getInfo(): Promise
```
#### Example
```tsx
const info = await db.getInfo();
console.log(info);
// {
// name: "users",
// size: 12800,
// lastModified: Date("2023-07-15T12:34:56Z"),
// tables: [
// {
// name: "users",
// columns: [
// {
// name: "id",
// type: "TEXT",
// notNull: true,
// primaryKey: true
// },
// {
// name: "name",
// type: "TEXT",
// notNull: true,
// primaryKey: false
// }
// ]
// }
// ]
// }
```
## Database management
Higher-level operations for managing databases are available through the `useDatabaseStorage` hook:
### useDatabaseStorage
Hook that provides access to the database storage instance, which includes higher-level database management functions.
```tsx
import { useDatabaseStorage } from "@gensx/storage";
// Get access to database management functions
const dbStorage = useDatabaseStorage();
```
The database storage object provides these management methods:
### listDatabases
Lists all databases in your project.
```tsx
import { useDatabaseStorage } from "@gensx/storage";
const dbStorage = useDatabaseStorage();
const databases = await dbStorage.listDatabases();
console.log(databases); // ["users", "products", "analytics"]
```
### ensureDatabase
Creates a database if it doesn't exist.
```tsx
const dbStorage = useDatabaseStorage();
const { created } = await dbStorage.ensureDatabase("new-database");
if (created) {
console.log("Database was created");
} else {
console.log("Database already existed");
}
```
### deleteDatabase
Deletes a database and all its data.
```tsx
const dbStorage = useDatabaseStorage();
const { deleted } = await dbStorage.deleteDatabase("old-database");
if (deleted) {
console.log("Database was deleted");
} else {
console.log("Database could not be deleted");
}
```
### hasEnsuredDatabase
Checks if a database has been ensured in the current session.
```tsx
const dbStorage = useDatabaseStorage();
const isEnsured = dbStorage.hasEnsuredDatabase("my-database");
if (isEnsured) {
console.log("Database has been ensured in this session");
} else {
console.log("Database has not been ensured yet");
}
```
### getDatabase
Get a database instance directly (without calling useDatabase).
```tsx
const dbStorage = useDatabaseStorage();
// Get a database directly
// Note: This doesn't ensure the database exists, unlike useDatabase
const db = dbStorage.getDatabase("users");
// You may want to ensure it exists first
await dbStorage.ensureDatabase("users");
const ensuredDb = dbStorage.getDatabase("users");
```
## DatabaseClient
The `DatabaseClient` class provides a way to interact with GenSX databases outside of the GenSX workflow context, such as from regular Node.js applications or server endpoints.
### Import
```tsx
import { DatabaseClient } from "@gensx/storage";
```
### Constructor
```tsx
constructor(props?: DatabaseProviderProps)
```
#### Parameters
| Parameter | Type | Default | Description |
| --------- | --------------------- | ----------- | ------------------------------------------------ |
| `props` | `DatabaseProviderProps` | `{}` | Optional configuration properties |
#### Example
```tsx
// Default client (uses filesystem locally, cloud in production)
const dbClient = new DatabaseClient();
// Explicitly use filesystem storage
const localClient = new DatabaseClient({
kind: "filesystem",
rootDir: "./my-data"
});
// Explicitly use cloud storage
const cloudClient = new DatabaseClient({ kind: "cloud" });
```
### Methods
#### getDatabase
Get a database instance and ensure it exists first.
```tsx
async getDatabase(name: string): Promise<Database>
```
##### Example
```tsx
const db = await dbClient.getDatabase("users");
const results = await db.execute("SELECT * FROM users LIMIT 10");
```
#### ensureDatabase
Create a database if it doesn't exist.
```tsx
async ensureDatabase(name: string): Promise<{ created: boolean }>
```
##### Example
```tsx
const { created } = await dbClient.ensureDatabase("analytics");
if (created) {
console.log("Database was created");
}
```
#### listDatabases
List all databases.
```tsx
async listDatabases(): Promise<string[]>
```
##### Example
```tsx
const databases = await dbClient.listDatabases();
console.log("Available databases:", databases);
```
#### deleteDatabase
Delete a database.
```tsx
async deleteDatabase(name: string): Promise<{ deleted: boolean }>
```
##### Example
```tsx
const { deleted } = await dbClient.deleteDatabase("temp-db");
if (deleted) {
console.log("Database was removed");
}
```
#### databaseExists
Check if a database exists.
```tsx
async databaseExists(name: string): Promise<boolean>
```
##### Example
```tsx
if (await dbClient.databaseExists("users")) {
console.log("Users database exists");
} else {
console.log("Users database doesn't exist yet");
}
```
### Usage in applications
The DatabaseClient is particularly useful when you need to access GenSX databases from:
- Regular Express.js or Next.js API routes
- Background jobs or workers
- Custom scripts or tools
- Any Node.js application outside the GenSX workflow context
```tsx
// Example: Using DatabaseClient in an Express handler
import express from 'express';
import { DatabaseClient } from '@gensx/storage';
const app = express();
const dbClient = new DatabaseClient();
app.get('/api/users', async (req, res) => {
try {
const db = await dbClient.getDatabase('users');
const result = await db.execute('SELECT * FROM users');
res.json(result.rows);
} catch (error) {
console.error('Database error:', error);
res.status(500).json({ error: 'Database error' });
}
});
app.listen(3000, () => {
console.log('Server running on port 3000');
});
```
# Blob storage reference
API reference for GenSX Cloud blob storage components.
## Installation
```bash
npm install @gensx/storage
```
## BlobProvider
Provides blob storage capabilities to its child components.
### Import
```tsx
import { BlobProvider } from "@gensx/storage";
```
### Props
| Prop | Type | Default | Description |
| --------------- | ------------------------- | -------------- | ------------------------------------- |
| `kind` | `"filesystem" \| "cloud"` | Auto-detected | Storage backend to use. Defaults to filesystem when running locally and cloud when deployed to the serverless runtime. |
| `rootDir` | `string` | `.gensx/blobs` | Root directory for filesystem storage |
| `defaultPrefix` | `string` | `undefined` | Optional prefix for all blob keys |
### Example
```tsx
import { BlobProvider } from "@gensx/storage";
const Workflow = gensx.Component("Workflow", ({ input }) => (
));
```
## useBlob
Hook that provides access to blob storage for a specific key.
### Import
```tsx
import { useBlob } from "@gensx/storage";
```
### Signature
```tsx
function useBlob<T>(key: string): Blob<T>;
```
### Parameters
| Parameter | Type | Description |
| --------- | ------------ | -------------------------------- |
| `key` | `string` | The unique key for the blob |
| `T` | Generic type | Type of the JSON data (optional) |
### Returns
Returns a blob object with methods to interact with blob storage.
### Example
```tsx
const blob = useBlob("users/123.json");
const profile = await blob.getJSON();
```
## Blob methods
The blob object returned by `useBlob` provides these methods:
### JSON operations
```tsx
// Get JSON data
const data = await blob.getJSON(); // Returns null if not found
// Save JSON data
await blob.putJSON(data, options); // Returns { etag: string }
```
### String operations
```tsx
// Get string content
const text = await blob.getString(); // Returns null if not found
// Save string content
await blob.putString("Hello world", options); // Returns { etag: string }
```
### Binary operations
```tsx
// Get binary data with metadata
const result = await blob.getRaw(); // Returns null if not found
// Returns { content, contentType, etag, lastModified, size, metadata }
// Save binary data
await blob.putRaw(buffer, options); // Returns { etag: string }
```
### Stream operations
```tsx
// Get data as a stream
const stream = await blob.getStream();
// Save data from a stream
await blob.putStream(readableStream, options); // Returns { etag: string }
```
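For example, a blob can be read as a stream and buffered into memory. This sketch assumes the returned stream is async-iterable (as Node.js streams and modern web `ReadableStream`s are) and that an `exports/report.csv` blob exists:
```tsx
const fileBlob = useBlob("exports/report.csv");

// Read the blob as a stream and collect the chunks
const stream = await fileBlob.getStream();
const chunks: Uint8Array[] = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}
const csv = Buffer.concat(chunks).toString("utf-8");
```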
### Metadata operations
```tsx
// Check if blob exists
const exists = await blob.exists(); // Returns boolean
// Delete blob
await blob.delete();
// Get metadata
const metadata = await blob.getMetadata(); // Returns null if not found
// Update metadata
await blob.updateMetadata({
key1: "value1",
key2: "value2",
});
```
## Options object
Many methods accept an options object with these properties:
```tsx
{
contentType?: string, // MIME type of the content
etag?: string, // For optimistic concurrency control
metadata?: { // Custom metadata key-value pairs
[key: string]: string
}
}
```
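For example, options can be passed when writing a blob; the key and metadata below are illustrative:
```tsx
const reportBlob = useBlob("reports/q1.json");

// Attach a content type and custom metadata while writing
await reportBlob.putJSON(
  { quarter: "Q1", revenue: 1000 },
  {
    contentType: "application/json",
    metadata: { generatedBy: "reporting-workflow" },
  },
);
```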
## useBlobStorage
Hook that provides direct access to the blob storage instance, allowing you to perform blob operations across multiple keys.
### Import
```tsx
import { useBlobStorage } from "@gensx/storage";
```
### Signature
```tsx
function useBlobStorage(): BlobStorage;
```
### Example
```tsx
const blobStorage = useBlobStorage();
```
The blob storage object provides these methods:
### getBlob
Get a blob object for a specific key.
```tsx
const blobStorage = useBlobStorage();
const userBlob = blobStorage.getBlob("users/123.json");
```
### listBlobs
List all blob keys with an optional prefix filter.
```tsx
const blobStorage = useBlobStorage();
const userBlobKeys = await blobStorage.listBlobs("users/");
console.log(userBlobKeys); // ["users/123.json", "users/456.json"]
```
### blobExists
Check if a blob exists.
```tsx
const blobStorage = useBlobStorage();
const exists = await blobStorage.blobExists("users/123.json");
if (exists) {
console.log("User profile exists");
}
```
### deleteBlob
Delete a blob.
```tsx
const blobStorage = useBlobStorage();
const { deleted } = await blobStorage.deleteBlob("temp/file.json");
if (deleted) {
console.log("Temporary file deleted");
}
```
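Together, these methods make it easy to operate across many keys at once. A minimal sketch that cleans up everything under a hypothetical `temp/` prefix:
```tsx
const blobStorage = useBlobStorage();

// Delete every blob under the temp/ prefix
const tempKeys = await blobStorage.listBlobs("temp/");
for (const key of tempKeys) {
  await blobStorage.deleteBlob(key);
}
```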
## BlobClient
The `BlobClient` class provides a way to interact with GenSX blob storage outside of the GenSX workflow context, such as from regular Node.js applications or server endpoints.
### Import
```tsx
import { BlobClient } from "@gensx/storage";
```
### Constructor
```tsx
constructor(props?: BlobProviderProps)
```
#### Parameters
| Parameter | Type | Default | Description |
| --------- | ------------------ | ------- | ------------------------------------- |
| `props` | `BlobProviderProps` | `{}` | Optional configuration properties |
#### Example
```tsx
// Default client (uses filesystem locally, cloud in production)
const blobClient = new BlobClient();
// Explicitly use filesystem storage
const localClient = new BlobClient({
kind: "filesystem",
rootDir: "./my-data"
});
// Explicitly use cloud storage with a prefix
const cloudClient = new BlobClient({
kind: "cloud",
defaultPrefix: "app-data/"
});
```
### Methods
#### getBlob
Get a blob instance for a specific key.
```tsx
getBlob(key: string): Blob
```
##### Example
```tsx
const userBlob = blobClient.getBlob("users/123.json");
const profile = await userBlob.getJSON();
// Update the profile
profile.lastLogin = new Date().toISOString();
await userBlob.putJSON(profile);
```
#### listBlobs
List all blob keys with an optional prefix.
```tsx
async listBlobs(prefix?: string): Promise<string[]>
```
##### Example
```tsx
const chatHistory = await blobClient.listBlobs("chats/");
console.log("Chat histories:", chatHistory);
```
#### blobExists
Check if a blob exists.
```tsx
async blobExists(key: string): Promise<boolean>
```
##### Example
```tsx
if (await blobClient.blobExists("settings.json")) {
console.log("Settings file exists");
} else {
console.log("Need to create settings file");
}
```
#### deleteBlob
Delete a blob.
```tsx
async deleteBlob(key: string): Promise<{ deleted: boolean }>
```
##### Example
```tsx
const { deleted } = await blobClient.deleteBlob("temp/cache.json");
if (deleted) {
console.log("Cache file was deleted");
}
```
### Usage in applications
The BlobClient is particularly useful when you need to access blob storage from:
- Express.js or Next.js API routes
- Background jobs or workers
- Custom scripts or tools
- Any Node.js application outside the GenSX workflow context
```tsx
// Example: Using BlobClient in an Express handler
import express from 'express';
import { BlobClient } from '@gensx/storage';
const app = express();
const blobClient = new BlobClient();
// Save user data endpoint
app.post('/api/users/:userId', async (req, res) => {
try {
const { userId } = req.params;
const userBlob = blobClient.getBlob(`users/${userId}.json`);
// Get existing profile or create new one
const existingProfile = await userBlob.getJSON() || {};
// Merge with updated data
const updatedProfile = {
...existingProfile,
...req.body,
updatedAt: new Date().toISOString()
};
// Save the updated profile
await userBlob.putJSON(updatedProfile);
res.json({ success: true });
} catch (error) {
console.error('Error saving user data:', error);
res.status(500).json({ error: 'Failed to save user data' });
}
});
app.listen(3000, () => {
console.log('Server running on port 3000');
});
```
# SQL database
GenSX's SQL database service provides zero-configuration SQLite databases. It enables you to create, query, and manage relational data without worrying about infrastructure or database administration. Because new databases can be provisioned in milliseconds, they are perfect for per-agent or per-workflow state.
Cloud databases are powered by [Turso](https://turso.tech), with several properties that make them ideal for AI agents and workflows:
- **Millisecond provisioning**: Databases are created on-demand in milliseconds, making them perfect for ephemeral workloads like parsing and querying user-uploaded CSVs or creating per-agent structured data stores.
- **Strong consistency**: All operations are linearizable, maintaining an ordered history, with writes fully serialized and subsequent writes awaiting transaction completion.
- **Zero configuration**: Like all GenSX storage components, databases work identically in both development and production environments with no setup required.
- **Local development**: Uses libsql locally to enable a fast, isolated development loop without external dependencies.
## Basic usage
To use SQL databases in your GenSX application:
1. Install the storage package:
```bash
npm install @gensx/storage
```
2. Add the `DatabaseProvider` to your workflow:
```tsx
import { DatabaseProvider } from "@gensx/storage";
const Workflow = ({ input }) => (
  <DatabaseProvider>
    {/* Components here can call useDatabase */}
  </DatabaseProvider>
);
```
3. Access databases within your components using the `useDatabase` hook:
```tsx
import { useDatabase } from "@gensx/storage";
const db = await useDatabase("my-database");
```
### Executing queries
The simplest way to interact with a database is by executing SQL queries:
```tsx
import * as gensx from "@gensx/core";
import { useDatabase } from "@gensx/storage";
const QueryTeamStats = gensx.Component("QueryTeamStats", async ({ team }) => {
// Get access to a database (creates it if it doesn't exist)
const db = await useDatabase("baseball");
// Execute SQL queries with parameters
const result = await db.execute("SELECT * FROM players WHERE team = ?", [
team,
]);
// Access query results
console.log(result.columns); // Column names
console.log(result.rows); // Data rows
console.log(result.rowsAffected); // Number of rows affected
return result.rows;
});
```
### Creating tables and initializing data
You can create database schema and populate it with data:
```tsx
const InitializeDatabase = gensx.Component("InitializeDatabase", async () => {
const db = await useDatabase("baseball");
// Create table if it doesn't exist
await db.execute(`
CREATE TABLE IF NOT EXISTS baseball_stats (
player TEXT,
team TEXT,
position TEXT,
at_bats INTEGER,
hits INTEGER,
runs INTEGER,
home_runs INTEGER,
rbi INTEGER,
batting_avg REAL
)
`);
// Check if data already exists
const result = await db.execute("SELECT COUNT(*) FROM baseball_stats");
const count = result.rows[0][0] as number;
if (count === 0) {
// Insert sample data
await db.execute(`
INSERT INTO baseball_stats (player, team, position, at_bats, hits, runs, home_runs, rbi, batting_avg)
VALUES
('Marcus Bennett', 'Portland Pioneers', '1B', 550, 85, 25, 32, 98, 0.312),
('Ethan Carter', 'San Antonio Stallions', 'SS', 520, 92, 18, 24, 76, 0.298)
`);
}
return "Database initialized";
});
```
## Practical examples
### Text-to-SQL agent
One of the most powerful applications is building a natural language to SQL interface:
```tsx
import * as gensx from "@gensx/core";
import { GSXChatCompletion, GSXTool } from "@gensx/openai";
import { useDatabase } from "@gensx/storage";
import { z } from "zod";
// Create a tool that executes SQL queries
const queryTool = new GSXTool({
name: "execute_query",
description: "Execute a SQL query against the baseball database",
schema: z.object({
query: z.string().describe("The SQL query to execute"),
}),
run: async ({ query }) => {
const db = await useDatabase("baseball");
const result = await db.execute(query);
return JSON.stringify(result, null, 2);
},
});
// SQL Copilot component that answers questions using SQL
const SqlCopilot = gensx.Component("SqlCopilot", ({ question }) => (
  <GSXChatCompletion
    model="gpt-4o-mini"
    messages={[
      { role: "system", content: "Answer questions about the baseball database using the execute_query tool." },
      { role: "user", content: question },
    ]}
    tools={[queryTool]}
  >
    {(result) => result.choices[0].message.content}
  </GSXChatCompletion>
));
```
### Transactions with batch operations
For operations that need to be performed atomically, you can use batch operations:
```tsx
const TransferFunds = gensx.Component(
"TransferFunds",
async ({ fromAccount, toAccount, amount }) => {
const db = await useDatabase("banking");
try {
// Execute multiple statements as a transaction
const result = await db.batch([
{
sql: "UPDATE accounts SET balance = balance - ? WHERE account_id = ?",
params: [amount, fromAccount],
},
{
sql: "UPDATE accounts SET balance = balance + ? WHERE account_id = ?",
params: [amount, toAccount],
},
]);
return { success: true, rowsAffected: result.rowsAffected };
} catch (error) {
return { success: false, error: error.message };
}
},
);
```
### Multi-statement scripts
For complex database changes, you can execute multiple statements at once:
```tsx
const SetupUserSystem = gensx.Component("SetupUserSystem", async () => {
const db = await useDatabase("users");
// Execute a SQL script with multiple statements
await db.executeMultiple(`
CREATE TABLE IF NOT EXISTS users (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
email TEXT UNIQUE
);
CREATE TABLE IF NOT EXISTS user_preferences (
user_id TEXT PRIMARY KEY,
theme TEXT DEFAULT 'light',
notifications BOOLEAN DEFAULT 1,
FOREIGN KEY (user_id) REFERENCES users(id)
);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
`);
return "User system set up successfully";
});
```
### Database schema migrations
When you need to update your database schema, use migrations:
```tsx
const MigrateDatabase = gensx.Component(
"MigrateDatabase",
async ({ version }) => {
const db = await useDatabase("app_data");
if (version === "v2") {
// Run migrations with foreign key checks disabled
await db.migrate(`
ALTER TABLE products ADD COLUMN category TEXT;
CREATE TABLE product_categories (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
description TEXT
);
`);
return "Database migrated to v2";
}
return "No migration needed";
},
);
```
## Development vs. production
GenSX SQL databases work identically in both local development and cloud environments:
- **Local development**: Databases are stored as SQLite files in the `.gensx/databases` directory by default
- **Cloud deployment**: Databases are automatically provisioned in the cloud
If you don't specify a "kind" that the framework auto-infers this value for you based on the runtime environment.
No code changes are needed when moving from development to production.
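If you do want to pin the backend explicitly, for example to force local SQLite files during testing, you can set the provider props yourself. A minimal sketch (the `rootDir` value is illustrative):
```tsx
import { DatabaseProvider } from "@gensx/storage";

// Force local SQLite files regardless of where the workflow runs
const TestWorkflow = ({ input }) => (
  <DatabaseProvider kind="filesystem" rootDir="./.gensx/databases">
    {/* Components here can call useDatabase */}
  </DatabaseProvider>
);
```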
## Use cases
### Data-backed agents
Create agents that can query and update structured data, using the components defined above:
```tsx
const DataAnalyst = gensx.Component("DataAnalyst", async ({ query }) => {
// Initialize the database with the baseball stats
await InitializeDatabase();
// Use the SQL Copilot to answer the question
return <SqlCopilot question={query} />;
});
```
### User data storage
Store user data and preferences in a structured format:
```tsx
const UserPreferences = gensx.Component(
"UserPreferences",
async ({ userId, action, data }) => {
const db = await useDatabase("user_data");
if (action === "get") {
const result = await db.execute(
"SELECT * FROM preferences WHERE user_id = ?",
[userId],
);
return result.rows.length > 0 ? result.rows[0] : null;
} else if (action === "set") {
await db.execute(
"INSERT OR REPLACE INTO preferences (user_id, settings) VALUES (?, ?)",
[userId, JSON.stringify(data)],
);
return { success: true };
}
},
);
```
### Collaborative workflows
Build workflows that share structured data between steps:
```tsx
const DataCollector = gensx.Component("DataCollector", async ({ source }) => {
const db = await useDatabase("workflow_data");
// Collect data from source and store in database
// ...
return { success: true };
});
const DataAnalyzer = gensx.Component("DataAnalyzer", async () => {
const db = await useDatabase("workflow_data");
// Analyze data from database
// ...
return { results: "..." };
});
```
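A parent component can then run both steps against the same `workflow_data` database. The composition below is a sketch using the components above:
```tsx
const CollectAndAnalyze = gensx.Component(
  "CollectAndAnalyze",
  async ({ source }) => {
    // Both steps read and write the shared "workflow_data" database
    await DataCollector({ source });
    return await DataAnalyzer();
  },
);
```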
## Reference
See the [database component reference](docs/component-reference/storage-components/database-reference) for full details.
# Search
GenSX's Cloud search service provides full-text and vector search for AI applications. It enables you to store, query, and manage vector embeddings for semantic search, retrieval-augmented generation (RAG), and other AI use cases.
Search is powered by [turbopuffer](https://turbopuffer.com/) and is fully featured and ready for AI workloads:
- **Combined vector and keyword search**: Perform hybrid searches using both semantic similarity (vectors) and keyword matching (BM25).
- **Millisecond query latency**: Get results quickly, even with large vector collections.
- **Flexible filtering**: Apply metadata filters to narrow down search results based on categories, timestamps, or any custom attributes.
## Basic usage
To use search in your GenSX application:
1. Install the storage package:
```bash
npm install @gensx/storage
```
2. Add the `SearchProvider` to your workflow:
```tsx
import { SearchProvider } from "@gensx/storage";
const Workflow = gensx.Component("SearchWorkflow", { input }) => (
);
```
3. Access search namespaces within your components using the `useSearch` hook:
```tsx
import { useSearch } from "@gensx/storage";
const search = await useSearch("documents");
```
### Storing vector embeddings
The first step in using search is to convert your data into vector embeddings and store them:
```tsx
import * as gensx from "@gensx/core";
import { OpenAIEmbedding } from "@gensx/openai";
import { useSearch } from "@gensx/storage";
const IndexDocuments = gensx.Component(
"IndexDocuments",
async ({ documents }) => {
// Get access to a search namespace
const search = await useSearch("documents");
// Generate embeddings for the documents
const embeddings = await OpenAIEmbedding.run({
model: "text-embedding-3-small",
input: documents.map((doc) => doc.text),
});
// Store the embeddings with original text as metadata
await search.write({
upsertRows: documents.map((doc, index) => ({
id: doc.id,
vector: embeddings.data[index].embedding,
text: doc.text,
category: doc.category,
createdAt: new Date().toISOString(),
})),
distanceMetric: "cosine_distance",
});
return { success: true, count: documents.length };
},
);
```
### Searching for similar documents
Once you've stored embeddings, you can search for semantically similar content:
```tsx
const SearchDocuments = gensx.Component(
"SearchDocuments",
async ({ query, category }) => {
// Get access to the search namespace
const search = await useSearch("documents");
// Generate an embedding for the query
const embedding = await OpenAIEmbedding.run({
model: "text-embedding-3-small",
input: query,
});
// Build query options
const queryOptions = {
vector: embedding.data[0].embedding,
includeAttributes: true,
topK: 5, // Return top 5 results
};
// Add filters if category is specified
if (category) {
queryOptions.filters = {
where: { category: { $eq: category } },
};
}
// Perform the search
const results = await search.query(queryOptions);
// Process and return results
return results.map((result) => ({
id: result.id,
text: result.attributes?.text,
score: result.score,
}));
},
);
```
## Building a RAG application
Retrieval-Augmented Generation (RAG) is one of the most common use cases for vector search. Here's how to build a complete RAG workflow:
### Step 1: Index your documents
First, create a component to prepare and index your documents:
```tsx
const PrepareDocuments = gensx.Component("PrepareDocuments", async () => {
// Sample baseball player data
const documents = [
{
id: "1",
text: "Marcus Bennett is a first baseman for the Portland Pioneers. He has 32 home runs this season.",
category: "player",
},
{
id: "2",
text: "Ethan Carter plays shortstop for the San Antonio Stallions with 24 home runs.",
category: "player",
},
{
id: "3",
text: "The Portland Pioneers are leading the Western Division with a 92-70 record.",
category: "team",
},
];
// Index the documents
return <IndexDocuments documents={documents} />;
});
```
### Step 2: Create a query tool
Next, create a tool that can access your search index:
```tsx
import { GSXTool } from "@gensx/openai";
import { z } from "zod";
// Define a tool to query the search index
const queryTool = new GSXTool({
name: "query",
description: "Query the baseball knowledge base",
schema: z.object({
query: z.string().describe("The text query to search for"),
}),
run: async ({ query }) => {
// Access search index
const search = await useSearch("baseball");
// Generate query embedding
const embedding = await OpenAIEmbedding.run({
model: "text-embedding-3-small",
input: query,
});
// Search for relevant documents
const results = await search.query({
vector: embedding.data[0].embedding,
includeAttributes: true,
});
// Return formatted results
return JSON.stringify(
results.map((r) => r.attributes?.text),
null,
2,
);
},
});
```
### Step 3: Create the RAG agent
Now, create an agent that uses the query tool to access relevant information:
```tsx
const RagAgent = gensx.Component("RagAgent", ({ question }) => (
  <GSXChatCompletion
    model="gpt-4o-mini"
    messages={[{ role: "user", content: question }]}
    tools={[queryTool]}
  >
    {(result) => result.choices[0].message.content}
  </GSXChatCompletion>
));
```
### Step 4: Combine everything in a workflow
Finally, put it all together in a workflow:
```tsx
const RagWorkflow = gensx.Component(
"RagWorkflow",
async ({ question, shouldReindex }) => {
// Optionally reindex documents
if (shouldReindex) {
await PrepareDocuments();
}
// Use the RAG agent to answer the question
return <RagAgent question={question} />;
},
);
```
## Practical examples
### Agent memory system
One powerful application of vector search is creating a long-term memory system for AI agents:
```tsx
import * as gensx from "@gensx/core";
import { OpenAIEmbedding } from "@gensx/openai";
import { useSearch } from "@gensx/storage";
// Component to store a memory
const StoreMemory = gensx.Component(
"StoreMemory",
async ({ userId, memory, importance = "medium" }) => {
const search = await useSearch(`memories-${userId}`);
// Generate embedding for this memory
const embedding = await OpenAIEmbedding.run({
model: "text-embedding-3-small",
input: memory,
});
// Store the memory with metadata
await search.write({
upsertRows: [
{
id: `memory-${Date.now()}`,
vector: embedding.data[0].embedding,
content: memory,
timestamp: new Date().toISOString(),
importance: importance, // "high", "medium", "low"
source: "user-interaction",
},
],
distanceMetric: "cosine_distance",
});
return { success: true };
},
);
// Component to recall relevant memories
const RecallMemories = gensx.Component(
"RecallMemories",
async ({ userId, context, maxResults = 5 }) => {
const search = await useSearch(`memories-${userId}`);
// Generate embedding for the context
const embedding = await OpenAIEmbedding.run({
model: "text-embedding-3-small",
input: context,
});
// Query for relevant memories, prioritizing important ones
const results = await search.query({
vector: embedding.data[0].embedding,
topK: maxResults,
includeAttributes: true,
// Optional: rank by both relevance and importance
rankBy: ["attributes.importance", "asc"],
});
// Format memories for the agent
return results.map((result) => ({
content: result.attributes?.content,
timestamp: result.attributes?.timestamp,
relevance: result.score.toFixed(3),
}));
},
);
// Component that uses memories in a conversation
const MemoryAwareAgent = gensx.Component(
"MemoryAwareAgent",
async ({ userId, userMessage }) => {
// Recall relevant memories based on the current conversation
const memories = await RecallMemories({
userId,
context: userMessage,
maxResults: 3,
});
// Use memories to inform the response
const response = await ChatCompletion.run({
model: "gpt-4o-mini",
messages: [
{
role: "system",
content: `You are an assistant with memory. Consider these relevant memories about this user:
${memories.map((m) => `[${m.timestamp}] ${m.content} (relevance: ${m.relevance})`).join("\n")}`,
},
{ role: "user", content: userMessage },
],
});
// Store this interaction as a new memory
await StoreMemory({
userId,
memory: `User asked: "${userMessage}". I replied: "${response}"`,
importance: "medium",
});
return response;
},
);
```
### Knowledge base search
Another powerful application is a knowledge base with faceted search capabilities:
```tsx
const SearchKnowledgeBase = gensx.Component(
"SearchKnowledgeBase",
async ({ query, filters = {} }) => {
const search = await useSearch("knowledge-base");
// Generate embedding for the query
const embedding = await OpenAIEmbedding.run({
model: "text-embedding-3-small",
input: query,
});
// Build filter conditions from user-provided filters
let filterConditions = ["And", []];
if (filters.category) {
filterConditions[1].push(["category", "Eq", filters.category]);
}
if (filters.dateRange) {
filterConditions[1].push(["publishedDate", "Gte", filters.dateRange.start]);
filterConditions[1].push(["publishedDate", "Lte", filters.dateRange.end]);
}
if (filters.tags && filters.tags.length > 0) {
filterConditions[1].push(["tags", "ContainsAny", filters.tags]);
}
// Perform hybrid search (vector + keyword) with filters
const results = await search.query({
vector: embedding.data[0].embedding,
rankBy: ["text", "BM25", query],
includeAttributes: true,
topK: 10,
filters: filterConditions[1].length > 0 ? filterConditions : undefined,
});
// Return formatted results
return results.map((result) => ({
title: result.attributes?.title,
snippet: result.attributes?.snippet,
url: result.attributes?.url,
category: result.attributes?.category,
tags: result.attributes?.tags,
score: result.score,
}));
},
);
```
## Advanced usage
### Filtering by metadata
Use filters to narrow down search results:
```tsx
const search = await useSearch("articles");
// Search with filters
const results = await search.query({
vector: queryEmbedding,
filters: [
"And",
[
["category", "Eq", "sports"],
["publishDate", "Gte", "2023-01-01"],
["publishDate", "Lt", "2024-01-01"],
["author", "In", ["Alice", "Bob", "Carol"]],
],
],
});
```
### Updating schema
Manage your vector collection's schema:
```tsx
const search = await useSearch("products");
// Get current schema
const currentSchema = await search.getSchema();
// Update schema to add new fields
await search.updateSchema({
...currentSchema,
newField: { type: "number" },
anotherField: { type: "string[]" },
});
```
## Reference
See the [search component reference](docs/component-reference/storage-components/search-reference) for full details.
# Blob storage
Blob storage provides zero-configuration persistent storage for your GenSX applications. It enables you to store JSON, text, or binary data for your agents and workflows without worrying about managing infrastructure.
## Basic usage
To use blob storage in your GenSX application:
1. Install the storage package:
```bash
npm install @gensx/storage
```
2. Add the `BlobProvider` to your workflow:
```tsx
import { BlobProvider } from "@gensx/storage";
const Workflow = ({ input }) => (
  <BlobProvider>
    {/* Components here can call useBlob */}
  </BlobProvider>
);
```
3. Access blobs within your components using the `useBlob` hook:
```tsx
import { useBlob } from "@gensx/storage";
const blob = useBlob("your-key.json");
```
### Reading blobs
The `useBlob` hook provides simple methods to read different types of data:
```tsx
import { useBlob } from "@gensx/storage";
// Read JSON data
const profileBlob = useBlob("users/profile.json");
const profile = await profileBlob.getJSON();
console.log(profile?.name);
// Read text data
const notesBlob = useBlob("notes/meeting.txt");
const notes = await notesBlob.getString();
// Read binary data
const imageBlob = useBlob("images/photo.jpg");
const image = await imageBlob.getRaw();
console.log(image?.contentType); // "image/jpeg"
```
### Writing blobs
You can write data in various formats:
```tsx
import { useBlob } from "@gensx/storage";
// Write JSON data
const profileBlob = useBlob("users/profile.json");
await profileBlob.putJSON({ name: "Alice", preferences: { theme: "dark" } });
// Write text data
const notesBlob = useBlob("notes/meeting.txt");
await notesBlob.putString("Meeting agenda:\n1. Project updates\n2. Action items");
// Write binary data
const imageBlob = useBlob("images/photo.jpg");
await imageBlob.putRaw(imageBuffer, {
contentType: "image/jpeg",
metadata: { originalName: "vacation.jpg" }
});
```
## Practical examples
### Persistent chat threads
One of the most common use cases for blob storage is maintaining conversation history across multiple interactions:
```tsx
import * as gensx from "@gensx/core";
import { ChatCompletion } from "@gensx/openai";
import { useBlob } from "@gensx/storage";
interface ChatMessage {
role: "system" | "user" | "assistant";
content: string;
}
const ChatWithMemory = gensx.Component(
"ChatWithMemory",
async ({ userInput, threadId }) => {
// Get a reference to the thread's storage
const blob = useBlob(`chats/${threadId}.json`);
// Load existing messages or start with a system prompt
const messages = (await blob.getJSON<ChatMessage[]>()) ?? [
{
role: "system",
content: "You are a helpful assistant.",
},
];
// Add the new user message
messages.push({ role: "user", content: userInput });
// Generate a response using the full conversation history
const response = await ChatCompletion.run({
model: "gpt-4o-mini",
messages,
});
// Save the assistant's response to the history
messages.push({ role: "assistant", content: response });
await blob.putJSON(messages);
return response;
},
);
```
### Memory for agents
For more complex agents, you can store structured memory:
```tsx
interface AgentMemory {
facts: string[];
tasks: { description: string; completed: boolean }[];
lastUpdated: string;
}
const AgentWithMemory = gensx.Component(
"AgentWithMemory",
async ({ input, agentId }) => {
// Load agent memory
const memoryBlob = useBlob(`agents/${agentId}/memory.json`);
const memory = (await memoryBlob.getJSON<AgentMemory>()) ?? {
facts: [],
tasks: [],
lastUpdated: new Date().toISOString(),
};
// Process input using memory
// ...
// Update and save memory
memory.facts.push("New fact learned from input");
memory.tasks.push({ description: "Follow up on X", completed: false });
memory.lastUpdated = new Date().toISOString();
await memoryBlob.putJSON(memory);
return "Response that uses memory context";
},
);
```
### Saving files
You can use blob storage to save and retrieve binary files like images:
```tsx
const StoreImage = gensx.Component(
"StoreImage",
async ({ imageBuffer, filename }) => {
const imageBlob = useBlob(`images/${filename}`);
// Save image with metadata
await imageBlob.putRaw(imageBuffer, {
contentType: "image/png",
metadata: {
uploadedAt: new Date().toISOString(),
pixelSize: "800x600",
},
});
return { success: true, path: `images/${filename}` };
},
);
const GetImage = gensx.Component("GetImage", async ({ filename }) => {
const imageBlob = useBlob(`images/${filename}`);
// Check if image exists
const exists = await imageBlob.exists();
if (!exists) {
return { found: false };
}
// Get the image with metadata
const image = await imageBlob.getRaw();
return {
found: true,
data: image?.content,
contentType: image?.contentType,
metadata: image?.metadata,
};
});
```
### Optimistic concurrency control
For scenarios where multiple processes might update the same data, you can use ETags to prevent conflicts:
```tsx
const UpdateCounter = gensx.Component(
"UpdateCounter",
async ({ counterName }) => {
const blob = useBlob(`counters/${counterName}.json`);
// Get current value and metadata
const counter = (await blob.getJSON<{ value: number }>()) ?? { value: 0 };
const metadata = await blob.getMetadata();
// Update counter
counter.value += 1;
try {
// Save with ETag to prevent conflicts
await blob.putJSON(counter, {
etag: metadata?.etag,
});
return { success: true, value: counter.value };
} catch (error) {
if (error.name === "BlobConflictError") {
return {
success: false,
message: "Counter was updated by another process",
};
}
throw error;
}
},
);
```
## Development vs. production
GenSX blob storage works identically in both local development and cloud environments:
- **Local development**: Blobs are stored in the `.gensx/blobs` directory by default
- **Cloud deployment**: Blobs are automatically stored in cloud storage
If you don't specify a "kind" that the framework auto-infers this value for you based on the runtime environment.
No code changes are needed when moving from development to production.
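If you do want to pin the backend explicitly, pass the provider props yourself. A minimal sketch (the `rootDir` value is illustrative):
```tsx
import { BlobProvider } from "@gensx/storage";

// Force local filesystem blobs regardless of where the workflow runs
const TestWorkflow = ({ input }) => (
  <BlobProvider kind="filesystem" rootDir="./.gensx/blobs">
    {/* Components here can call useBlob */}
  </BlobProvider>
);
```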
## Reference
See the [blob storage component reference](docs/component-reference/storage-components/blob-reference) for full details.
# gensx env unselect
The `gensx env unselect` command deselects the currently selected environment for your project, so subsequent commands require you to specify an environment explicitly.
## Usage
```bash
gensx env unselect [options]
```
## Options
| Option | Description |
| ----------------------- | ---------------------------------------------------------------------------- |
| `-p, --project <name>` | Project name to unselect the environment in. |
| `-h, --help` | Display help for the command. |
## Examples
```bash
# Unselect the current environment
gensx env unselect
# Unselect environment in a specific project
gensx env unselect --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to unselect environments (`gensx login`)
- Unselecting an environment does not delete it; it only removes the selection.
- You can check if an environment is selected using `gensx env`
- To select a new environment, use `gensx env select`
- After unselecting, you'll need to specify the environment for each command that requires one
# gensx env
The `gensx env` command displays the name of the currently selected environment.
## Usage
```bash
gensx env [options]
```
## Options
| Option | Description |
| ----------------------- | ---------------------------------------------------------------------------- |
| `-p, --project <name>` | Project name to show environment details for. |
| `-h, --help` | Display help for the command. |
## Examples
```bash
# Show the current environment
gensx env
# Show the current environment for a specific project
gensx env --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to show environment details (`gensx login`)
- You can use this command to verify your current environment before running important operations
- If no environment is selected, the command will indicate this
# gensx env select
The `gensx env select` command sets a specific environment as the active environment for your current project. This environment will be used by default for subsequent commands like `deploy` and `run`.
## Usage
```bash
gensx env select <name> [options]
```
## Arguments
| Argument | Description |
| -------- | ------------------------------------------------------------------------------------------ |
| `<name>` | Name of the environment to select. |
## Options
| Option | Description |
| ----------------------- | ---------------------------------------------------------------------------- |
| `-p, --project <name>` | Project name to select the environment in. |
| `-h, --help` | Display help for the command. |
## Description
This command:
1. Sets the specified environment as active for your current project
2. Updates your local configuration to remember this selection
3. Makes this environment the default target for subsequent commands
After selecting an environment:
- `gensx deploy` will deploy to this environment by default
- `gensx run` will run workflows in this environment by default
- You can still override the environment for specific commands using the `--env` option
## Examples
```bash
# Select the development environment
gensx env select dev
# Select a production environment in a specific project
gensx env select production --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to select environments (`gensx login`)
- The selected environment persists across CLI sessions
- You can check the currently selected environment using `gensx env`
- To unselect an environment, use `gensx env unselect`
# gensx env ls
The `gensx env ls` command lists all environments in your GenSX project.
## Usage
```bash
gensx env ls [options]
```
## Options
| Option | Description |
| ----------------------- | ---------------------------------------------------------------------------- |
| `-p, --project <name>` | Project name to list environments for. |
| `-h, --help` | Display help for the command. |
## Examples
```bash
# List all environments in the current project
gensx env ls
# List environments in a specific project
gensx env ls --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to list environments (`gensx login`)
# gensx env create
The `gensx env create` command creates a new environment in your GenSX project. Environments allow you to manage different deployment configurations (like development, staging, and production) for your workflows.
## Usage
```bash
gensx env create <name> [options]
```
## Arguments
| Argument | Description |
| -------- | ------------------------------------------------------------------------------------------ |
| `<name>` | Name of the environment to create (e.g., "dev", "staging", "production"). |
## Options
| Option | Description |
| ----------------------- | ---------------------------------------------------------------------------- |
| `-p, --project <name>` | Project name to create the environment in. |
| `-h, --help` | Display help for the command. |
## Examples
```bash
# Create a development environment
gensx env create dev
# Create a production environment in a specific project
gensx env create production --project my-app
```
## Notes
- You must be logged in to GenSX Cloud to create environments (`gensx login`)
- Each project can have multiple environments
- Environment names should be descriptive and follow a consistent naming convention