Quickstart

In this quickstart, you’ll learn how to get up and running with GenSX. GenSX is a simple TypeScript framework for building complex LLM applications using JSX. If you haven’t already, check out the basic concepts to learn more about how GenSX works.

Prerequisites

Before getting started, make sure you have the following:

  • Node.js and a package manager of your choice (npm, pnpm, or yarn)
  • An OpenAI API key, which you’ll need to run the example workflows

Install the gensx CLI

You can install the gensx CLI using your package manager of choice:

# For a global installation
npm i -g gensx

# or prefix every command with npx
npx gensx

Login to GenSX Cloud (optional)

If you want to be able to visualize your workflows and view traces, you’ll need to login to GenSX Cloud:

[Screenshot: an example workflow trace in GenSX Cloud]

Run the following command to log into GenSX Cloud:

# If you installed the CLI globally
gensx login

# Using npx
npx gensx login

This step is optional; you can skip it for now if you prefer. Once you’re logged in, traces are automatically saved to the cloud so you can visualize and debug workflow executions.

After you run gensx login, hit enter and your browser will open the login page.

[Screenshot: the GenSX Cloud login page]

Once you’ve logged in, you’ll see the following success message:

[Screenshot: login success message]

Now you’re ready to create a new workflow! Return to your terminal for the next steps.

Create a new workflow

To get started, run the following command in an empty directory with your package manager of choice. This will scaffold a new GenSX project.

# If you installed the CLI globally
gensx new .

# Using npx
npx gensx new .

When creating a new project, you’ll be prompted to select IDE rules to add to your project. These rules help AI assistants like Claude, Cursor, Cline, and Windsurf understand your GenSX project better, providing more accurate code suggestions and help.

You can also specify IDE rules directly using the --ide-rules flag:

# Install Cline and Windsurf rules
gensx new . --ide-rules cline,windsurf

# Skip IDE rule selection
gensx new . --skip-ide-rules

You can also install or update these rules later using npx:

# Install Claude.md template
npx @gensx/claude-md

# Install Cursor rules
npx @gensx/cursor-rules

# Install Cline rules
npx @gensx/cline-rules

# Install Windsurf rules
npx @gensx/windsurf-rules

This will create a new starter project, ready for you to run your first workflow:

[Screenshot: output of creating a new project]

In index.tsx, you’ll find a simple OpenAI chat completion component:

import * as gensx from "@gensx/core";
import { OpenAIProvider, ChatCompletion } from "@gensx/openai";

interface RespondProps {
  userInput: string;
}
type RespondOutput = string;

const Respond = gensx.Component<RespondProps, RespondOutput>(
  "Respond",
  async ({ userInput }) => {
    return (
      <ChatCompletion
        model="gpt-4o-mini"
        messages={[
          {
            role: "system",
            content:
              "You are a helpful assistant. Respond to the user's input.",
          },
          { role: "user", content: userInput },
        ]}
      />
    );
  },
);

const WorkflowComponent = gensx.Component<{ userInput: string }, string>(
  "Workflow",
  ({ userInput }) => (
    <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
      <Respond userInput={userInput} />
    </OpenAIProvider>
  ),
);

const workflow = gensx.Workflow("MyGSXWorkflow", WorkflowComponent);

const result = await workflow.run({
  userInput: "Hi there! Say 'Hello, World!' and nothing else.",
});

console.log(result);

The component is executed through gensx.Workflow.run(), which processes the JSX tree from top to bottom. In this example:

  1. First, the OpenAIProvider component is initialized with your API key
  2. Then, the Respond component receives the userInput prop
  3. Inside Respond, a ChatCompletion component is created with the specified model and messages
  4. The result flows back up through the tree, ultimately returning the response from gpt-4o-mini

Components in GenSX are pure functions that take props and return outputs, making them easy to test and compose. The JSX structure makes the data flow clear and explicit: each component’s output can be used by its children through standard TypeScript/JavaScript.
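Because a component is just props in, output out, you can exercise one in isolation by wrapping it in a small workflow of its own. Here’s a minimal sketch following the same pattern as the snippet above (the TestHarness and TestRespond names are arbitrary):

const TestHarness = gensx.Component<{ userInput: string }, string>(
  "TestHarness",
  ({ userInput }) => (
    <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
      <Respond userInput={userInput} />
    </OpenAIProvider>
  ),
);

// Run the single component and inspect its output directly
const testWorkflow = gensx.Workflow("TestRespond", TestHarness);
console.log(await testWorkflow.run({ userInput: "Say 'test passed'." }));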

Running the workflow

To run the workflow, you’ll need to set the OPENAI_API_KEY environment variable.

# Set the environment variable
export OPENAI_API_KEY=<your-api-key>

# Run the project
pnpm dev

[Screenshot: output of running the project]

If you chose to log in, you can now view the trace for this run in GenSX Cloud by clicking the link:

[Screenshot: the workflow trace in GenSX Cloud]

The trace shows a flame graph of your workflow, including every component that executed with inputs and outputs.

Some components are hidden by default, but you can click the caret to expand them. Clicking on a component shows details about its input props and outputs.

For longer-running workflows, this view updates in real time as the workflow executes.

Combining components

The example above is a simple workflow with a single component. In practice, you’ll often want to combine multiple components to create more complex workflows.

Components can be nested to create multi-step workflows with each component’s output being passed through a child function. For example, let’s define two components: a Research component that gathers information about a topic, and a Writer component that uses that information to write a blog post.

// Research component that gathers information
const Research = gensx.Component<{ topic: string }, string>(
  "Research",
  async ({ topic }) => {
    return (
      <ChatCompletion
        model="gpt-4o-mini"
        messages={[
          {
            role: "system",
            content:
              "You are a research assistant. Provide key facts about the topic.",
          },
          { role: "user", content: topic },
        ]}
      />
    );
  },
);

// Writer component that uses research to write content
const WriteArticle = gensx.Component<
  { topic: string; research: string },
  string
>("WriteArticle", async ({ topic, research }) => {
  return (
    <ChatCompletion
      model="gpt-4o-mini"
      messages={[
        {
          role: "system",
          content:
            "You are a content writer. Use the research provided to write a blog post about the topic.",
        },
        { role: "user", content: `Topic: ${topic}\nResearch: ${research}` },
      ]}
    />
  );
});

Now you can combine these components using a child function:

// Combine components using child functions
const ResearchAndWrite = gensx.Component<{ topic: string }, string>(
  "ResearchAndWrite",
  ({ topic }) => (
    <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
      <Research topic={topic}>
        {(research) => <WriteArticle topic={topic} research={research} />}
      </Research>
    </OpenAIProvider>
  ),
);

const workflow = gensx.Workflow("ResearchAndWriteWorkflow", ResearchAndWrite, {
  printUrl: true,
});

In this example, the Research component gathers information about the topic and passes it to the WriteArticle component through the child function. The WriteArticle component then uses that research to write an article, which is returned as the result.
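Running the combined workflow looks just like the single-component example:

// Run the combined workflow; the result is the finished article
const article = await workflow.run({ topic: "latest quantum computing chips" });
console.log(article);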

Streaming

One common challenge with LLM workflows is handling streaming responses. Any given LLM call can return a response as a string or as a stream of tokens. Typically you’ll want the last component of your workflow to stream the response.

To take advantage of streaming, all you need to do is update the WriteArticle component to use StreamComponent and set the stream prop to true in the ChatCompletion component.

interface WriteArticleProps {
  topic: string;
  research: string;
  stream?: boolean;
}

const WriteArticle = gensx.StreamComponent<WriteArticleProps>(
  "WriteArticle",
  async ({ topic, research, stream }) => {
    return (
      <ChatCompletion
        stream={stream ?? false}
        model="gpt-4o-mini"
        messages={[
          {
            role: "system",
            content:
              "You are a content writer. Use the research provided to write a blog post about the topic.",
          },
          { role: "user", content: `Topic: ${topic}\nResearch: ${research}` },
        ]}
      />
    );
  },
);

const ResearchAndWrite = gensx.Component<{ topic: string }, string>(
  "ResearchAndWrite",
  ({ topic }) => (
    <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
      <Research topic={topic}>
        {(research) => (
          <WriteArticle topic={topic} research={research} stream={true} />
        )}
      </Research>
    </OpenAIProvider>
  ),
);

const workflow = gensx.Workflow("ResearchAndWriteWorkflow", ResearchAndWrite);

const stream = await workflow.run(
  { topic: "latest quantum computing chips" },
  { printUrl: true },
);

// Print the streaming response
for await (const chunk of stream) {
  process.stdout.write(chunk);
}

Now we can see our research step running before the write step:

[Screenshot: trace showing the research step running before the write step]

While this is nice, the real power of streaming components comes when you expand or refactor your workflow. You could now easily add an <EditArticle> component that streams the response to the user with minimal changes. There’s no extra plumbing needed beyond moving the stream={true} prop from the WriteArticle component to EditArticle.

const ResearchAndWrite = gensx.Component<{ topic: string }, string>(
  "ResearchAndWrite",
  ({ topic }) => (
    <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
      <Research topic={topic}>
        {(research) => (
          <WriteArticle topic={topic} research={research}>
            {(content) => <EditArticle content={content} stream={true} />}
          </WriteArticle>
        )}
      </Research>
    </OpenAIProvider>
  ),
);

const workflow = gensx.Workflow("ResearchAndWriteWorkflow", ResearchAndWrite, {
  printUrl: true,
});

const stream = await workflow.run({ topic: "latest quantum computing chips" });

[Screenshot: the blog writing workflow trace]

Running the dev server

Now that you’ve built your workflows, you can easily turn them into REST APIs.

GenSX provides a local development server whose REST APIs match the shape of workflows deployed to GenSX Cloud. You can run the dev server from the CLI:

# Start the development server
gensx start src/workflows.tsx

The development server provides several key features:

  • Hot reloading: Changes to your code are automatically detected and recompiled
  • API endpoints: Each workflow is exposed as a REST endpoint
  • Swagger UI: Interactive documentation for your workflows at http://localhost:1337/swagger-ui
  • Local storage: Built-in support for blob storage and databases

You’ll see something like this when you start the server:

🔍 Starting GenSX Dev Server...
Starting development server...
Compilation completed
Generating schema

📋 Available workflows:
- ResearchAndWriteWorkflow: http://localhost:1337/workflows/ResearchAndWriteWorkflow

Server is running. Press Ctrl+C to stop.

You can now test your workflow by sending requests to the provided URL using any HTTP client, or using the built-in Swagger UI at http://localhost:1337/swagger-ui.
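For example, here’s a minimal sketch of calling the endpoint with fetch, assuming the workflow input is sent as the JSON request body (the same shape the CLI’s --input flag uses below); consult the Swagger UI for the exact request and response contract:

// A sketch: POST the workflow input as JSON to the local endpoint.
// Check the Swagger UI for the exact request/response format.
const response = await fetch(
  "http://localhost:1337/workflows/ResearchAndWriteWorkflow",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ topic: "latest quantum computing chips" }),
  },
);

console.log(await response.json());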

Deploying your project to GenSX Cloud

Now that you’ve tested your APIs locally, you can deploy them to the cloud. GenSX Cloud provides serverless deployment with zero configuration:

# Deploy your project to GenSX Cloud
gensx deploy src/workflows.tsx -ev OPENAI_API_KEY

This command:

  1. Builds your TypeScript code for production
  2. Bundles all dependencies
  3. Uploads the package to GenSX Cloud
  4. Creates REST API endpoints for each workflow
  5. Configures serverless infrastructure

For production deployments, you can target a specific environment:

# Deploy to the production environment
gensx deploy src/workflows.tsx --env production

Running a workflow from the CLI

Once deployed, you can execute your workflows directly from the command line:

# Run a workflow synchronously
gensx run ResearchAndWriteWorkflow --input '{"topic":"quantum computing"}'

# Save the output to a file
gensx run ResearchAndWriteWorkflow --input '{"topic":"quantum computing"}' --output result.json

The CLI makes it easy to test your workflows and integrate them into scripts or automation.

Running a workflow from the GenSX console

The GenSX Cloud console provides a visual interface for managing and executing your workflows:

  1. Log in to app.gensx.com
  2. Navigate to your project and environment
  3. Select the workflow you want to run
  4. Click the “Run” button and enter your input
  5. View the results directly in the console

[Screenshot: running a workflow in the console]

The console also provides API documentation and code snippets for your workflows as well as execution history and tracing for all previous workflow runs.

Adding persistence with storage

Now that you’ve deployed your first workflow, you can add storage to build more sophisticated workflows. GenSX offers three types of built-in storage: blob storage, SQL databases, and full-text and vector search.

Blob storage

Use the useBlob hook to store and retrieve unstructured data:

import { BlobProvider, useBlob } from "@gensx/storage";

const ChatWithMemory = gensx.Component<
  { userInput: string; threadId: string },
  string
>("ChatWithMemory", async ({ userInput, threadId }) => {
  // Load chat history from cloud storage
  const blob = useBlob<ChatMessage[]>(`chat-history/${threadId}.json`);
  const history = (await blob.getJSON()) ?? [];

  // Add user message and get AI response
  // ...

  // Save updated history
  await blob.putJSON(updatedHistory);

  return response;
});

// Wrap your workflow with BlobProvider
const WorkflowComponent = gensx.Component<
  { userInput: string; threadId: string },
  string
>("Workflow", ({ userInput, threadId }) => (
  <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
    <BlobProvider>
      <ChatWithMemory userInput={userInput} threadId={threadId} />
    </BlobProvider>
  </OpenAIProvider>
));
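The snippet references a ChatMessage type without defining it; any JSON-serializable shape works with blob storage. A minimal assumed shape, mirroring the OpenAI message format used throughout this guide:

// An assumed shape for ChatMessage; blob storage accepts any
// JSON-serializable type.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}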

SQL database

Use the useDatabase hook for structured data:

import { DatabaseProvider, useDatabase } from "@gensx/storage";

const SqlCopilot = gensx.Component<{ question: string }, string>(
  "SqlCopilot",
  async ({ question }) => {
    const db = await useDatabase("baseball");

    // Set up schema if needed
    await db.executeMultiple(`
      CREATE TABLE IF NOT EXISTS baseball_stats (
        player TEXT PRIMARY KEY,
        team TEXT,
        batting_avg REAL
      )
    `);

    // Execute query
    const result = await db.execute("SELECT * FROM baseball_stats LIMIT 5");

    // Use the result to inform your LLM response
    // ...

    return response;
  },
);

// Wrap your workflow with DatabaseProvider
const WorkflowComponent = gensx.Component<{ question: string }, string>(
  "DatabaseWorkflowComponent",
  ({ question }) => (
    <DatabaseProvider>
      <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
        <SqlCopilot question={question} />
      </OpenAIProvider>
    </DatabaseProvider>
  ),
);

Full-text and vector search

Use the useSearch hook for semantic search and RAG applications:

import { SearchProvider, useSearch } from "@gensx/storage";
import { OpenAIEmbedding } from "@gensx/openai";

const SemanticSearch = gensx.Component<{ query: string }, any[]>(
  "SemanticSearch",
  async ({ query }) => {
    // Access a vector search namespace
    const namespace = await useSearch("documents");

    // Generate a vector embedding for the query
    const embedding = await OpenAIEmbedding.run({
      model: "text-embedding-3-small",
      input: query,
    });

    // Search for similar documents
    const results = await namespace.query({
      vector: embedding.data[0].embedding,
      topK: 5,
      includeAttributes: true,
    });

    return results;
  },
);

// Wrap your workflow with SearchProvider
const WorkflowComponent = gensx.Component<{ query: string }, any[]>(
  "SearchWorkflow",
  ({ query }) => (
    <OpenAIProvider apiKey={process.env.OPENAI_API_KEY}>
      <SearchProvider>
        <SemanticSearch query={query} />
      </SearchProvider>
    </OpenAIProvider>
  ),
);

Learning more

Explore the rest of the documentation to dive deeper into GenSX, and check out the example projects to see GenSX in action.
