- **Workflows are visible.** Every workflow is a graph you can read, debug, and change. Not buried in application code.
- **Mix any kind of step.** Code functions, LLM prompts, autonomous agents, human approval gates — all first-class nodes in the same graph.
- **Swap providers freely.** Route each step to OpenAI, Claude, Gemini, Ollama, or OpenRouter. Add fallback chains. The workflow definition doesn't change.
- **Production resilient.** Provider fallbacks, tool circuit breakers, response caching, checkpointing, and streaming out of the box.
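A fallback chain from the list above might be declared per agent. This is a hypothetical sketch: `WithFallback` is an assumed method name used only to illustrate the idea, not a confirmed Spectra API.

```csharp
// Hypothetical sketch: prefer OpenRouter, fall back to a local Ollama model.
// WithFallback is an assumed name for illustration, not confirmed API.
var workflow = WorkflowBuilder.Create("resilient-hello")
    .AddAgent("assistant", "openrouter", "openai/gpt-4o-mini", a => a
        .WithSystemPrompt("You are a helpful assistant.")
        .WithFallback("ollama", "llama3"))  // assumed API
    .AddAgentNode("greet", "assistant", n => n
        .WithUserPrompt("Say hello to {{inputs.name}}."))
    .Build();
```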
## Get running in 60 seconds

```shell
dotnet new console -n MyWorkflow && cd MyWorkflow
dotnet add package Spectra
dotnet add package Spectra.Extensions
```
```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Spectra.Contracts.Execution;
using Spectra.Contracts.State;
using Spectra.Registration;
using Spectra.Workflow;

var host = Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        services.AddSpectra(spectra =>
        {
            spectra.AddOpenRouter(c =>
            {
                c.ApiKey = Environment.GetEnvironmentVariable("OPENROUTER_API_KEY")!;
                c.Model = "openai/gpt-4o-mini";
            });
            spectra.AddConsoleEvents();
        });
    })
    .Build();

var workflow = WorkflowBuilder.Create("hello")
    .AddAgent("assistant", "openrouter", "openai/gpt-4o-mini", a => a
        .WithSystemPrompt("You are a friendly assistant."))
    .AddAgentNode("greet", "assistant", n => n
        .WithUserPrompt("Say hello to {{inputs.name}} in a creative way.")
        .WithMaxIterations(1))
    .Build();

var runner = host.Services.GetRequiredService<IWorkflowRunner>();
var state = new WorkflowState();
state.Inputs["name"] = "World";

var result = await runner.RunAsync(workflow, state);

var output = (IDictionary<string, object?>)result.Context["greet"];
Console.WriteLine(output["response"]);
```
**Using a different provider?** Replace `AddOpenRouter(...)` with `AddOpenAI(...)`, `AddAnthropic(...)`, `AddGemini(...)`, or `AddOllama(...)`. The workflow stays the same.
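For example, swapping to OpenAI only touches the registration block. This sketch mirrors the OpenRouter registration above; the `ApiKey`/`Model` option names on `AddOpenAI` are assumed to match.

```csharp
services.AddSpectra(spectra =>
{
    // Assumption: AddOpenAI takes the same ApiKey/Model options as AddOpenRouter.
    spectra.AddOpenAI(c =>
    {
        c.ApiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!;
        c.Model = "gpt-4o-mini";
    });
    spectra.AddConsoleEvents();
});
```

The workflow definition, including the agent's node wiring, is untouched by the swap.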
## Chain steps together

The real power is connecting steps. Each node writes its output to `Context` under its id, and later nodes reference it in prompt templates with expressions like `{{nodes.greet.output}}`.
```csharp
var workflow = WorkflowBuilder.Create("greet-and-translate")
    .AddAgent("assistant", "openrouter", "openai/gpt-4o-mini", a => a
        .WithSystemPrompt("You are a helpful assistant."))
    .AddAgentNode("greet", "assistant", n => n
        .WithUserPrompt("Say hello to {{inputs.name}}.")
        .WithMaxIterations(1))
    .AddAgentNode("translate", "assistant", n => n
        .WithUserPrompt("Translate to French: {{nodes.greet.output}}")
        .WithMaxIterations(1))
    .AddEdge("greet", "translate")
    .Build();
```
The same workflow, expressed as JSON:

```json
{
  "id": "greet-and-translate",
  "nodes": [
    {
      "id": "greet",
      "stepType": "agent",
      "agentId": "assistant",
      "parameters": {
        "userPrompt": "Say hello to {{inputs.name}}.",
        "maxIterations": 1
      }
    },
    {
      "id": "translate",
      "stepType": "agent",
      "agentId": "assistant",
      "parameters": {
        "userPrompt": "Translate to French: {{nodes.greet.output}}",
        "maxIterations": 1
      }
    }
  ],
  "edges": [
    { "source": "greet", "target": "translate" }
  ],
  "agents": [
    {
      "id": "assistant",
      "provider": "openrouter",
      "model": "openai/gpt-4o-mini",
      "systemPrompt": "You are a helpful assistant."
    }
  ]
}
```
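Reading the translated output follows the same pattern as the first example: each node's results land in `result.Context` under that node's id.

```csharp
var result = await runner.RunAsync(workflow, state);

// "translate" is the id of the last node; its output dictionary
// uses the same "response" key as the first example.
var translated = (IDictionary<string, object?>)result.Context["translate"];
Console.WriteLine(translated["response"]);
```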
Nodes do work. Edges define flow. State moves through the graph. That's the whole model.
## What people build with it

- **Agent pipelines:** autonomous tool-using agents with iteration limits and cost tracking.
- **Retrieval workflows:** search → reason → validate with conditional branching.
- **Multi-agent systems:** supervisor, handoff, and delegation patterns.
- **Human-in-the-loop:** interrupt any step for approval, then resume from a checkpoint.
## Built on .NET

Spectra runs on the .NET runtime: parallel branches use real OS threads, state is compile-time typed, and `CancellationToken` flows through every step. `AddSpectra(...)` works like any other .NET service registration, and built-in OpenTelemetry tracing exports to your existing observability stack.
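Because `CancellationToken` flows through every step, a timeout or user abort can be attached at the call site. A minimal sketch, assuming `RunAsync` has an overload that accepts a token (that overload is an assumption; the snippets above only show the two-argument form):

```csharp
// Cancel the whole run if it exceeds 30 seconds.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

// Assumed overload: RunAsync(workflow, state, cancellationToken).
// Cancellation propagates into every running node.
var result = await runner.RunAsync(workflow, state, cts.Token);
```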
MIT licensed. Everything ships free — the engine, all built-in steps, every provider, checkpointing, streaming, multi-agent, and MCP support.
