What is Simulation?

Simulation is the process where ArkSim runs your pre-built scenarios as live conversations against your agent. Each scenario acts as a simulated user with a defined persona, goal, and prior knowledge who drives a multi-turn interaction with the agent until the user’s goal is achieved or the turn limit is reached. The output is a set of conversation transcripts you can inspect directly or pass into Evaluation.
(Diagram: ArkSim simulation workflow)

Inputs

Before running a simulation, you need three things in place:

| Input | Description |
| --- | --- |
| Scenarios | A scenarios.json file defining user attributes, goals, knowledge, and scenario metadata. Learn how to write scenarios → |
| Agent config | Inline agent_config in your config YAML. Defines how ArkSim connects to your agent. See Agent configuration below. |
| Config file | A config.yaml controlling simulation parameters such as number of conversations, max turns, workers, and model settings. |

Configuration

# Agent configuration
agent_config:
  agent_type: chat_completions
  agent_name: my-agent
  api_config:
    endpoint: https://api.openai.com/v1/chat/completions
    headers:
      Content-Type: application/json
      Authorization: "Bearer ${OPENAI_API_KEY}"
    body:
      model: gpt-5.1
      messages:
        - role: system
          content: "You are a helpful assistant."

# Scenario input
scenario_file_path: ./scenarios.json

# Simulation parameters
num_conversations_per_scenario: 5    # Number of conversations per scenario
max_turns: 5                         # Maximum turns per conversation
num_workers: 50                      # Parallel workers

# Output
output_file_path: ./simulation.json

# Model configuration (used for the simulated user, not your agent)
model: gpt-5.1
provider: openai

# Optional: custom Jinja2 template for the simulated user prompt
# simulated_user_prompt_template: null
The default num_workers is 50. Set num_workers: auto to automatically parallelize across all conversations, or specify a fixed number to control load on your agent endpoint.

Advanced: Custom simulated user prompt

By default, ArkSim uses a built-in system prompt to drive the simulated user. You can override it by setting simulated_user_prompt_template in your config to a Jinja2 template string. The template is rendered once per conversation with the following variables, all drawn from the scenario file:
  • scenario.agent_context: A description of the agent being simulated against (e.g., its role, domain, or business purpose).
  • scenario.goal: The specific task or objective the simulated user is trying to accomplish during the conversation (e.g., “file an insurance claim”).
  • scenario.knowledge: Reference content (e.g., product details, policy documents) that the simulated user can draw on when answering or asking questions.
  • scenario.user_profile: A second-person natural language persona description for the simulated user, used as-is in the prompt.
Use them in your template with {{ scenario.goal }}, {{ scenario.user_profile }}, and so on. If you omit simulated_user_prompt_template, the default prompt is used.
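As an illustration, a custom template could be defined directly in the config. The wording below is a hypothetical sketch, not ArkSim's built-in default prompt; only the four scenario.* variables are documented:

```yaml
# Sketch of a custom simulated-user prompt (illustrative wording).
simulated_user_prompt_template: |
  {{ scenario.user_profile }}

  You are talking to the following agent: {{ scenario.agent_context }}

  Your goal in this conversation: {{ scenario.goal }}

  You may draw on this background knowledge when responding:
  {{ scenario.knowledge }}
```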

Agent configuration

Configure the agent connection by defining agent_config inline in your config YAML.

Configuration fields

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| agent_name | string | Yes | Unique identifier for your agent (e.g. lowercase, no spaces). |
| agent_type | string | Yes | One of chat_completions, a2a, or custom. See Connection types. |
| api_config | object | Conditional | Required for chat_completions and a2a. See Connection types. |
| custom_config | object | Conditional | Required for custom. See Connection types. |

Connection types

ArkSim supports three ways to connect your agent:
| Type | When to use |
| --- | --- |
| Chat Completions | Any agent that accepts OpenAI-compatible requests (OpenAI or a custom wrapper) |
| A2A | Agents exposed via the Agent-to-Agent protocol |
| Custom | Any Python agent loaded directly as a class, no HTTP server required |
Type: chat_completions

Connects to any OpenAI-compatible chat completions endpoint.

Required fields:
  • endpoint: the API URL.
  • headers: e.g. Content-Type; Authorization is optional.
  • body: must include a messages array.

Placeholders: ${ENV_VAR} is supported in header values for secrets.

Example (YAML):
agent_config:
  agent_type: chat_completions
  agent_name: my-agent
  api_config:
    endpoint: https://api.openai.com/v1/chat/completions
    headers:
      Content-Type: application/json
      Authorization: "Bearer ${OPENAI_API_KEY}"
    body:
      model: gpt-5.1
      messages:
        - role: system
          content: "You are a helpful assistant."
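The custom type loads a Python agent class directly, with no HTTP server in between. The exact class contract ArkSim expects via custom_config is not shown on this page, so the sketch below is hypothetical: the class name, method name, and signature are all assumptions to illustrate the idea of an in-process agent.

```python
# Hypothetical custom agent class. The respond() method name and its
# signature are assumptions; consult ArkSim's custom agent interface
# for the actual contract referenced by custom_config.

class EchoAgent:
    """Toy in-process agent that replies to the last user message."""

    def respond(self, messages: list[dict]) -> str:
        # messages is assumed to be a list of {"role": ..., "content": ...}
        # dicts, in the same shape as a chat completions messages array.
        last_user = next(
            (m["content"] for m in reversed(messages) if m["role"] == "user"),
            "",
        )
        return f"You said: {last_user}"
```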

Environment variable support

Both HTTP-based types (chat_completions and a2a) support ${ENV_VAR} substitution in header values (and in the endpoint URL for chat completions, where applicable). At runtime each variable is replaced with its value; if it is unset, it becomes an empty string. You can mix static text and variables (e.g. "Bearer ${API_KEY}").
Security: Keep credentials in environment variables and out of committed config files.
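The substitution semantics described above can be sketched in a few lines of Python. This is an illustration of the behavior (including the unset-becomes-empty-string rule), not ArkSim's implementation:

```python
import os
import re

def substitute_env(value: str) -> str:
    """Replace each ${VAR} with its environment value, or '' if unset."""
    return re.sub(
        r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["API_KEY"] = "sk-test"
print(substitute_env("Bearer ${API_KEY}"))  # static text and variable mixed
```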

Running a Simulation

1. Set up your config file

Create a config.yaml file with your agent config and simulation parameters:
agent_config:
  agent_type: chat_completions
  agent_name: my-agent
  api_config:
    endpoint: https://api.openai.com/v1/chat/completions
    headers:
      Content-Type: application/json
      Authorization: "Bearer ${OPENAI_API_KEY}"
    body:
      model: gpt-5.1
      messages:
        - role: system
          content: "You are a helpful assistant."

scenario_file_path: ./scenarios.json
num_conversations_per_scenario: 5
max_turns: 5
num_workers: 50
output_file_path: ./simulation.json
model: gpt-5.1
provider: openai
2. Run the simulation

Run a simulation with the default configuration:
arksim simulate config.yaml
You can override config values with CLI flags:
arksim simulate config.yaml --max-turns 10 --num-workers 4
3. Inspect your output

Simulation writes to the path set by output_file_path (default ./simulation.json). The file contains the full transcript of every conversation, ready to inspect or pass into Evaluation.

Output

Simulation writes one file: the path set by output_file_path (default ./simulation.json). It contains the full transcript of every conversation: message history, scenario ID, simulated user prompt (template and variables), and all agent and simulated user messages. For the full structure and field order, see the Schema Reference.

Example output

{
  "schema_version": "v1",
  "simulator_version": "v1",
  "simulation_id": "a3f2c1d4-8e7b-4f9a-b6c2-1d0e5f3a8b7c",
  "generated_at": "2025-04-10T14:32:00Z",
  "conversations": [
    {
      "conversation_id": "00de685e-f76b-4a5f-a6e5-217cad777316",
      "scenario_id": "8f4c2a91-3b7e-4d5f-a9c2-6e1b4d9f2037",
      "conversation_history": [
        {
          "turn_id": 0,
          "message_id": "3c9a7f12-6b4e-4d8a-b2f1-9e5c0a7d41e6",
          "role": "simulated_user",
          "content": "So this whole identity theft thing is stressing me out a bit...."
        },
        {
          "turn_id": 0,
          "message_id": "b7e2c9a4-1f6d-4c83-9a5b-2d8e7f0c4a91",
          "role": "assistant",
          "content": "I understand your concern about identity theft, but rest assured..."
        }
      ],
      "simulated_user_prompt": {
        "simulated_user_prompt_template": "You are a customer interacting with an agent through multiple turns...",
        "variables": {
          "scenario.agent_context": "XYZ Bank Insurance is a Canadian provider of...",
          "scenario.user_profile": "You are Priya Sharma, a 32-year-old from Toronto, ON...",
          "scenario.goal": "You want to find out what documents are needed...",
          "scenario.knowledge": ["Typical timelines: claim setup is usually same day..."]
        }
      }
    }
  ]
}
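Because the output is plain JSON, transcripts are easy to post-process. A minimal sketch that walks the structure shown above and prints each message (field names are taken from the example; see the Schema Reference for the full structure):

```python
import json

# A trimmed-down record with the same shape as the example output above.
raw = """
{
  "simulation_id": "a3f2c1d4-8e7b-4f9a-b6c2-1d0e5f3a8b7c",
  "conversations": [
    {
      "conversation_id": "00de685e-f76b-4a5f-a6e5-217cad777316",
      "scenario_id": "8f4c2a91-3b7e-4d5f-a9c2-6e1b4d9f2037",
      "conversation_history": [
        {"turn_id": 0, "role": "simulated_user", "content": "So this whole identity theft thing..."},
        {"turn_id": 0, "role": "assistant", "content": "I understand your concern..."}
      ]
    }
  ]
}
"""

data = json.loads(raw)  # in practice: json.load(open("simulation.json"))
for conv in data["conversations"]:
    print(f"Conversation {conv['conversation_id']} (scenario {conv['scenario_id']})")
    for msg in conv["conversation_history"]:
        print(f"  [{msg['role']}] {msg['content']}")
```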

Next Steps

Once your conversations are simulated, you’re ready to evaluate how well your agent performed.

Evaluation →

Score your agent’s responses against the simulated user’s goals and knowledge.