Simulation is the process by which ArkSim runs your pre-built scenarios as live conversations against your agent. Each scenario acts as a simulated user, with a defined persona, goal, and prior knowledge, who drives a multi-turn interaction with the agent until the user's goal is achieved or the turn limit is reached. The output is a set of conversation transcripts you can inspect directly or pass into Evaluation.
```yaml
# Agent configuration
agent_config:
  agent_type: chat_completions
  agent_name: my-agent
  api_config:
    endpoint: https://api.openai.com/v1/chat/completions
    headers:
      Content-Type: application/json
      Authorization: "Bearer ${OPENAI_API_KEY}"
    body:
      model: gpt-5.1
      messages:
        - role: system
          content: "You are a helpful assistant."

# Scenario input
scenario_file_path: ./scenarios.json

# Simulation parameters
num_conversations_per_scenario: 5  # Number of conversations per scenario
max_turns: 5                       # Maximum turns per conversation
num_workers: 50                    # Parallel workers

# Output
output_file_path: ./simulation.json

# Model configuration (used for the simulated user, not your agent)
model: gpt-5.1
provider: openai

# Optional: custom Jinja2 template for the simulated user prompt
# simulated_user_prompt_template: null
```
The default `num_workers` is 50. Set `num_workers: auto` to automatically parallelize across all conversations, or specify a fixed number to control load on your agent endpoint.
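In the config this looks like the following (the values are illustrative):

```yaml
# Fixed parallelism, to limit load on the agent endpoint:
num_workers: 10

# Or parallelize across all conversations at once:
# num_workers: auto
```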
By default, ArkSim uses a built-in system prompt to drive the simulated user. You can override it by setting `simulated_user_prompt_template` in your config to a Jinja2 template string. The template is rendered per conversation with these variables:

From the scenario file:
- `scenario.agent_context`: A description of the agent being simulated against (e.g., its role, domain, or business purpose).
- `scenario.goal`: The specific task or objective the simulated user is trying to accomplish during the conversation (e.g., "file an insurance claim").
- `scenario.knowledge`: Reference content (e.g., product details, policy documents) that the simulated user can draw on when answering or asking questions.
- `scenario.user_profile`: A second-person natural-language persona description for the simulated user, used as-is in the prompt.
Use them in your template with `{{ scenario.goal }}`, `{{ scenario.user_profile }}`, and so on. If you omit `simulated_user_prompt_template`, the default prompt is used.
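For instance, a minimal override might look like this in the config (the template text itself is illustrative, not a recommended prompt):

```yaml
simulated_user_prompt_template: |
  {{ scenario.user_profile }}
  Your goal in this conversation: {{ scenario.goal }}
  Background on the agent you are talking to: {{ scenario.agent_context }}
  Keep replies brief and stay in character as the user.
```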
The built-in template used when simulated_user_prompt_template is not set:
```jinja
You are a user interacting with an agent through multiple turns.

The agent is supplied by the following conversation context:
{{ scenario.agent_context }}

Your profile is:
{{ scenario.user_profile }}

You have the following goal when interacting with this agent:
{{ scenario.goal }}

{% if scenario.knowledge and scenario.knowledge|length == 1 %}
Here is the content that you might be interested in and might have questions about:
{{ scenario.knowledge[0].content }}
{% elif scenario.knowledge and scenario.knowledge|length > 1 %}
You will receive reference knowledge in a user message immediately before each of your replies; use it when relevant to achieve your goal.
{% endif %}

Rules:
- Do not give away all the instruction at once. Only provide the information necessary for the current step.
- Do not hallucinate information that is not provided in the instruction.
- If the instruction goal is satisfied, generate '###STOP###' as a standalone message without anything else.
- Do not repeat the exact instruction in the conversation.
- Avoid using bullet points or lists.
- Keep responses brief and under 50 words.
- You are the user and the agent is the assistant. Do not flip the roles.
{% if scenario.knowledge and scenario.knowledge|length > 1 %}
- Ask only one question per turn.
{% endif %}
```
Three agent types are supported: Chat Completions, A2A, and Custom.
Type: `chat_completions`

Connects to any OpenAI-compatible chat completions endpoint.

Required fields: `endpoint` (API URL), `headers` (e.g. `Content-Type`; `Authorization` optional), and `body` (must include a `messages` array).

Placeholders: `${ENV_VAR}` is supported in header values for secrets.

Example (YAML):
```yaml
agent_config:
  agent_type: chat_completions
  agent_name: my-agent
  api_config:
    endpoint: https://api.openai.com/v1/chat/completions
    headers:
      Content-Type: application/json
      Authorization: "Bearer ${OPENAI_API_KEY}"
    body:
      model: gpt-5.1
      messages:
        - role: system
          content: "You are a helpful assistant."
```
Type: `a2a`

For agents that speak the Agent-to-Agent (A2A) protocol.

Required fields: `endpoint` (the A2A agent server URL). Optional: `headers` (e.g. for auth).

Example (YAML):
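A minimal config sketch, mirroring the field nesting of the chat completions example (the endpoint URL and agent name are placeholders):

```yaml
agent_config:
  agent_type: a2a
  agent_name: my-a2a-agent
  api_config:
    endpoint: https://my-agent.example.com  # placeholder A2A server URL
    headers:
      Authorization: "Bearer ${A2A_API_KEY}"
```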
Type: `custom`

Loads your agent directly as a Python class, with no HTTP server needed. Your agent must subclass `BaseAgent` and implement `get_chat_id()` and `execute()`.

Required fields: `module_path` (path to a `.py` file containing your `BaseAgent` subclass). Optional: `class_name` (needed if the file contains multiple `BaseAgent` subclasses).

Example agent (`my_agent.py`):
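A minimal sketch of such an agent. The real `BaseAgent` ships with ArkSim; its import path and the exact `execute()` signature are assumptions here, so a stand-in base class is defined to keep the sketch self-contained:

```python
# my_agent.py -- illustrative sketch only. Replace the stand-in BaseAgent
# below with the real one from the ArkSim package; the execute() signature
# shown here is an assumption, not confirmed by the docs.
import uuid


class BaseAgent:  # stand-in for ArkSim's BaseAgent (illustration only)
    def get_chat_id(self) -> str: ...
    def execute(self, message: str) -> str: ...


class EchoAgent(BaseAgent):
    """Trivial agent that echoes the simulated user's message back."""

    def __init__(self) -> None:
        # One chat ID per agent instance, i.e. per conversation.
        self._chat_id = str(uuid.uuid4())

    def get_chat_id(self) -> str:
        # Identifies the conversation this agent instance belongs to.
        return self._chat_id

    def execute(self, message: str) -> str:
        # Replace this with a call into your real agent logic.
        return f"You said: {message}"
```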
Both HTTP-based types (`chat_completions` and `a2a`) support `${ENV_VAR}` substitution in header values (and, for chat completions, in the endpoint URL where applicable). At runtime the variable is replaced with its value; if it is unset, it becomes an empty string. You can mix static text and variables (e.g. `"Bearer ${API_KEY}"`).
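The substitution semantics described above can be sketched in a few lines (this is an illustration of the behavior, not ArkSim's actual implementation):

```python
# Each ${NAME} is replaced with the environment variable's value,
# or an empty string when the variable is unset.
import os
import re


def substitute_env(value: str) -> str:
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )


os.environ["API_KEY"] = "sk-123"
print(substitute_env("Bearer ${API_KEY}"))         # -> Bearer sk-123
print(substitute_env("Bearer ${SOME_UNSET_VAR}"))  # unset -> "Bearer "
```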
Security: Keep credentials in environment variables and out of committed config files.
Simulation writes one file, at the path set by `output_file_path` (default `./simulation.json`). It contains the full transcript of every conversation: message history, scenario ID, simulated user prompt (template and variables), and all agent and simulated user messages. For the full structure and field order, see the Schema Reference.
```json
{
  "schema_version": "v1",
  "simulator_version": "v1",
  "simulation_id": "a3f2c1d4-8e7b-4f9a-b6c2-1d0e5f3a8b7c",
  "generated_at": "2025-04-10T14:32:00Z",
  "conversations": [
    {
      "conversation_id": "00de685e-f76b-4a5f-a6e5-217cad777316",
      "scenario_id": "8f4c2a91-3b7e-4d5f-a9c2-6e1b4d9f2037",
      "conversation_history": [
        {
          "turn_id": 0,
          "message_id": "3c9a7f12-6b4e-4d8a-b2f1-9e5c0a7d41e6",
          "role": "simulated_user",
          "content": "So this whole identity theft thing is stressing me out a bit...."
        },
        {
          "turn_id": 0,
          "message_id": "b7e2c9a4-1f6d-4c83-9a5b-2d8e7f0c4a91",
          "role": "assistant",
          "content": "I understand your concern about identity theft, but rest assured..."
        }
      ],
      "simulated_user_prompt": {
        "simulated_user_prompt_template": "You are a customer interacting with an agent through multiple turns...",
        "variables": {
          "scenario.agent_context": "XYZ Bank Insurance is a Canadian provider of...",
          "scenario.user_profile": "You are Priya Sharma, a 32-year-old from Toronto, ON...",
          "scenario.goal": "You want to find out what documents are needed...",
          "scenario.knowledge": ["Typical timelines: claim setup is usually same day..."]
        }
      }
    }
  ]
}
```
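Once a run completes, the output file is plain JSON and easy to inspect programmatically. A minimal sketch that walks the schema shown above (the helper name `summarize` is ours, not part of ArkSim):

```python
import json


def summarize(path: str) -> list[tuple[str, str]]:
    """Return (role, content) pairs for every message in every conversation."""
    with open(path) as f:
        sim = json.load(f)
    return [
        (msg["role"], msg["content"])
        for conv in sim["conversations"]
        for msg in conv["conversation_history"]
    ]
```

For example, `summarize("./simulation.json")` yields the alternating `simulated_user` and `assistant` messages across all conversations, which is often enough for a quick eyeball check before running Evaluation.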