Test Scenarios
Define named test cases with fixed inputs and expected outputs. Run them before publishing to catch regressions before they reach live.
What is a Scenario?
A scenario is a named test case for a procedure. It pairs:
- A set of fixed input values — the inputs the procedure receives
- Expected outputs — what the procedure should return when it runs with those inputs
When you run scenarios, Rapidfolio executes the procedure with the scenario's inputs (in sandbox) and compares the actual outputs to your expected outputs. Scenarios pass when outputs match; they fail when they diverge.
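The pass/fail comparison can be sketched in a few lines of Python. This is illustrative only: the field names and dictionary shape below are assumptions, not Rapidfolio's actual data model.

```python
# A hypothetical scenario record: field names and values are illustrative,
# not Rapidfolio's actual storage format.
scenario = {
    "name": "High-value transfer — requires review",
    "inputs": {"customerId": "cus_001", "amount": 25000},
    "expected_outputs": {"status": "pending_review"},
}

def scenario_passes(expected: dict, actual: dict) -> bool:
    # A scenario passes when every expected output field matches the
    # corresponding field in the actual outputs.
    return all(actual.get(key) == value for key, value in expected.items())
```

Under this sketch, a sandbox run that returned `{"status": "pending_review", "reviewId": "rev_42"}` would pass, because every expected field matches, while a run that returned `{"status": "completed"}` would fail.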
Scenarios are the mechanism for validating a specific version of your procedure before publishing it. Run all scenarios for that version before you publish; if any fail, you've found a regression before it reaches production.
Creating Scenarios
From the Editor
- Open your procedure in the editor
- Click Scenarios in the toolbar (or the left sidebar)
- Click New Scenario
- Give the scenario a descriptive name (e.g., "High-value transfer — requires review", "KYC approved — onboarding completes")
- Fill in the input values — these are the procedure inputs that will be passed when the scenario runs
- Configure expected outputs — what the procedure should return
- Optionally configure expected step behaviors — which nodes you expect to be reached, which branches you expect to take
- Click Save
Naming Conventions
Name scenarios to describe the business case they represent, not the data values. Good scenario names make the test suite self-documenting:
Good: "New customer — amount below review threshold — completes automatically"
Good: "Existing customer — transfer exceeds limit — triggers review gate"
Avoid: "test1"
Avoid: "scenario with amount 9999"
AI-Generated Scenarios
Rapidfolio can suggest scenarios based on your procedure graph. The suggestion engine analyzes:
- The branches in your Condition nodes, generating a scenario for each path
- The thresholds in your expressions, generating scenarios for boundary values
- Your Human Review nodes, generating scenarios for both the approval and rejection paths
To generate suggestions:
- Open Scenarios
- Click Suggest scenarios
- Review the suggested scenarios — each one has a generated name, inputs, and expected outputs
- Accept, edit, or discard each suggestion
AI-generated scenarios are a starting point, not a complete test suite. Review them for business accuracy — the generator understands the graph structure but doesn't know your domain rules.
Running Scenarios
Before Publishing
Running scenarios before publishing is strongly recommended and treated as a promotion gate for production workflows.
- Open your procedure
- Click Scenarios
- Click Run all scenarios
Rapidfolio executes each scenario in sandbox with the configured inputs. Results appear in the scenarios panel as they complete.
Running Individual Scenarios
Click Run on any individual scenario to execute just that one. Use this when iterating on a specific part of the procedure and you want fast feedback on a single path.
Scenario Execution Environment
Scenarios always run in sandbox. They use your sandbox connections and sandbox integration credentials. Scenarios never run in live — they are a testing tool.
Integration responses during scenario runs come from your sandbox integrations (e.g., Stripe test mode, Plaid sandbox). If you need predictable responses regardless of external state, configure mock responses on the relevant Tool Call nodes in the editor.
Scenario Results
After a run, each scenario shows a pass or fail status:
Pass
All expected outputs matched the actual outputs. The scenario is green.
Fail
One or more expected outputs did not match actual outputs. The scenario is red and shows a diff view:
Expected:

```json
{
  "status": "completed",
  "transferId": "txn_123"
}
```

Actual:

```json
{
  "status": "failed",
  "error": "Insufficient funds"
}
```
The diff view highlights which fields diverged. Click into the failed scenario to see the full step log — this tells you exactly which node produced an unexpected result and what it received.
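The field-level comparison behind the diff view can be approximated in Python. This is a sketch of the idea, not Rapidfolio's implementation:

```python
def diff_outputs(expected: dict, actual: dict) -> dict:
    # Collect every field that appears on either side and keep only
    # the ones whose values diverge.
    keys = expected.keys() | actual.keys()
    return {
        key: {"expected": expected.get(key), "actual": actual.get(key)}
        for key in keys
        if expected.get(key) != actual.get(key)
    }

# The failing scenario above diverges on "status", and each side has a
# field the other lacks ("transferId" vs "error"), so all three fields
# appear in the diff.
diff = diff_outputs(
    {"status": "completed", "transferId": "txn_123"},
    {"status": "failed", "error": "Insufficient funds"},
)
```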
Common Failure Causes
| Cause | How to diagnose |
|---|---|
| Input mapping changed | Check the step log for nodes that received unexpected input values |
| Condition branch changed | Check which branch was taken vs which was expected |
| Integration mock response changed | Check the Tool Call node's sandbox response |
| Behavior intentionally changed | Update the expected outputs to match the new behavior, then re-run |
Using Scenarios as Regression Tests
The most valuable use of scenarios is as a regression safety net during ongoing procedure development.
The workflow:
- When you first build a procedure, create scenarios for each important path
- Before every publish, run all scenarios
- If any fail, investigate before publishing
- If you intentionally changed behavior, update the expected outputs for the affected scenarios
- Only publish when all scenarios pass (or when you have consciously accepted a specific change)
This creates a test suite that grows with your procedure. As you add new features or fix bugs, add new scenarios to cover the new behavior. Over time, scenarios document what the procedure is supposed to do in concrete, executable terms.
Scenarios and Version History
Scenarios are attached to a specific procedure version. Each version has its own set of scenarios that reflect the inputs, outputs, and expected behavior of that version's graph.
When you create a new version (e.g., by duplicating or branching from an existing one), you start with a clean set of scenarios. Scenarios from a previous version are not automatically carried over — this is intentional, because a new version may have different inputs, outputs, or node structure that would make the old scenarios invalid.
When you view a past version in the dashboard, you can see the scenario results from when that version was tested, giving you a historical record of what was validated before each publish.
Exporting Scenario Results
Scenario run results are available via the API:
```http
GET https://app.rapid.io/api/v1/procedures/:id/scenarios
Authorization: Bearer <api_key>
```
This returns the list of scenarios with their most recent run results. Use this to integrate scenario results into your CI/CD pipeline or deployment checks — for example, blocking a deployment if the latest scenario run has failures.
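A minimal CI gate against this endpoint might look like the following Python sketch. The response shape is an assumption (in particular, the `lastRunStatus` field name and `"passed"` value are placeholders); check the actual API response for the real field names before relying on this.

```python
import json
import urllib.request

def fetch_scenarios(procedure_id: str, api_key: str) -> list:
    # GET the scenario list (with most recent run results) for a procedure.
    url = f"https://app.rapid.io/api/v1/procedures/{procedure_id}/scenarios"
    request = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_key}"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

def has_failures(scenarios: list) -> bool:
    # Treat anything other than a passing latest run as a failure.
    # "lastRunStatus" is an assumed field name, not confirmed by these docs.
    return any(s.get("lastRunStatus") != "passed" for s in scenarios)

# In a CI step (procedure ID and env var name are placeholders):
#   scenarios = fetch_scenarios("proc_123", os.environ["RAPIDFOLIO_API_KEY"])
#   if has_failures(scenarios):
#       raise SystemExit("Scenario failures detected: blocking deployment")
```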