Determinism & Trust
How Rapidfolio guarantees that your financial automations are consistent, traceable, and auditable — every time.
Why Determinism Matters in Finance
Automation that touches money needs to be predictable. You need to know that the procedure you tested in sandbox will behave the same way in production. You need to know that a charge that ran at 9am would produce the same result if run again with the same inputs. You need a complete record of what happened and why — one you can hand to an auditor.
Rapidfolio is built around a deterministic execution model. This page explains exactly what that means and the mechanisms that enforce it.
1. Deterministic Execution Model
Given the same inputs and the same integration responses, Rapidfolio produces the same outputs. Every time.
The AI agent's role is strictly graph traversal and data mapping — it follows the procedure you defined, resolves input references, and calls the nodes in order. It does not invent steps. It cannot skip nodes, add new tool calls, or take actions outside the procedure graph.
This is a hard architectural constraint, not a best-effort guideline. The procedure graph is the complete specification of what will happen. The agent cannot deviate from it.
What this means in practice:
- If you test a procedure in sandbox with a given input and it produces output X, running it in live with the same input and the same integration responses will also produce X
- A reviewer inspecting a run record can understand exactly what happened by reading the procedure graph and the step log — there are no hidden decisions
- Deploying a new version of a procedure requires publishing it explicitly; existing triggers and API integrations continue to use the pinned version until you update them
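The property above can be made concrete with a minimal sketch (hypothetical code, not the Rapidfolio runtime): if a procedure is a fixed list of nodes and integration responses are held constant, execution is a pure function of its inputs.

```typescript
// Hypothetical minimal executor: a procedure is a fixed, ordered list of
// nodes. The executor only traverses the graph; it never adds, skips, or
// reorders nodes.
type Node = { id: string; run: (ctx: Record<string, number>) => number };

function execute(
  nodes: Node[],
  inputs: Record<string, number>,
): Record<string, number> {
  const ctx: Record<string, number> = { ...inputs };
  for (const node of nodes) {
    ctx[node.id] = node.run(ctx);
  }
  return ctx;
}

// A toy two-node procedure: compute a fee, then a total.
const procedure: Node[] = [
  { id: "fee", run: (ctx) => ctx.amount * 0.029 + 30 },
  { id: "total", run: (ctx) => ctx.amount + ctx.fee },
];

const first = execute(procedure, { amount: 10000 });
const second = execute(procedure, { amount: 10000 });
// Same inputs, same responses: identical outputs, every run.
```

Because nothing outside the inputs and node definitions influences the result, re-running the procedure is guaranteed to reproduce it.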
2. Calculations Are Code, Not AI
When your procedure includes financial calculations — computing a fee, applying a threshold, summing line items, calculating a percentage — those calculations are expressed as deterministic code expressions evaluated by the Rapidfolio runtime.
The LLM does not perform arithmetic. Condition nodes evaluate expressions like:
inputs.amount * 0.029 + 30
steps.balanceCheck.result.available >= inputs.transferAmount
steps.kycScore.result.score >= 700 && inputs.amount <= 50000
These are evaluated by the runtime as exact code expressions, using standard floating-point arithmetic (or integer arithmetic where appropriate). The result is always the same for the same inputs. There is no probability, no estimation, no hallucination.
This is critical for financial operations. Fee calculations, eligibility thresholds, credit decisions — these must be exact. Rapidfolio enforces this by keeping calculations in code and out of the language model entirely.
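Written out as plain code, the expressions above behave like ordinary functions (a sketch; Rapidfolio evaluates them in its own runtime, but the semantics are the same exact arithmetic and comparisons):

```typescript
// The three condition expressions from above, as plain deterministic code.
// No LLM involvement: the same inputs always produce the same result.
const fee = (amount: number): number => amount * 0.029 + 30;

const sufficientBalance = (available: number, transferAmount: number): boolean =>
  available >= transferAmount;

const eligible = (score: number, amount: number): boolean =>
  score >= 700 && amount <= 50000;

// A $500.00 charge (50000 cents) yields the same fee on every evaluation,
// subject only to standard floating-point arithmetic.
const chargeFee = fee(50000);
```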
3. Tool Call Traceability
Every tool call — every Stripe charge, every Plaid balance fetch, every SendGrid email, every Slack message — is logged on the run record with:
- The exact parameters sent to the integration
- The exact response received
- The timestamp of the call
- The integration and action name
- The connection used (sandbox or live)
This logging is automatic and cannot be disabled. It is part of the run record for the lifetime of the run.
Example step log entry:
{
  "nodeId": "charge-customer",
  "nodeType": "tool_call",
  "integration": "stripe",
  "action": "createPaymentIntent",
  "startedAt": "2026-02-15T09:41:02.311Z",
  "completedAt": "2026-02-15T09:41:02.891Z",
  "input": {
    "amount": 50000,
    "currency": "usd",
    "customerId": "cust_abc123",
    "description": "Invoice #1042"
  },
  "output": {
    "id": "pi_3OxKvL2eZvKYlo2C",
    "status": "succeeded",
    "amount": 50000
  }
}
You can retrieve the full step log for any run via the dashboard or the API. This gives you a complete, tamper-evident audit trail of every action your automation took.
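A typed view of the step log entry shown above can make audit tooling straightforward. This sketch uses field names taken from the example; the exact shape returned by the Rapidfolio API may differ.

```typescript
// Hypothetical typed view of a step log entry, with a helper that pulls
// every tool call out of a run's log for audit review.
interface StepLogEntry {
  nodeId: string;
  nodeType: string;
  integration?: string;
  action?: string;
  startedAt: string;
  completedAt: string;
  input?: Record<string, unknown>;
  output?: Record<string, unknown>;
}

function toolCalls(log: StepLogEntry[]): StepLogEntry[] {
  // Only tool_call nodes represent external actions (charges, emails, etc.).
  return log.filter((entry) => entry.nodeType === "tool_call");
}

// Illustrative log: one condition evaluation, one Stripe charge.
const log: StepLogEntry[] = [
  {
    nodeId: "check-balance",
    nodeType: "condition",
    startedAt: "2026-02-15T09:41:01.000Z",
    completedAt: "2026-02-15T09:41:01.050Z",
  },
  {
    nodeId: "charge-customer",
    nodeType: "tool_call",
    integration: "stripe",
    action: "createPaymentIntent",
    startedAt: "2026-02-15T09:41:02.311Z",
    completedAt: "2026-02-15T09:41:02.891Z",
    input: { amount: 50000, currency: "usd" },
    output: { status: "succeeded" },
  },
];

const externalActions = toolCalls(log);
```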
4. Human Review Gates
For sensitive financial actions — payments, wire transfers, credit decisions, account closures — you can insert a Human Review node into your procedure graph. When execution reaches that node, the run pauses and a notification goes to your team.
The reviewer sees:
- The full run context and inputs
- The data at the review point (e.g., the proposed payment parameters)
- The procedure step that is waiting
They then approve or reject. Their decision is recorded on the run with:
- Who approved — the reviewer's identity
- When — the exact timestamp
- What they saw — the data that was presented at review time
- Any output overrides — if the reviewer modified values before approving
- Rejection reason — if they rejected, the reason they provided
The approval record is permanent and cannot be altered. For compliance-sensitive workflows, this gives you a clear chain of custody: the system proposed an action, a human reviewed it, and a human authorized it.
See Human Review for the full implementation guide.
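The recorded fields listed above can be sketched as a type (illustrative names, not the exact Rapidfolio schema):

```typescript
// Hypothetical shape of an approval record, mirroring the list above.
interface ApprovalRecord {
  approved: boolean;
  reviewerId: string;                        // who approved
  decidedAt: string;                         // when (ISO 8601 timestamp)
  presentedData: Record<string, unknown>;    // what they saw at review time
  outputOverrides?: Record<string, unknown>; // values modified before approving
  rejectionReason?: string;                  // present only on rejection
}

// Example rejection, with the reason captured permanently on the run.
const record: ApprovalRecord = {
  approved: false,
  reviewerId: "user_123",
  decidedAt: "2026-02-15T10:02:00.000Z",
  presentedData: { amount: 50000, currency: "usd" },
  rejectionReason: "Amount exceeds daily limit",
};
```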
5. Test Scenarios
Before publishing a new procedure version to live, you can define scenarios — named test cases, each consisting of:
- A procedure version
- Fixed input data
- Expected outputs
Rapidfolio runs each scenario and compares the actual outputs to your expected outputs. Scenarios pass when the outputs match; they fail when they diverge.
Run scenarios before every publish to catch regressions. Scenarios give you confidence that a change to your procedure didn't break existing behavior — the finance equivalent of a test suite.
Rapidfolio can also suggest scenarios based on your procedure graph, generating test cases that cover common paths and edge cases automatically.
See Scenarios for setup and usage.
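The pass/fail comparison a scenario performs can be sketched as follows (hypothetical shapes; the real scenario runner is configured in the Rapidfolio dashboard):

```typescript
// A scenario: fixed inputs plus expected outputs for a procedure version.
interface Scenario {
  name: string;
  inputs: Record<string, number>;
  expected: Record<string, number>;
}

function runScenario(
  procedure: (inputs: Record<string, number>) => Record<string, number>,
  scenario: Scenario,
): boolean {
  const actual = procedure(scenario.inputs);
  // Pass only when every expected output matches exactly.
  return Object.entries(scenario.expected).every(
    ([key, value]) => actual[key] === value,
  );
}

// Toy procedure under test: flat $30 fee plus total.
const proc = (inputs: Record<string, number>) => ({
  fee: 30,
  total: inputs.amount + 30,
});

const passes = runScenario(proc, {
  name: "standard charge",
  inputs: { amount: 1000 },
  expected: { fee: 30, total: 1030 },
});
```

Because execution is deterministic, a scenario that passes once will keep passing until the procedure logic itself changes, which is what makes it a reliable regression check.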
6. Idempotency
Review decisions require an idempotency key — a unique string you generate (typically a UUID) and include in the review request:
{
  "approved": true,
  "idempotencyKey": "a3f2c1d0-8e7b-4a12-9c3d-bf01e2345678"
}
If the same idempotency key is submitted twice (e.g., due to a network retry), Rapidfolio returns the result of the first submission without re-processing. This prevents double-approvals — a critical property for workflows that result in financial transactions.
The idempotency key is stored on the run record alongside the approval decision.
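The deduplication behavior described above can be sketched like this (illustrative only; Rapidfolio enforces it server-side):

```typescript
// First submission for a key is processed and stored; any retry with the
// same key gets the stored result back without re-processing.
type Decision = { approved: boolean; idempotencyKey: string };

const processed = new Map<string, Decision>();
let processCount = 0; // stands in for the real approval side effect

function submitReview(decision: Decision): Decision {
  const existing = processed.get(decision.idempotencyKey);
  if (existing) return existing; // retry: return first result, no side effects
  processCount += 1;
  processed.set(decision.idempotencyKey, decision);
  return decision;
}

const key = "a3f2c1d0-8e7b-4a12-9c3d-bf01e2345678";
submitReview({ approved: true, idempotencyKey: key });
submitReview({ approved: true, idempotencyKey: key }); // network retry
// processCount is still 1: the retry did not cause a double-approval.
```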
7. Simulation Mode
Before a procedure goes anywhere near live data, you can simulate it in the editor. Simulation is a dry-run mode: the procedure graph executes, data flows through nodes, conditions are evaluated — but no tool calls are sent to real integrations.
Instead, simulation uses the responses you've configured in each node's test data, or the outputs from prior scenario runs.
Use simulation to:
- Verify that your input mappings resolve correctly
- Confirm that condition branches route as expected
- Walk through the full graph before wiring up real connections
Simulation results are shown in the editor's step log panel and are not saved as run records.
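A dry-run executor of this kind can be sketched as follows (hypothetical shapes, not Rapidfolio internals): the same graph traversal runs, but tool-call nodes short-circuit to their configured test data instead of hitting a real integration.

```typescript
// Simulation sketch: conditions evaluate normally, tool calls return
// configured test data, and no network requests are made.
type SimNode =
  | { id: string; kind: "tool_call"; testData: unknown }
  | { id: string; kind: "condition"; evaluate: (ctx: Record<string, unknown>) => unknown };

function simulate(nodes: SimNode[], inputs: Record<string, unknown>) {
  const ctx: Record<string, unknown> = { ...inputs };
  for (const node of nodes) {
    ctx[node.id] =
      node.kind === "tool_call" ? node.testData : node.evaluate(ctx);
  }
  return ctx;
}

// Walk a balance-check branch without any real Plaid connection.
const result = simulate(
  [
    { id: "balance", kind: "tool_call", testData: { available: 9000 } },
    {
      id: "canTransfer",
      kind: "condition",
      evaluate: (ctx) =>
        (ctx.balance as { available: number }).available >=
        (ctx.transferAmount as number),
    },
  ],
  { transferAmount: 5000 },
);
```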
8. Version Pinning
Triggers and API integrations can pin to a specific procedure version. When you publish a new version, nothing that has pinned to an older version is affected — it continues using the version it was configured with.
This means:
- Publishing a new procedure version is not a breaking change for existing callers
- You can deploy and test a new version without disrupting live traffic
- Rolling back is trivial — update the pin to point to the previous published version
Version pinning lets teams iterate on procedure logic without risking unintended changes to live automation.
See Versions for how to manage versions and pins.
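The pinning behavior can be sketched as follows (illustrative, not the real Rapidfolio configuration format): each caller records the version it was configured with, so publishing a new version changes nothing until the pin is updated.

```typescript
// Published versions of a procedure; publishing v4 does not touch v3.
const publishedVersions = new Map<number, string>([
  [3, "procedure-v3"],
  [4, "procedure-v4"], // newly published
]);

// A trigger pinned to the older version.
const trigger = { name: "nightly-reconciliation", pinnedVersion: 3 };

function resolveVersion(pin: number): string {
  const version = publishedVersions.get(pin);
  if (!version) throw new Error(`version ${pin} is not published`);
  return version;
}

// After v4 is published, the trigger still resolves v3.
const active = resolveVersion(trigger.pinnedVersion);

// Rolling forward (or back) is just updating the pin.
trigger.pinnedVersion = 4;
const updated = resolveVersion(trigger.pinnedVersion);
```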
Summary
| Property | Mechanism |
|---|---|
| Deterministic outputs | AI agent is graph-traversal only; no improvisation |
| Exact calculations | Expressions evaluated as code, not by the LLM |
| Complete audit trail | Every tool call logged with exact inputs and outputs |
| Authorized financial actions | Human Review gates with logged approver identity and timestamp |
| Regression prevention | Test scenarios run before every publish |
| No double-processing | Idempotency keys on review decisions |
| Safe iteration | Simulation mode for dry-run testing in the editor |
| Stable deployments | Version pinning for triggers and API callers |