Why AI agents need exact calculations for crypto trading
LLMs are remarkably good at reasoning. They are not good at arithmetic. When an AI trading agent computes position size or liquidation price inline, it is estimating — and in trading, an estimate can mean the difference between a managed stop-out and a margin call.
The problem: LLMs hallucinate math
Ask an LLM to calculate the liquidation price for a 10× leveraged BTC long at $83,000. It will give you a number — usually within 5–15% of the correct answer. Sometimes it will be exact. Sometimes it will be off by thousands of dollars.
The problem is not that LLMs are bad at math in general. It is that:
- They do not always apply the right formula (exchange-specific maintenance margin rates matter)
- They do not always use the correct fee structure
- The same prompt can return different answers on different calls
- There is no way to audit why they produced a specific number
For a casual user asking “roughly what's my liquidation price,” this is acceptable. For an autonomous agent managing live positions, it is not.
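To make the failure mode concrete, here is the kind of arithmetic an agent should never do inline. A minimal sketch of an isolated-margin liquidation price for a long, assuming a simplified model with a flat maintenance margin rate — real exchanges use tiered, size-dependent rates, which is exactly why this formula belongs in a tool rather than a prompt:

```typescript
// Simplified isolated-margin liquidation price for a long position.
// Assumes a flat maintenance margin rate (mmr); real exchanges use
// tiered rates, so this is a sketch, not any specific exchange's formula.
function liqPriceLong(entry: number, leverage: number, mmr: number): number {
  // Liquidation occurs when remaining margin falls to maintenance margin:
  // liq = entry * (1 - 1/leverage + mmr)
  return entry * (1 - 1 / leverage + mmr);
}

// 10x long at $83,000 with a hypothetical 0.5% maintenance margin rate:
const liq = liqPriceLong(83_000, 10, 0.005); // 83000 * 0.905 = 75115
```

Shift the maintenance margin rate by half a percent and the liquidation price moves by hundreds of dollars — precisely the kind of exchange-specific detail an LLM estimating inline gets wrong.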
A concrete example: position sizing
You are building a trading agent. The user says: “I have $5,000 in my Bybit account and I want to go long BTC at $83,000 with a stop at $81,000. Size me in at 1% risk.”
Here is what happens with each approach:
The LLM, computing inline: “To risk 1% of $5,000 = $50. Stop distance = $2,000 ($83,000 − $81,000). Position size ≈ $50 / $2,000 = 0.025 BTC. Notional ≈ $2,075.”
The arithmetic is roughly right — the stop distance is $2,000, or 2.41% of entry ($2,000 / $83,000) — but the naive size of 0.025 BTC ignores fees and exchange-specific details. The actual correct size is 0.0249 BTC (notional ≈ $2,067). Close, but not auditable — and the exact formula varies by exchange.
With a deterministic tool:

```javascript
workflow.run_position_sizing({
  side: "long",
  entryPrice: 83000,
  stopLoss: 81000,
  riskUsdt: 50,
  leverage: 5
})
```

Returns `sizeBase: 0.02494`, `sizeQuote: 2070.06`, `margin: 414.01`, `stopDistPct: 2.41`. Exact. Auditable. The same result on every call.
The difference looks small in this example. Over 100 automated trades, accumulated sizing errors mean your actual risk per trade deviates from your target. Your risk model is broken — you just do not know it until drawdown tells you.
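The core sizing arithmetic is a few lines of deterministic code. A sketch that ignores fees — the production tool also models fee impact, which is why its 0.02494 BTC lands slightly below the naive 0.025:

```typescript
interface SizingResult {
  sizeBase: number;    // position size in BTC
  sizeQuote: number;   // notional in USDT
  margin: number;      // required margin at the given leverage
  stopDistPct: number; // stop distance as % of entry
}

// Fee-free risk-based position sizing: risk divided by per-unit loss at the stop.
function sizePosition(
  entry: number, stop: number, riskUsdt: number, leverage: number,
): SizingResult {
  const perUnitLoss = Math.abs(entry - stop); // USDT lost per 1 BTC if stopped out
  const sizeBase = riskUsdt / perUnitLoss;
  const sizeQuote = sizeBase * entry;
  return {
    sizeBase,
    sizeQuote,
    margin: sizeQuote / leverage,
    stopDistPct: (perUnitLoss / entry) * 100,
  };
}

const s = sizePosition(83_000, 81_000, 50, 5);
// s.sizeBase = 0.025, s.sizeQuote = 2075, s.margin = 415, s.stopDistPct ≈ 2.41
```

The point is not that this code is hard to write — it is that it should be written once, tested, and shared, rather than re-derived by the model on every call.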
It is not just about accuracy — it is about auditability
A deterministic calculation gives you more than a correct number. It gives you a reproducible number. That matters for three reasons:
- Post-trade analysis: When a trade goes wrong, you can replay the exact calculation that sized the position. With an LLM-computed number, you cannot reproduce it.
- Compliance: Regulated trading systems need to demonstrate how risk was measured. "The LLM said so" is not a defensible answer. A deterministic formula with traceable inputs is.
- Multi-agent consistency: Multi-agent systems — analyst agent, risk agent, execution agent — must agree on the same numbers. If each calls the LLM inline, they will not. Shared deterministic tools enforce consistency.
What a deterministic computation layer looks like
The right architecture separates reasoning (LLM) from computation (deterministic tool). The LLM reads user intent, selects the appropriate tool, and interprets the result. The tool does the math.
The LLM never touches the arithmetic. It is the interface. The tool is the calculator.
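In practice this pattern is a tool registry: the LLM emits a tool name and JSON arguments, and a pure function computes the result. A minimal sketch — the registry and tool name here are illustrative, not TradingCalc's API:

```typescript
type ToolFn = (args: Record<string, number>) => Record<string, number>;

// Deterministic computation layer: pure functions, no LLM involvement.
const tools: Record<string, ToolFn> = {
  position_sizing: ({ entryPrice, stopLoss, riskUsdt }) => {
    const perUnitLoss = Math.abs(entryPrice - stopLoss);
    return { sizeBase: riskUsdt / perUnitLoss };
  },
};

// The LLM's only job is to pick a tool and produce arguments.
// It never performs the arithmetic itself.
function dispatch(toolName: string, args: Record<string, number>): Record<string, number> {
  const tool = tools[toolName];
  if (!tool) throw new Error(`Unknown tool: ${toolName}`);
  return tool(args);
}

const out = dispatch("position_sizing", { entryPrice: 83000, stopLoss: 81000, riskUsdt: 50 });
// out.sizeBase = 0.025 -- identical on every call, for every agent
```

Because the registry is shared, an analyst agent and a risk agent calling the same tool with the same inputs always see the same number.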
Which calculations need to be exact in trading
Not every calculation in a trading system carries equal weight. These are the ones where approximation is not acceptable:
- Liquidation price: An agent that underestimates liquidation distance may recommend holding a position that is closer to forced closure than the user thinks. A 5% error on this number can mean the difference between a managed exit and an automated liquidation.
- Position sizing: Over-sizing by 20% because the stop distance was mis-calculated means you are risking 1.2% of account when you believe you are risking 1%. Compounded over time, this breaks your risk model.
- Breakeven price: If an agent tells you your breakeven is $83,050 but it is actually $83,091 (because fees were ignored), you may hold a position longer than needed — or exit too early believing you are at breakeven.
- Funding costs: A long-running carry trade or hedge position that fails to account for funding accumulation can turn profitable on paper into a loss in practice. The difference is deterministic math on funding rate × notional × periods.
- Risk/reward: An agent deciding whether to take a trade needs an exact R:R ratio, not an approximation. A 2.05:1 real R:R versus a 1.8:1 estimate can flip a trade recommendation.
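All of these are small, closed-form calculations. A sketch of the breakeven, funding, and R:R math — the 0.055%-per-side taker fee and 8-hour funding interval are assumptions (common for perpetual futures; note that a 0.055% fee reproduces the $83,091 breakeven above):

```typescript
// Breakeven for a long: entry marked up by round-trip taker fees.
// Assumes the fee is paid on both entry and exit notional at ~entry price.
function breakevenLong(entry: number, takerFee: number): number {
  return entry * (1 + 2 * takerFee);
}

// Funding paid over a hold: rate * notional per period.
// Assumes the standard 8-hour funding interval (3 periods per day).
function fundingCost(notional: number, rate: number, holdHours: number): number {
  const periods = holdHours / 8;
  return notional * rate * periods;
}

// Risk/reward ratio from exact prices, not a rounded estimate.
function riskReward(entry: number, stop: number, target: number): number {
  return Math.abs(target - entry) / Math.abs(entry - stop);
}

const be = breakevenLong(83_000, 0.00055);     // 83091.3 -- not 83050
const fund = fundingCost(2_070, 0.0001, 24);   // 0.621 USDT over one day
const rr = riskReward(83_000, 81_000, 87_100); // 2.05
```

None of this is hard math — but each formula has inputs (fee tier, funding interval) that must come from somewhere traceable, not from the model's memory.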
How to add exact math to your agent in 5 minutes
TradingCalc exposes 19 deterministic tools via MCP (Model Context Protocol) — the standard interface for giving AI agents access to external tools. Setup takes one config change.
```json
{
  "mcpServers": {
    "tradingcalc": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://tradingcalc.io/api/mcp"]
    }
  }
}
```

You can also call a tool directly over HTTP:

```shell
curl -X POST https://tradingcalc.io/api/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/call",
    "params": {
      "name": "workflow.run_pre_trade_check",
      "arguments": {
        "side": "long",
        "entry_price": 83000,
        "stop_loss": 81000,
        "account_balance": 5000,
        "risk_pct": 1,
        "leverage": 5,
        "funding_rate": 0.0001,
        "hold_hours": 24
      }
    }
  }'
```

Free tier: no signup required. 20 calls/day anonymously. View all 19 tools and pricing →
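From an agent's own code, the same call is a single HTTP POST. A sketch using the standard `fetch` API with the endpoint and tool name shown above — error handling and retries omitted:

```typescript
// Build a JSON-RPC 2.0 tools/call request for the MCP endpoint.
function buildToolCall(name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id: 1,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

async function preTradeCheck(): Promise<unknown> {
  const res = await fetch("https://tradingcalc.io/api/mcp", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildToolCall("workflow.run_pre_trade_check", {
      side: "long", entry_price: 83000, stop_loss: 81000,
      account_balance: 5000, risk_pct: 1, leverage: 5,
      funding_rate: 0.0001, hold_hours: 24,
    })),
  });
  return res.json();
}
```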
An agent can verify its own computation layer
One of the MCP tools is system.verify. It runs all 22 canonical test vectors against the live production code and returns pass/fail with expected vs actual values.
An agent starting a trading session can call this first. If it passes, every calculation during that session is guaranteed to use verified formulas. If it fails (e.g. after a deployment), the agent can refuse to trade rather than operating on potentially broken math.
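The gating logic itself is trivial, and worth making explicit in the agent. A sketch assuming `system.verify` returns pass/fail counts — the response shape here is an assumption, so adapt it to the real payload:

```typescript
// Hypothetical shape of a system.verify response (assumed, not documented here).
interface VerifyResult {
  passed: number;
  total: number; // e.g. 22 canonical test vectors
  failures: string[];
}

// Gate the trading session on a clean verification run.
function canTrade(verify: VerifyResult): boolean {
  return verify.total > 0 && verify.passed === verify.total;
}

// A session start might look like (callTool is the agent's MCP client):
// const verify = await callTool("system.verify", {});
// if (!canTrade(verify)) {
//   throw new Error(`Refusing to trade: ${verify.failures.join(", ")}`);
// }
```

Fail-closed is the right default: a session that cannot prove its math is correct should not be allowed to size positions.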
Read the full methodology → · View live verification proof →
TL;DR
- LLMs reason well. They estimate math poorly and inconsistently.
- Trading requires exact, reproducible numbers — not estimates.
- The right pattern: LLM for intent + deterministic tool for computation.
- TradingCalc provides 19 MCP tools covering position sizing, liquidation, PnL, funding, and risk decisions.
- Free tier, no signup. One config line to connect.