
Overview

This analysis evaluates ISCL’s resilience against attack vectors from Mahimna Kelkar’s a16z crypto research seminar on novel attacker models, TEE-based encumbrance attacks, and smart contract collusion.
Source: a16z Crypto Research Seminar — “Rethinking Crypto Attacker Models” by Mahimna Kelkar (PhD, Cornell, advised by Ari Juels)
The research identifies five categories of threats to crypto systems. Below we assess how each applies to ISCL’s architecture and what defenses are in place.

Threat categories

1. TEE-based key encumbrance

The threat: A user locks their private key inside a TEE (e.g., Intel SGX) and splits access — keeping some capabilities (spending) while selling/renting others (voting, reputation). This breaks the “one key = one owner” assumption that many protocols rely on.

ISCL’s exposure: Low / not directly applicable

ISCL’s threat model is fundamentally different from the protocols Kelkar attacks. His targets are multi-party protocols (DAOs, voting, soulbound tokens) that assume a key represents a single autonomous human. ISCL is a local single-user runtime whose threat model is:
“Protect the user’s keys FROM the AI agent, not from the user themselves.”
| Kelkar’s assumption violated | ISCL’s stance |
| --- | --- |
| One key = one owner | ISCL does not care who “owns” the key — it just enforces policy before signing |
| User has full key access | ISCL requires the user to have full access (passphrase to unlock) |
| User will not split key functionality | Not relevant — ISCL is the user’s own tool, not a multi-party protocol |
Where it could matter: If someone deployed ISCL inside a TEE and sold partial signing access (e.g., “sign any transfer < 0.1 ETH”), they would be using ISCL as a tool within an encumbrance scheme. But this is not an attack on ISCL — it is an attack on whatever protocol trusts the resulting signatures. ISCL’s PolicyEngine would still enforce its rules regardless of who triggers the signing.
ISCL’s security does not depend on the “single entity ownership” assumption that encumbrance attacks violate. The trust model is: user is trusted, AI agent is not.
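To make that concrete, here is a minimal sketch of a caller-agnostic policy check. The interfaces and field names below (other than maxValueWei) are illustrative assumptions, not ISCL’s actual API.

```typescript
// Illustrative only: names are assumptions, not ISCL’s API. The key property is
// that the decision depends on the intent and the policy alone, never on whether
// a human, an AI agent, or a TEE submitted the intent.
interface TransferIntent {
  to: string;        // recipient address (lowercase hex)
  valueWei: bigint;  // transfer amount in wei
}

interface TransferPolicy {
  maxValueWei: bigint;
  recipientAllowlist: string[]; // lowercase hex addresses
}

function checkTransfer(intent: TransferIntent, policy: TransferPolicy): string | null {
  if (intent.valueWei > policy.maxValueWei) return "value exceeds maxValueWei";
  if (!policy.recipientAllowlist.includes(intent.to)) return "recipient not allowlisted";
  return null; // null: the transfer may proceed to the human-approval step
}
```

With a policy like the “< 0.1 ETH” example above, the check passes or fails based only on the intent’s contents; a TEE relaying requests gains nothing beyond what the policy and the human-approval step already allow.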
2. Smart contract collusion

The threat: Colluders deploy a smart contract as a commitment device — depositing funds that get burned if anyone whistleblows. This defeats accountability/whistleblowing mechanisms by making defection financially irrational.

ISCL’s exposure: Not applicable

ISCL is not a multi-party protocol with honesty quorums. There are no:
  • Threshold signatures requiring t-of-n honest parties
  • Whistleblowing mechanisms
  • Collusion-resistant voting
  • Distributed key generation ceremonies
ISCL’s architecture is explicitly centralized at the user level — one user, one keystore, one policy engine. The “collusion” threat in ISCL is between a malicious skill (Domain A) and a compromised component (e.g., RPC endpoint), which is handled with defense-in-depth, not game-theoretic incentives.
3. AI bot value extraction

The threat: “What happens when AI bots are indistinguishable from humans? Can you generate an army of AI bots to extract more value from a system than you should have?”

ISCL’s exposure: Moderate — but well defended

This is actually ISCL’s core design scenario — an AI agent trying to perform crypto operations. The defenses:
| Defense | How it helps |
| --- | --- |
| Human-in-the-loop approval | Every fund-affecting tx requires explicit user confirmation via CLI/web/Telegram |
| Rate limiting | maxTxPerHour policy cap prevents rapid-fire operations |
| Value caps | maxValueWei, maxApprovalAmount bound maximum damage |
| Allowlists | Token, contract, recipient, and chain allowlists restrict scope |
| Single-use approval tokens | 5-min TTL, consumed on use — no replay |
| Canonical approval summary | User sees Core-generated data, not agent-crafted text (anti-spoofing) |
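For illustration, the caps and allowlists above could be expressed in a single policy object. Only maxTxPerHour, maxValueWei, and maxApprovalAmount are names taken from this document; every other field is a hypothetical stand-in.

```typescript
// Hypothetical policy sketch. Field names beyond maxTxPerHour, maxValueWei, and
// maxApprovalAmount are illustrative, not ISCL configuration keys.
const examplePolicy = {
  maxTxPerHour: 5,                               // rate limit on fund-affecting operations
  maxValueWei: 100_000_000_000_000_000n,         // 0.1 ETH cap per transaction
  maxApprovalAmount: 1_000_000_000_000_000_000n, // cap on token approval amounts (base units)
  tokenAllowlist: ["0x..."],                     // tokens the agent may touch (placeholders)
  contractAllowlist: ["0x..."],                  // contracts the agent may call
  recipientAllowlist: ["0x..."],                 // addresses that may receive funds
  chainAllowlist: [1, 8453],                     // permitted chain IDs (illustrative)
  approvalTokenTtlSeconds: 300,                  // single-use approval tokens expire after 5 minutes
};
```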
The AI agent (Domain A) has zero direct access to keys, RPC, or signing. Even a fully compromised agent can only submit TxIntents that must pass schema validation, PolicyEngine, preflight simulation, and human approval.
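A rough sketch of that gauntlet, with hypothetical names standing in for ISCL’s actual components; only the ordering of the stages is taken from this document.

```typescript
// Hypothetical sketch of the Core pipeline; names are illustrative, not ISCL’s API.
interface TxIntent { intentId: string; to: string; valueWei: bigint; data: string; chainId: number }
interface Verdict { allowed: boolean; reason?: string }

interface CoreDeps {
  validateSchema(raw: unknown): TxIntent;                      // 1. reject malformed intents
  evaluatePolicy(intent: TxIntent): Verdict;                   // 2. caps, allowlists, rate limits
  preflightSimulate(intent: TxIntent): Promise<object>;        // 3. dry-run against an RPC node
  buildApprovalSummary(intent: TxIntent, sim: object): string; // 4. canonical, Core-generated text
  requestUserApproval(summary: string): Promise<string>;       //    returns a single-use token (5-min TTL)
  consumeApprovalToken(token: string, intentId: string): void; //    bound to this intent, no replay
  signAndBroadcast(intent: TxIntent): Promise<string>;         // 5. only reached after all gates pass
}

// Every TxIntent from Domain A must clear every stage before a signature exists.
async function processTxIntent(core: CoreDeps, raw: unknown): Promise<string> {
  const intent = core.validateSchema(raw);
  const verdict = core.evaluatePolicy(intent);
  if (!verdict.allowed) throw new Error(`Policy rejected: ${verdict.reason}`);
  const sim = await core.preflightSimulate(intent);
  const summary = core.buildApprovalSummary(intent, sim); // agent-crafted text is never shown to the user
  const token = await core.requestUserApproval(summary);
  core.consumeApprovalToken(token, intent.intentId);
  return core.signAndBroadcast(intent);
}
```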
4. Soft assumptions

Kelkar’s philosophical point: crypto protocols make implicit “soft assumptions” (non-collusion, trusted setup, single-entity ownership) that are more likely to fail than mathematical assumptions (discrete log, factoring).

ISCL’s soft assumptions and their robustness:
| Assumption | Risk | Mitigation |
| --- | --- | --- |
| User OS is not compromised | Medium (rootkit/keylogger) | v0.2 roadmap: hardware wallet, remote signer |
| User reads approval summaries carefully | Medium (fatigue, social engineering) | Risk scoring, color-coded warnings, anomaly detection |
| Passphrase is strong | Low-Medium | scrypt KDF (N=2^18) makes brute-force expensive |
| RPC endpoints are honest | Medium | Optional dual-RPC comparison, allowlist |
| ISCL Core itself is not tampered with | Low (trusted by definition) | v0.2: TEE/enclave deployment, code signing |
Kelkar would correctly note that the “user OS is secure” and “user pays attention to approvals” assumptions are the softest. These are explicitly acknowledged as v0.1 limitations with a roadmap to hardware wallets and session keys.
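For context on the passphrase assumption, here is a sketch of a keystore decryption step using Node’s built-in scrypt with the N=2^18 cost factor from the table. The file layout, cipher choice, and the r/p parameters are assumptions for illustration, not ISCL’s actual keystore format.

```typescript
import { scryptSync, createDecipheriv } from "node:crypto";

// Sketch only: the keystore layout (salt, iv, ciphertext, authTag) and AES-256-GCM
// are assumed for illustration. The point is the KDF cost: N = 2^18 (with assumed
// r = 8, p = 1) makes offline brute-force of the passphrase expensive.
function decryptKeystore(
  passphrase: string,
  keystore: { salt: Buffer; iv: Buffer; ciphertext: Buffer; authTag: Buffer },
): Buffer {
  const key = scryptSync(passphrase, keystore.salt, 32, {
    N: 2 ** 18,                // CPU/memory cost factor named in the table above
    r: 8,
    p: 1,
    maxmem: 512 * 1024 * 1024, // Node requires raising maxmem for a cost factor this large
  });
  const decipher = createDecipheriv("aes-256-gcm", key, keystore.iv);
  decipher.setAuthTag(keystore.authTag);
  return Buffer.concat([decipher.update(keystore.ciphertext), decipher.final()]);
}
```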
5. Proving unencumbered key access

Kelkar’s new primitive ensures a prover has unencumbered access to a key (not locked in a TEE). This is relevant for protocols that need to verify genuine human control.

ISCL does not need this — it is not verifying that the user “really” owns their key in a protocol-theoretic sense. ISCL just needs the passphrase to decrypt the keystore. If the user wants to put their ISCL keystore passphrase in a TEE-based automated system, that is their prerogative — ISCL’s policy engine still enforces all rules.

Summary threat resilience matrix

| Threat from the talk | Relevant to ISCL? | Protected? | Notes |
| --- | --- | --- | --- |
| TEE key encumbrance | No | N/A | ISCL does not assume single-entity ownership |
| Vote bribery via TEE | No | N/A | ISCL has no voting/governance |
| Unvested token selling | No | N/A | Not a vesting protocol |
| Smart contract collusion | No | N/A | Not a multi-party protocol |
| Whistleblowing circumvention | No | N/A | No accountability quorums |
| AI bot value extraction | Yes | Yes | Core design scenario — approval + policy + rate limits |
| Approval summary spoofing | Yes | Yes | Canonical summary from Core, not from agent |
| Approval replay | Yes | Yes | Single-use tokens, TTL, intentId binding |
| RPC manipulation | Yes | Partial | Dual-RPC optional; full multi-RPC in v0.2 |
| OS-level key theft | Yes | No (v0.1) | Hardware wallet planned for v0.2 |
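The optional dual-RPC defense noted in the matrix amounts to cross-checking reads against two independent endpoints and refusing to proceed on disagreement. A minimal sketch follows; the endpoint URLs and the specific RPC call are hypothetical, not ISCL configuration values.

```typescript
// Illustrative dual-RPC cross-check; endpoints and the compared call are hypothetical.
async function crossCheckedBalance(address: string): Promise<bigint> {
  const endpoints = ["https://rpc-a.example", "https://rpc-b.example"]; // placeholders
  const results = await Promise.all(
    endpoints.map(async (url) => {
      const res = await fetch(url, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({
          jsonrpc: "2.0",
          id: 1,
          method: "eth_getBalance",
          params: [address, "latest"],
        }),
      });
      const { result } = (await res.json()) as { result: string };
      return BigInt(result);
    }),
  );
  // If two independent providers disagree, refuse to proceed rather than trust either.
  if (results[0] !== results[1]) {
    throw new Error("RPC endpoints disagree; aborting (possible manipulation)");
  }
  return results[0];
}
```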

Verdict

ISCL is well-architected against the threats that actually apply to it. The academic attacks in the talk primarily target multi-party decentralized protocols (DAOs, threshold signing, coercion-resistant voting) where the attacker model includes rational, self-interested participants who may collude. ISCL’s single-user, defense-in-depth model sidesteps most of these by design. The two areas where Kelkar’s broader warnings are most relevant:

Soft Assumptions

The v0.1 reliance on OS security and on user attentiveness leaves real gaps, acknowledged in the threat model, with v0.2 mitigations planned (hardware wallet, session keys, remote signer).

AI as Attacker

This is literally the core design scenario, handled with the three-domain architecture plus mandatory human approval for all fund-affecting operations.

Next steps