Architecture
System Overview
┌─────────────────┐
│ AGENT │
│ Proof of │──┐
│ Civitas SDK │ │
└─────────────────┘ │
│ Request LLM Inference:
│ (Message, Model)
▼
┌─────────────────────────────┐
│ THE NETWORK │
│ Proof of Civitas SDK │
└─────────────────────────────┘
│ ▲
│ │
│ Secure │ Inference Response:
│ Processing │ (Message, Proof)
▼ │
┌─────────────────────────────┐
│ SOLANA │
│ On- & Off-Chain Ledger │
└─────────────────────────────┘
▲
│
│ Store: (Message, Proof)
│
┌─────────────────────────────┐
│ AGENT │
│ Read & Verify │
│ (Message, Proof) │
└─────────────────────────────┘
The Flow
Agent Request: The agent sends a request containing the message and the desired LLM model to the TEE
Secure Processing: The TEE securely processes the request by calling the LLM API
Return Proof: The TEE sends {Message, Proof} back to the agent
On-Chain Storage: The TEE submits the attestation {Message, Proof} to Solana
Verification: The Proof of Civitas SDK reads the attestation from Solana and verifies it against {Message, Proof}. The proof log can then be displayed on the agent's website or app
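The flow above can be sketched end to end. This is an illustrative simulation, not the real SDK: the function names (`request_inference`, `store_attestation`, `verify_attestation`) and the in-memory ledger are hypothetical, and a simple SHA-256 hash stands in for the enclave-signed proof so the example runs standalone.

```python
import hashlib

# Hypothetical stand-in for Solana storage: tx_id -> (message, proof)
LEDGER = {}

def request_inference(message: str, model: str) -> tuple[str, str]:
    """Simulated TEE call: returns a response plus a proof.

    A real proof would be a signed attestation from the enclave;
    here it is just a hash so the example is self-contained."""
    response = f"[{model}] echo: {message}"
    proof = hashlib.sha256(response.encode()).hexdigest()
    return response, proof

def store_attestation(message: str, proof: str) -> str:
    """Simulated on-chain write; returns a transaction id."""
    tx_id = hashlib.sha256((message + proof).encode()).hexdigest()[:16]
    LEDGER[tx_id] = (message, proof)
    return tx_id

def verify_attestation(tx_id: str) -> bool:
    """Anyone can read the ledger entry and re-check the proof."""
    message, proof = LEDGER[tx_id]
    return hashlib.sha256(message.encode()).hexdigest() == proof

# 1-2. Agent requests inference; the TEE processes it securely
response, proof = request_inference("What is Solana?", "gpt-4o")
# 3-4. The TEE returns (message, proof) and stores the attestation
tx = store_attestation(response, proof)
# 5. Verification from the stored attestation
print(verify_attestation(tx))  # True
```

The key property the simulation preserves is that verification needs only the stored (message, proof) pair, not any trust in the party that wrote it.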
Technical Components
Trusted Execution Environment (TEE)
LLM inference executed inside Amazon Nitro Enclaves
The enclave cannot be accessed from outside, ensuring the agent's requests are processed securely
Includes instructions for verifying the code running inside the TEE
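Verifying the code running inside a TEE generally means checking the enclave's measurement against a published build. For Nitro Enclaves, PCR0 in the attestation document is a SHA-384 over the enclave image. A minimal sketch of that comparison, using a hypothetical attestation dict in place of the real COSE-signed CBOR document (and skipping signature-chain checks):

```python
import hashlib

# Hypothetical published measurement of the enclave image build.
# A real Nitro attestation carries PCR0 as SHA-384 of the image.
EXPECTED_PCR0 = hashlib.sha384(b"enclave-image-v1").hexdigest()

def verify_enclave_code(attestation: dict) -> bool:
    """Accept only attestations whose PCR0 matches the expected build."""
    return attestation.get("pcrs", {}).get(0) == EXPECTED_PCR0

# An attestation from a matching enclave image passes the check
attestation = {"pcrs": {0: hashlib.sha384(b"enclave-image-v1").hexdigest()}}
print(verify_enclave_code(attestation))  # True
```

In practice the attestation document's certificate chain must also be validated back to the AWS Nitro root, which this sketch omits.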
Proof of Civitas SDK
OpenAI-compatible Python & JavaScript libraries
Makes verifiable LLM inferences within your agent
Supports OpenAI and Anthropic Claude models, as well as fine-tuned OpenAI models
Logging functionality to retrieve and display verified inferences
Verification logic to validate in code whether a proof is correct
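The verification logic can be sketched as a message/proof check. This is not the SDK's API: the real SDK would verify an enclave signature against a public key, while this stand-in uses an HMAC (with a hypothetical key) so the sketch runs without the enclave:

```python
import hashlib
import hmac

# Hypothetical key; a real proof is a signature only the TEE can produce,
# verified with the enclave's public key rather than a shared secret.
TEE_KEY = b"demo-key"

def make_proof(message: str) -> str:
    """Stand-in for the proof the TEE attaches to an inference."""
    return hmac.new(TEE_KEY, message.encode(), hashlib.sha256).hexdigest()

def verify_proof(message: str, proof: str) -> bool:
    """Validate in code that a proof matches a message."""
    return hmac.compare_digest(make_proof(message), proof)

msg = "The capital of France is Paris."
proof = make_proof(msg)
print(verify_proof(msg, proof))        # True
print(verify_proof("tampered", proof)) # False
```

The point the sketch illustrates: any change to the message invalidates the proof, so a stored {Message, Proof} pair pins the exact inference output.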
Solana Integration
On-chain verifiability with Solana blockchain
Transparent, immutable proof storage
Public verification without centralized trust