// COMPARISON

Not an LLM. Not a chatbot. Not a framework.

There are three common confusions about IRIS, and they are worth clearing up. What follows compares IRIS with each approach in architectural, not commercial, terms.

Dimension                   Standalone LLM     Simple RAG   Agent framework   IRIS SCE
Factual verification        No                 Limited      Variable          Dedicated layer
Adversarial challenge       No                 No           Manual            By architecture
Persistent memory           No                 No           External          Core
Audit trail                 No                 Partial      Variable          Complete and mandatory
Explicit confidence score   No                 No           No                Yes, on every output
Internal data shield        No                 No           No                Yes
Deployment sovereignty      No (external API)  Mixed        Variable          Full (private cloud, on-prem, edge)
// FOUR APPROACHES

Why they are not the same

Standalone LLM vs IRIS

An isolated model answers with what it learned in training. It does not verify, remember, or challenge. IRIS adds those three capabilities as architectural layers.
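The layering described above can be illustrated with a minimal sketch. All names here are illustrative assumptions for exposition, not the actual IRIS API: a base model answers from training alone, and verification, challenge, and memory wrap it as separate, mandatory steps that each leave an audit entry.

```python
# Hypothetical sketch of three architectural layers over a base model.
# Names and structure are assumptions for illustration, not IRIS code.
from dataclasses import dataclass, field


@dataclass
class Answer:
    text: str
    verified: bool = False
    challenged: bool = False
    audit_log: list = field(default_factory=list)


def base_model(question: str) -> Answer:
    # Stand-in for an isolated model: answers from training data only.
    return Answer(text=f"model answer to: {question}")


def verify(answer: Answer) -> Answer:
    # Layer 1: factual verification as its own step, not left to the model.
    answer.verified = True
    answer.audit_log.append("verification: sources checked")
    return answer


def challenge(answer: Answer) -> Answer:
    # Layer 2: adversarial challenge applied by architecture, not manually.
    answer.challenged = True
    answer.audit_log.append("challenge: counter-arguments raised")
    return answer


def remember(answer: Answer, memory: list) -> Answer:
    # Layer 3: persistent memory consolidates the deliberation outcome.
    memory.append(answer.text)
    answer.audit_log.append("memory: result consolidated")
    return answer


memory: list = []
result = remember(challenge(verify(base_model("question"))), memory)
```

The point of the sketch is that each layer runs unconditionally and writes to the audit log, so the trail is complete by construction rather than by caller discipline.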

Simple RAG vs IRIS

RAG adds document retrieval, not deliberation. A retrieved claim is still an unchecked claim. IRIS verifies, challenges and consolidates.

Agent framework vs IRIS

An agent framework is an orchestration library. IRIS is an engine with proprietary cognitive architecture: memory, verification and audit trail belong to the engine, not to user code.
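The ownership distinction can be made concrete with a hedged sketch (assumed names, not real APIs from IRIS or any framework): in a framework, user code wires up memory and verification and may omit either; in an engine, they are intrinsic and cannot be skipped.

```python
# Illustrative contrast between orchestration-in-user-code and an engine
# that owns its cognitive layers. All names are hypothetical.

# Framework style: the pipeline lives in user code, and every piece
# is optional because the caller assembles it.
def framework_pipeline(question, model, memory_store=None, verifier=None):
    answer = model(question)
    if verifier is not None:       # user may skip verification entirely
        answer = verifier(answer)
    if memory_store is not None:   # user may skip memory entirely
        memory_store.append(answer)
    return answer


# Engine style: memory, verification, and audit belong to the engine
# and run on every call, regardless of what the caller does.
class Engine:
    def __init__(self, model):
        self._model = model
        self._memory = []   # core to the engine, not pluggable
        self._audit = []    # mandatory, not optional

    def ask(self, question):
        answer = self._model(question)
        self._audit.append(f"verified: {question}")  # always runs
        self._memory.append(answer)                  # always runs
        return answer


engine = Engine(lambda q: f"answer to {q}")
out = engine.ask("q1")
```

The design difference is where the guarantees live: the framework's guarantees are only as strong as the calling code, while the engine's hold for every request.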

Enterprise chatbot vs IRIS

An enterprise chatbot is a conversational interface over a model. IRIS is the engine that can run behind such an interface; conversation is one channel among several.

The difference is the architecture, not the model.

Frequently asked

How is IRIS different from a standalone LLM?

A standalone LLM does not verify, challenge, or remember. IRIS adds adversarial deliberation, persistent memory, and an audit trail as architectural properties.