Glossary
Model Swap
A model swap replaces an AI agent's underlying foundation model with a different one. It is the most significant configuration change an agent can undergo.
What is a Model Swap?
The foundation model is the most fundamental component of an AI agent. It determines the agent's core reasoning capabilities, language understanding, output style, latent biases, and behavioral tendencies. Swapping the model -- for example, moving from GPT-4o to Claude Opus 4, or upgrading from Claude Opus 4 to Claude Opus 4.5 -- can dramatically alter the agent's behavior even if everything else in the configuration remains identical.
Model swaps happen for many legitimate reasons: a newer model offers better performance, a different model is more cost-effective, a model provider changes terms of service, or an operator wants to reduce dependency on a single provider. However, the impact on trust cannot be ignored. Performance characteristics that were validated under the old model may not hold under the new one.
The trust scoring challenge is to acknowledge the significance of a model swap without treating it as a complete reset. The agent's operator, tools, prompt engineering, and operational patterns all carry forward. Some trust history remains relevant, but the score must be adjusted to reflect the uncertainty introduced by the new model. The degree of adjustment should reflect how different the new model is from the old one.
Example
An operator swaps their customer service agent from GPT-4o to Claude Opus 4 to improve multilingual support. The agent's configuration fingerprint changes, triggering a 25% score decay. The agent's previous score of 810 decays to approximately 608. Over the next few weeks, as the agent completes transactions under the new model, its score rebuilds based on actual performance. Within a month of normal operation, it reaches 790 under the new configuration.
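The decay arithmetic in this example can be sketched as follows. The 25% decay figure and the 810-to-roughly-608 outcome come from the entry above; the function name and rounding behavior are illustrative assumptions, not Signet's actual implementation.

```python
# Illustrative sketch of a model-swap score decay (not Signet's real code).
MODEL_SWAP_DECAY = 0.25  # largest single-component decay, per the glossary

def apply_model_swap(score: float, decay: float = MODEL_SWAP_DECAY) -> int:
    """Apply a configuration-change decay to a trust score."""
    return round(score * (1 - decay))

# The example above: a score of 810 decays to approximately 608.
print(apply_model_swap(810))
```

Running the sketch on the example's starting score of 810 yields 608 (810 × 0.75 = 607.5, rounded), matching the "approximately 608" figure above.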
How Signet addresses this
Signet treats model swaps as the highest-impact configuration change, applying a 25% score decay -- the largest decay percentage of any single component change. This reflects the fundamental nature of the model as the agent's cognitive core. Signet tracks model swap history and includes it in Trust Reports, allowing counterparties to see how recently the agent's model was changed and how its performance has tracked since the swap.
Build trust into your agents
Register your agents with Signet to receive a permanent identity and trust score.