Compliance Guide

NIST Artificial Intelligence Risk Management Framework

The NIST AI Risk Management Framework provides voluntary guidance for managing risks associated with AI systems, organized around four core functions: Govern, Map, Measure, and Manage.

Agent-specific requirements

  • Govern: establish AI risk management policies and accountability structures
  • Map: identify and document AI risks across the agent lifecycle
  • Measure: assess and track AI risks using quantitative and qualitative methods
  • Manage: prioritize and act on identified AI risks with appropriate mitigations
  • Trustworthy AI characteristics: valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced, fair
  • Third-party AI risk management for model providers and tool integrations

How Signet scoring maps to NIST AI RMF

Signet's five-dimension scoring maps directly onto NIST's trustworthy AI characteristics: Reliability covers validity and reliability; Security addresses safety, security, and resilience; and Quality maps to accountability and transparency. The composite score supplies the quantitative risk measurement that NIST's Measure function calls for, while configuration fingerprinting supports the Map function by documenting AI system components.
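The dimension-to-characteristic mapping above can be sketched as a simple lookup table. This is an illustrative sketch, not a Signet API: the dictionary and the `characteristics_covered` helper are hypothetical names, and only the three dimensions named in this guide are included.

```python
# Illustrative mapping of Signet scoring dimensions (as described in this
# guide) to NIST AI RMF trustworthy AI characteristics. Hypothetical sketch;
# not part of any real Signet library.
SIGNET_TO_NIST = {
    "reliability": ["valid", "reliable"],
    "security": ["safe", "secure", "resilient"],
    "quality": ["accountable", "transparent"],
}

def characteristics_covered(dimensions):
    """Return the NIST characteristics covered by the given Signet dimensions."""
    covered = []
    for dim in dimensions:
        covered.extend(SIGNET_TO_NIST.get(dim, []))
    return covered
```

A compliance report could use such a table to show, for each scored dimension, which NIST characteristics it evidences.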

Implementation guidance

Organizations adopting the NIST AI RMF should integrate Signet scoring into their Measure function: map agent configurations and dependencies with Signet's configuration tracking, establish minimum Signet Score thresholds (700+ is recommended) as part of the Govern function, and feed score trend data into the continuous monitoring that the Manage function requires.
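The guidance above reduces to two checks: a Govern-function gate on the composite score and a Manage-function alert on score trends. A minimal sketch, assuming hypothetical helper names (`passes_governance_gate`, `trend_alert`) rather than any real Signet interface:

```python
# Hypothetical Govern-function gate and Manage-function trend check.
# The 700 threshold is the recommendation from this guide; the helper
# names and trend heuristic are illustrative assumptions.
MIN_SIGNET_SCORE = 700  # recommended minimum composite score

def passes_governance_gate(score: int) -> bool:
    """True when an agent's composite Signet Score meets the policy minimum."""
    return score >= MIN_SIGNET_SCORE

def trend_alert(score_history: list[int], window: int = 3) -> bool:
    """Flag an agent whose score declined across each of the last `window` observations."""
    recent = score_history[-window:]
    return len(recent) == window and all(a > b for a, b in zip(recent, recent[1:]))
```

In practice the gate would run at deployment time, while the trend check would run on a schedule against stored score history, surfacing agents that need re-assessment under the Manage function.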

NIST AI RMF-ready agents

Register your agents and get compliance-mapped trust scoring for NIST AI RMF.