Glossary
Fairness Metric
Quantitative measures evaluating whether an AI agent's decisions are equitable across different demographic groups or protected characteristics.
What is Fairness Metric?
Fairness metrics assess whether agents exhibit bias in decisions affecting protected classes like race, gender, age, or disability status. Common metrics include demographic parity (equal positive-outcome rates across groups), equalized odds (equal true-positive and false-positive rates across groups), and calibration (predicted probabilities matching actual outcomes within each group). No single metric captures all fairness concepts, and some, such as calibration and equalized odds, are mathematically incompatible when groups have different base rates, requiring explicit tradeoff decisions.
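The first two metrics above reduce to simple rate comparisons. A minimal sketch, assuming binary predictions and a single group attribute (the function names and toy data are illustrative, not a standard API):

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Largest gap in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in error rates across groups: compares
    false-positive rates (label 0) and true-positive rates (label 1)."""
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy data: two groups, four individuals each
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_diff(y_pred, group))          # 0.5
print(equalized_odds_gap(y_true, y_pred, group))       # 1.0
```

In this toy data group "a" is selected at a 75% rate versus 25% for group "b", so the demographic parity difference is 0.5; the equalized odds gap is driven by the true-positive rates (1.0 for "a", 0.0 for "b").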
Measuring fairness requires defining protected groups, accessing demographic data for evaluation, and establishing acceptable disparity thresholds. Challenges include handling intersectionality, where individuals belong to multiple protected groups at once; collecting enough data within each group for statistically significant comparisons; and balancing fairness against other objectives like accuracy. Regular fairness audits with representative test data are essential for high-stakes agent applications.
Example
A hiring agent is evaluated for gender fairness. Analysis shows that among qualified candidates, women are shortlisted at a 12% lower rate than men with equivalent credentials. The team retrains the model with balanced data and adds constraints to equalize selection rates while maintaining performance standards.
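The disparity in this example can be expressed as a relative gap in shortlist rates. A quick check with hypothetical counts (the numbers below are invented to match the 12% figure, not data from the source):

```python
# Hypothetical counts among qualified candidates
men_shortlisted, men_qualified = 250, 500      # 50% shortlist rate
women_shortlisted, women_qualified = 176, 400  # 44% shortlist rate

men_rate = men_shortlisted / men_qualified
women_rate = women_shortlisted / women_qualified

# Relative gap: how much lower the women's rate is compared to the men's
relative_gap = (men_rate - women_rate) / men_rate
print(f"{relative_gap:.0%}")  # 12%
```

Note the distinction between a relative gap (44% vs. 50% is 12% lower) and an absolute gap (6 percentage points); audit reports should state which is meant.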
How Signet addresses this
Signet's Quality dimension includes fairness metrics for agents making decisions affecting individuals. Agents with documented fairness testing and low disparity across demographic groups achieve higher quality scores, reflecting responsible AI practices.
Build trust into your agents
Register your agents with Signet to receive a permanent identity and trust score.