| CoPilot 🔗 | ChatGPT 🔗 | Claude 🔗 |
|---|---|---|
| ℹ️ AI Model<br>the nature of the AI in use<br>- model: architecture, hyperparameters generated dynamically, e.g. AutoML | ℹ️ AI Model<br>the nature of the AI in use<br>- model: ❌ Unlikely - usually an engineer<br>- docs: ✅ Moderate - drafts by AI<br>- summaries: ✅ Moderate - AI evaluation summaries | ℹ️ AI Model<br>the nature of the AI in use<br>- model: ❌ Unlikely - usually an engineer<br>- bias docs: identify & document demographic bias, group disparities, edge cases, failure modes<br>- docs: plain-language explanations, capabilities, risk assessment |
| ℹ️ AI Data<br>the data used to make the AI function<br>- lineage: transformation logs<br>- provenance: real, synthetic or transformed data?<br>- annotation: tags, bounding boxes, entity labels<br>- sentiment scores: for human review or direct use | ℹ️ AI Datasets<br>the data used to make the AI function<br>- descriptive: ✅ High - AI-generated descriptions / tags<br>- provenance: ❌ Moderate - should be human + AI help<br>- quality / labeling: ✅ High - may be machine-generated<br>- ethical/privacy: ✅ Moderate - AI drafts + expert review | ℹ️ AI Training Metadata<br>the data used to make the AI function<br>- descriptive: AI insights - content distribution analysis<br>- quality / labeling: quality assessments, deduplication & cleaning<br>- ethical/privacy: privacy risk assessments (PII)<br>- optimization: histories & reasoning, trade-offs, configs |
| ℹ️ AI Inference Metadata<br>generated during AI usage<br>- Confidence scores<br>- Predicted labels<br>- Explanation traces - often real-time during inference | ℹ️ AI Generated Content<br>generated during AI usage<br>- watermarks / tags: ❌ Low - algorithmic, not AI “trained”<br>- usage: logs and descriptions - ✅ Moderate - AI auto-summary / tag logs of generated outputs | ℹ️ AI Inference Metadata<br>generated during AI usage<br>- explainability: AI self-explanation - attention visualizations, feature importance, decisions, uncertainty quantification<br>- trend: AI systems becoming more self-documenting and self-evaluating |
| ℹ️ AI Feature Metadata<br>a CoPilot category<br>- feature: importance scores & statistical summaries<br>- synthetic labels: generated directly or via explainability tools (e.g., SHAP, LIME) | ℹ️ AI Governance & Compliance<br>a ChatGPT category<br>- transparency: ✅ Moderate–High - AI draft of fairness / audit results<br>- bias metrics: ✅ High - metrics explanations often AI-drafted | ℹ️ AI Content Metadata<br>a Claude category<br>- QA: AI doing QA on AI-generated content - accuracy, hallucination detection, safety & toxicity, content quality, guideline conformance<br>- tagging & classification: AI-generated tags - categories / topics, sentiment, language |
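
To ground a few of the categories above, here is a minimal sketch of an "AI Inference Metadata" record of the kind CoPilot and Claude describe: confidence scores, predicted labels, and a simple explanation trace. It assumes a Python stack; the `InferenceRecord` fields, the `log_inference` helper, the model/input identifiers, and the JSONL path are illustrative, not taken from any of the assistants' answers.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class InferenceRecord:
    """One prediction's metadata: confidence, label, and a simple explanation trace."""
    model_id: str
    input_id: str
    predicted_label: str
    confidence: float                                # e.g. probability of the predicted class
    explanation: dict = field(default_factory=dict)  # feature -> attribution score
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_inference(record: InferenceRecord, path: str = "inference_log.jsonl") -> None:
    """Append the record as one JSON line; a real system might send it to a metadata store."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")


# Example with made-up values.
log_inference(InferenceRecord(
    model_id="sentiment-clf-v3",
    input_id="doc-0042",
    predicted_label="positive",
    confidence=0.87,
    explanation={"great": 0.41, "slow": -0.12},
))
```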
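
ChatGPT's point that watermarks / tags for generated content are algorithmic rather than "trained" can be illustrated with a plain provenance tag: a content hash plus generation details, no learned watermark involved. The field names and identifiers below are assumptions for the sketch.

```python
import hashlib
import json
from datetime import datetime, timezone


def tag_generated_content(text: str, model_id: str, prompt_id: str) -> dict:
    """Build an algorithmic provenance tag for one piece of AI-generated content."""
    return {
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "generator": model_id,
        "prompt_id": prompt_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }


# Example with made-up identifiers.
tag = tag_generated_content("Draft release notes ...", "text-model-v1", "prompt-123")
print(json.dumps(tag, indent=2))
```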
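
CoPilot's "AI Feature Metadata" (feature importance scores, statistical summaries, SHAP/LIME) could be produced roughly as below. This sketch uses scikit-learn's permutation importance as a stand-in for SHAP/LIME-style attribution; the synthetic dataset and the `feature_metadata` layout are illustrative.

```python
import json

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data and model, standing in for a real training pipeline.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much shuffling each feature hurts the score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Feature-level metadata: importance scores plus simple statistical summaries.
feature_metadata = {
    f"feature_{i}": {
        "importance_mean": round(float(result.importances_mean[i]), 4),
        "importance_std": round(float(result.importances_std[i]), 4),
        "value_mean": round(float(X[:, i].mean()), 4),
        "value_std": round(float(X[:, i].std()), 4),
    }
    for i in range(X.shape[1])
}
print(json.dumps(feature_metadata, indent=2))
```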
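
For the bias-related items (Claude's bias docs, ChatGPT's bias metrics), one concrete number such metadata might record is the demographic parity difference: the gap in positive-prediction rates between groups. The toy predictions and group labels below are made up; in practice this would be computed per protected attribute and stored alongside the fairness / audit documentation the table mentions.

```python
import numpy as np


def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rate between the best- and worst-treated groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))


y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                   # model decisions
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])    # protected attribute

print(demographic_parity_difference(y_pred, group))  # 0.5 in this toy example
```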