## Accuracy

**Definition**: The proportion of total correct predictions.

### Confusion Matrix

|                     | Predicted Positive  | Predicted Negative   |
|---------------------|---------------------|----------------------|
| **Actual Positive** | True Positive (TP)  | False Negative (FN)  |
| **Actual Negative** | False Positive (FP) | True Negative (TN)   |

### Formula

> Accuracy = (TP + TN) / (TP + FP + FN + TN)

### Key Points

| Feature         | Explanation                                      |
|-----------------|--------------------------------------------------|
| **Type**        | Overall correctness                              |
| **Range**       | 0.0 ~ 1.0                                        |
| **Ideal Value** | 1.0                                              |
| **Limitation**  | Misleading on imbalanced datasets                |
| **Example**     | If 95% of the data is negative, predicting all negative gives 95% accuracy (but it's a bad model) |

---

## F1-Score

**Definition**: The harmonic mean of **precision** and **recall**, giving a balanced metric when class distribution is uneven.

### Related Metrics

| Metric    | Formula                                          | What It Means                                          |
|-----------|--------------------------------------------------|--------------------------------------------------------|
| Precision | TP / (TP + FP)                                   | Out of all predicted positives, how many are correct?  |
| Recall    | TP / (TP + FN)                                   | Out of all actual positives, how many did we catch?    |
| F1-Score  | 2 × (Precision × Recall) / (Precision + Recall)  | Balances precision and recall                          |

### Key Points

| Feature         | Explanation                                                        |
|-----------------|--------------------------------------------------------------------|
| **Type**        | Balance between precision & recall                                 |
| **Range**       | 0.0 ~ 1.0                                                          |
| **Ideal Value** | 1.0                                                                |
| **Use Case**    | Imbalanced classification tasks (e.g. anomaly or disease detection) |
| **Limitation**  | Ignores true negatives                                             |

---

## AUC (Area Under ROC Curve)

**Definition**: Measures the model's ability to distinguish between positive and negative classes across all possible thresholds.

### ROC Curve Explanation

| Term         | Formula                  | Interpretation                        |
|--------------|--------------------------|---------------------------------------|
| TPR (Recall) | TP / (TP + FN)           | True Positive Rate (sensitivity)      |
| FPR          | FP / (FP + TN)           | False Positive Rate                   |
| AUC          | Area under the ROC curve | 1.0 means perfect, 0.5 means random   |

### Key Points

| Feature         | Explanation                                      |
|-----------------|--------------------------------------------------|
| **Type**        | Threshold-independent probability ranking        |
| **Range**       | 0.0 ~ 1.0                                        |
| **Ideal Value** | 1.0                                              |
| **Use Case**    | Probabilistic classifiers, risk scoring models   |
| **Benefit**     | Does not require picking a threshold             |

---

## Summary Comparison

| Metric   | Focus Area           | Ideal Value | Threshold-Dependent | Sensitive to Imbalance? |
|----------|----------------------|-------------|---------------------|-------------------------|
| Accuracy | Overall correctness  | 1.0         | Yes                 | Yes                     |
| F1-Score | Precision + Recall   | 1.0         | Yes                 | No                      |
| AUC      | Class ranking        | 1.0         | No                  | No                      |

---

## 📝 When to Use What?

| Scenario                                  | Recommended Metric(s)            |
|-------------------------------------------|----------------------------------|
| Balanced dataset                          | Accuracy                         |
| Imbalanced dataset (e.g. 1:10 ratio)      | F1-Score, AUC                    |
| Need to rank predictions                  | AUC                              |
| Medical diagnosis                         | F1-Score (catch positives), AUC  |
| Binary classification with class overlap  | AUC, F1-Score                    |
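
---

## Quick Sketch in Code

For concreteness, here is a minimal sketch of how all of the metrics above can be computed with scikit-learn. The label and score arrays are made-up illustration data, and the 0.5 cut-off is an assumed threshold, not something prescribed by the metrics themselves.

```python
# Minimal sketch: accuracy, precision, recall, F1 and AUC with scikit-learn.
# y_true / y_prob below are hypothetical toy data for illustration only.
from sklearn.metrics import (
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
    roc_auc_score,
    confusion_matrix,
)

# Ground-truth labels and a model's predicted probabilities (hypothetical).
y_true = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
y_prob = [0.1, 0.2, 0.3, 0.35, 0.4, 0.6, 0.7, 0.55, 0.8, 0.9]

# Accuracy, precision, recall and F1 need hard predictions,
# so a threshold (here an assumed 0.5) must be chosen first.
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]

# sklearn's confusion_matrix orders cells as [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP, FP, FN, TN:", tp, fp, fn, tn)

print("Accuracy :", accuracy_score(y_true, y_pred))   # (TP + TN) / total
print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1-Score :", f1_score(y_true, y_pred))         # harmonic mean of precision & recall

# AUC is threshold-independent: it ranks the raw scores, not the hard predictions.
print("AUC      :", roc_auc_score(y_true, y_prob))
```

Note how AUC is computed directly from `y_prob`, while the other metrics first require converting scores into hard predictions at a chosen threshold, which is exactly the "Threshold-Dependent" distinction in the summary table above.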