Cybersecurity: Adversarial Resistance, Explainability, and Bias Detection for AI/ML Models
These threats are catalogued in the OWASP Machine Learning Security Top 10 (2023):
ML01:2023 Input Manipulation Attack
ML02:2023 Data Poisoning Attack
ML03:2023 Model Inversion Attack
ML04:2023 Membership Inference Attack
ML05:2023 Model Theft
ML06:2023 AI Supply Chain Attacks
ML07:2023 Transfer Learning Attack
ML08:2023 Model Skewing
ML09:2023 Output Integrity Attack
ML10:2023 Model Poisoning
AI Model Risk Management starts by integrating your models with Tumeryk AttackGuard to keep them SAFE:
Secure
Accurate
Fair
Explainable
Fairness assessment is essential when evaluating machine learning models, especially those that use or correlate with sensitive attributes such as race, gender, or age. Selection plots and count plots show how a model's predictions are distributed across subgroups, making it easier to spot disparate impact and ensure equitable outcomes.
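As a minimal sketch of the subgroup checks described above, the snippet below computes count-plot data and per-group selection rates with pandas. The data and column names are illustrative, not a fixed schema:

```python
# Sketch: selection-rate and count checks by subgroup (synthetic data;
# "group" and "predicted" are illustrative column names, not a fixed schema).
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1,   0,   1,   0,   0,   1,   0,   1],
})

# Count-plot data: how many predictions of each class per subgroup.
counts = df.groupby(["group", "predicted"]).size().unstack(fill_value=0)
print(counts)

# Selection rate: fraction of positive predictions per subgroup; large gaps
# between groups flag potential disparate impact.
selection_rate = df.groupby("group")["predicted"].mean()
print(selection_rate)
```

The same two aggregations feed directly into bar charts (count plots) or a disparate-impact ratio between the highest and lowest selection rates.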
KS (Kolmogorov-Smirnov) curves assess the separation between the predicted-probability distributions of the positive and negative classes. The KS statistic is the maximum gap between the two cumulative distributions; the larger the gap, the better the model distinguishes the classes.
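A quick way to compute the KS statistic on model scores is `scipy.stats.ks_2samp`, sketched here on simulated scores (the beta-distributed scores stand in for a real model's outputs):

```python
# Sketch: KS statistic between predicted probabilities of the two classes
# (simulated scores for illustration; a real model's scores plug in the same way).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Simulated scores: positives skew high, negatives skew low.
pos_scores = rng.beta(5, 2, size=1000)
neg_scores = rng.beta(2, 5, size=1000)

# KS statistic = maximum vertical gap between the two empirical CDFs;
# values near 1 indicate strong class separation, near 0 almost none.
ks_stat, p_value = ks_2samp(pos_scores, neg_scores)
print(f"KS statistic: {ks_stat:.3f}")
```

Plotting the two empirical CDFs against the score axis yields the KS curve itself; the statistic is the widest vertical distance between them.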
Feature importance curves quantify the contribution of each input feature to a model's predictions, revealing which features drive its decisions and which add little beyond noise.
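One model-agnostic way to produce such a ranking is permutation importance, sketched below with scikit-learn on a synthetic dataset (the random-forest model and dataset are assumptions for illustration):

```python
# Sketch: ranking features by permutation importance with scikit-learn
# (synthetic data and a random forest chosen purely for illustration).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: the drop in score when one feature's values
# are shuffled, breaking its relationship with the target.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```

Sorting the mean importances, as above, gives the data behind a feature importance curve or bar chart.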
ROC (Receiver Operating Characteristic) curves visualize the trade-off between True Positive Rate (sensitivity) and False Positive Rate (1-specificity) across different classification thresholds. These curves help assess a model's ability to discriminate between positive and negative classes.
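The trade-off described above can be computed with scikit-learn's `roc_curve`, which returns one (FPR, TPR) pair per candidate threshold. The logistic-regression model and synthetic dataset below are illustrative assumptions; any classifier exposing `predict_proba` works the same way:

```python
# Sketch: computing ROC points and AUC with scikit-learn on a toy dataset
# (model and data are illustrative; substitute your own classifier).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

# One (FPR, TPR) point per threshold; plotting tpr against fpr draws the curve.
fpr, tpr, thresholds = roc_curve(y_te, probs)
auc = roc_auc_score(y_te, probs)
print(f"AUC: {auc:.3f}")
```

The area under this curve (AUC) summarizes discrimination in a single number: 0.5 is chance, 1.0 is perfect separation.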