====== AI Bias Auditor ======

The **AI Bias Auditor** is a Python-based framework that identifies and evaluates potential biases in machine learning (ML) models. It provides a structured mechanism for analyzing protected features (e.g., gender, race) and their relationship to model performance metrics, such as prediction accuracy. By quantifying fairness gaps and classifying outcomes as biased or unbiased, the tool supports responsible and ethical AI development.

{{youtube>fruyhZUDY54?large}}

----

===== Overview =====

  
**Example Bias Report**:
<code json>
{
    "gender": {
        "group_stats": {"male": 0.873333, "female": 0.816667},
        "fairness_gap": 0.056666,
        "is_biased": false
    },
    "race": {
        "group_stats": {"white": 0.875, "black": 0.75, "asian": 0.91},
        "fairness_gap": 0.16,
        "is_biased": true
    }
}
</code>
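Because the report uses only plain dicts and scalar values, it serializes cleanly to JSON for logging or dashboards. A minimal sketch, reusing the gender entry from the sample report above:

```python
import json

# Round-trip the example report through JSON; keys and values are
# taken from the sample report shown above
report = {
    "gender": {
        "group_stats": {"male": 0.873333, "female": 0.816667},
        "fairness_gap": 0.056666,
        "is_biased": False
    }
}
payload = json.dumps(report, indent=4)
restored = json.loads(payload)
print(restored["gender"]["is_biased"])  # False
```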
  
===== Implementation Details =====

**Class Constructor**:
<code python>
from typing import List

class BiasAuditor:
    def __init__(self, protected_features: List[str], outcome_feature: str):
        """
        Initialize the auditor.

        :param protected_features: Protected features to audit for bias (e.g., gender, race).
        :param outcome_feature: Target feature measuring fairness (e.g., accuracy).
        """
        self.protected_features = protected_features
        self.outcome_feature = outcome_feature
</code>
  
**Core Method: evaluate_bias()**
<code python>
import pandas as pd

def evaluate_bias(self, data: pd.DataFrame) -> dict:
    """
    Evaluate bias for each protected feature.

    :param data: DataFrame containing the protected features and the outcome feature.
    :return: Report mapping each protected feature to its group statistics,
             fairness gap, and bias classification.
    """
    report = {}
    for feature in self.protected_features:
        group_stats = data.groupby(feature)[self.outcome_feature].mean()
        fairness_gap = group_stats.max() - group_stats.min()
        report[feature] = {
            "group_stats": group_stats.to_dict(),
            "fairness_gap": fairness_gap,
            "is_biased": fairness_gap > 0.1
        }
    return report
</code>
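The grouping-and-gap logic at the heart of evaluate_bias() can be checked in isolation. A minimal sketch with a tiny hand-made DataFrame (values chosen so the arithmetic is exact):

```python
import pandas as pd

# Tiny dataset: two groups with different mean outcomes
data = pd.DataFrame({
    "gender": ["male", "male", "female", "female"],
    "prediction_accuracy": [1.0, 0.5, 0.5, 0.5],
})

# Same per-feature computation evaluate_bias() performs
group_stats = data.groupby("gender")["prediction_accuracy"].mean()
fairness_gap = group_stats.max() - group_stats.min()

print(group_stats.to_dict())  # {'female': 0.5, 'male': 0.75}
print(fairness_gap > 0.1)     # True: gap of 0.25 exceeds the 0.1 threshold
```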
  
===== Examples =====

==== 1. Basic Example ====
  
**Code Example**:
<code python>
import pandas as pd

# `data` is a DataFrame with one row per prediction and columns
# "gender", "race", and "prediction_accuracy" (construction not shown here)
auditor = BiasAuditor(protected_features=["gender", "race"], outcome_feature="prediction_accuracy")
bias_report = auditor.evaluate_bias(data)

print("Bias Report:", bias_report)
</code>
  
**Output**:
<code>
Bias Report: { 'gender': { 'group_stats': {'male': 0.873333, 'female': 0.816667}, 'fairness_gap': 0.056666, 'is_biased': False }, 'race': { 'group_stats': {'white': 0.875, 'black': 0.75, 'asian': 0.91}, 'fairness_gap': 0.16, 'is_biased': True } }
</code>
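Since the report is a plain dictionary, downstream checks can be scripted directly. A minimal sketch, assuming a report shaped like the output above:

```python
# Bias report shaped like the example output above
report = {
    "gender": {"group_stats": {"male": 0.873333, "female": 0.816667},
               "fairness_gap": 0.056666, "is_biased": False},
    "race": {"group_stats": {"white": 0.875, "black": 0.75, "asian": 0.91},
             "fairness_gap": 0.16, "is_biased": True},
}

# Collect the features flagged as biased, largest fairness gap first
flagged = sorted(
    (f for f, d in report.items() if d["is_biased"]),
    key=lambda f: report[f]["fairness_gap"],
    reverse=True,
)
print("Flagged features:", flagged)  # Flagged features: ['race']
```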
  
==== 2. Advanced Example: Custom Bias Threshold ====

Modify the bias threshold using a derived class:
<code python>
class CustomBiasAuditor(BiasAuditor):
    def __init__(self, protected_features, outcome_feature, bias_threshold=0.1):
        super().__init__(protected_features, outcome_feature)
        self.bias_threshold = bias_threshold

    def evaluate_bias(self, data):
        report = super().evaluate_bias(data)
        for feature, details in report.items():
            details["is_biased"] = details["fairness_gap"] > self.bias_threshold
        return report

# Custom threshold example
auditor = CustomBiasAuditor(protected_features=["gender"], outcome_feature="prediction_accuracy", bias_threshold=0.05)
bias_report = auditor.evaluate_bias(data)

print("Bias Report with Custom Threshold:", bias_report)
</code>
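Lowering the threshold can flip borderline results. Taking the gender fairness gap from the basic example's output (about 0.0567), a quick check:

```python
# Gender fairness gap from the basic example's output
fairness_gap = 0.873333 - 0.816667

# The default 0.1 threshold does not flag it; a stricter 0.05 threshold does
for threshold in (0.1, 0.05):
    print(f"threshold={threshold}: biased={fairness_gap > threshold}")
# threshold=0.1: biased=False
# threshold=0.05: biased=True
```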
  
==== 3. Visualizing Bias Using Matplotlib ====

Add visual insights by plotting group statistics and fairness gaps:
<code python>
import matplotlib.pyplot as plt

def plot_bias_report(report):
    for feature, details in report.items():
        groups = list(details["group_stats"].keys())
        values = list(details["group_stats"].values())
        plt.figure()
        plt.bar(groups, values)
        plt.title(f"{feature}: fairness gap = {details['fairness_gap']:.3f}")
        plt.ylabel("Mean outcome")
        plt.show()

plot_bias_report(bias_report)
</code>
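In headless environments (e.g., CI pipelines), the same charts can be written to files instead of displayed. A minimal sketch using matplotlib's Agg backend; the report values and output file names are illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for headless use
import matplotlib.pyplot as plt

# One bar chart per audited feature, saved to disk (illustrative report)
report = {"race": {"group_stats": {"white": 0.875, "black": 0.75, "asian": 0.91},
                   "fairness_gap": 0.16, "is_biased": True}}
for feature, details in report.items():
    fig, ax = plt.subplots()
    ax.bar(list(details["group_stats"]), list(details["group_stats"].values()))
    ax.set_title(f"{feature}: fairness gap = {details['fairness_gap']}")
    fig.savefig(f"bias_{feature}.png")
    plt.close(fig)
```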
  
===== Advanced Usage =====

==== 1. Auditing Multiple Datasets ====

Use a loop to audit multiple datasets efficiently:
<code python>
data_files = ["dataset1.csv", "dataset2.csv", "dataset3.csv"]

for file in data_files:
    data = pd.read_csv(file)
    bias_report = auditor.evaluate_bias(data)
    print(f"Bias Report for {file}:\n", bias_report)
</code>
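Per-dataset reports can also be collected into a single summary table for side-by-side comparison. A minimal sketch, assuming report dicts shaped as in the examples above (the file names and gap values here are illustrative):

```python
import pandas as pd

# Illustrative per-dataset bias reports keyed by file name
reports = {
    "dataset1.csv": {"gender": {"fairness_gap": 0.06, "is_biased": False}},
    "dataset2.csv": {"gender": {"fairness_gap": 0.15, "is_biased": True}},
}

# Flatten into one row per (dataset, feature) pair
summary = pd.DataFrame(
    [(ds, feat, d["fairness_gap"], d["is_biased"])
     for ds, rep in reports.items() for feat, d in rep.items()],
    columns=["dataset", "feature", "fairness_gap", "is_biased"],
)
print(summary)
```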
  
==== 2. Integration with ML Pipelines ====

Audit machine learning models during validation:
<code python>
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

# Simulated dataset (income and race values are illustrative)
data = pd.DataFrame({
    "gender": ["male", "female", "male", "female", "male", "female"],
    "race": ["white", "black", "asian", "white", "black", "asian"],
    "income": [52000, 48000, 61000, 55000, 45000, 58000],
    "loan_approval": [1, 0, 0, 1, 1, 1]
})

X = data[["gender", "income", "race"]]
y = data["loan_approval"]

# Model training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
X_train_enc = pd.get_dummies(X_train)
# Align test dummies with the training columns (unseen categories become 0)
X_test_enc = pd.get_dummies(X_test).reindex(columns=X_train_enc.columns, fill_value=0)
clf = RandomForestClassifier()
clf.fit(X_train_enc, y_train)

# Evaluate bias on predictions
X_test = X_test.copy()
X_test["prediction_accuracy"] = clf.predict(X_test_enc)
auditor = BiasAuditor(protected_features=["gender", "race"], outcome_feature="prediction_accuracy")
bias_report = auditor.evaluate_bias(X_test)

print("Bias Report:", bias_report)
</code>
  
===== Applications =====

**3. Business Insights**:
  Detect unintended biases in decision-making systems, such as loan approvals or hiring tools.
  
===== Best Practices =====

2. **Custom Thresholds**: Adjust fairness thresholds to fit domain-specific fairness guidelines.
3. **Visualize Results**: Use visualization tools to make bias reports more interpretable.

===== Conclusion =====

The **AI Bias Auditor** empowers users to evaluate the fairness of ML models in a structured and interpretable way. Its customizable threshold, extensibility, and integration into ML pipelines make it ideal for building responsible AI systems. By identifying biases early in the development cycle, this tool promotes transparency and accountability in AI.
ai_bias_auditor.1748102091.txt.gz · Last modified: 2025/05/24 15:54 by eagleeyenebula