====== ai_data_detection ======

  
  * **Missing Value Detection:**
    - Identifies any **NaN** or **Null** values present in the dataset.

  * **Duplicate Row Detection:**
    - Identifies duplicate rows present in the dataset.
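Both of these core checks map directly onto pandas one-liners. A minimal sketch of the idea (the sample DataFrame and stand-alone calls are illustrative, not the module's actual implementation):

```python
import pandas as pd

# Illustrative sample data: one missing value, one duplicated row.
data = pd.DataFrame({
    "a": [1.0, None, 3.0, 3.0],
    "b": ["x", "y", "z", "z"],
})

has_missing = data.isnull().values.any()   # True if any NaN/Null in the frame
has_duplicates = data.duplicated().any()   # True if any row repeats an earlier row

print(has_missing, has_duplicates)
```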
  
=== 1. Customizing Data Checks ===
You can extend the **DataDetection** class to add checks for other data quality metrics, such as outliers or invalid values.
  
**Example: Adding Outlier Detection**
<code python>
import numpy as np
  
if extended_detector.has_outliers(data):
    print("Outliers detected in the dataset.")
</code>
  
**Output:**
<code>
WARNING:root:Outliers detected in dataset.
Outliers detected in the dataset.
</code>
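Filled out end to end, the extension might look like the following self-contained sketch. The `ExtendedDataDetection` class body, the z-score strategy and threshold, and the minimal `DataDetection` base shown here are assumptions; only the `has_outliers` usage and the logged warning come from the example above.

```python
import logging
import numpy as np
import pandas as pd

class DataDetection:
    """Minimal stand-in for the module's base class (assumption)."""

class ExtendedDataDetection(DataDetection):
    def has_outliers(self, data, threshold=3.0):
        # Assumed strategy: flag any numeric value more than `threshold`
        # standard deviations away from its column mean (z-score check).
        numeric = data.select_dtypes(include=[np.number])
        z_scores = np.abs((numeric - numeric.mean()) / numeric.std())
        if (z_scores > threshold).to_numpy().any():
            logging.warning("Outliers detected in dataset.")
            return True
        return False

# Eleven values in a narrow range plus one extreme value.
data = pd.DataFrame({"values": [1, 2, 3, 2, 1, 3, 2, 1, 2, 3, 1, 1000]})
extended_detector = ExtendedDataDetection()
if extended_detector.has_outliers(data):
    print("Outliers detected in the dataset.")
```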
  
----
=== 2. Integrating DataDetection into a Pipeline ===
This module can be integrated as part of a Scikit-learn pipeline for preprocessing.
<code python>
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ('model', LogisticRegression())
])
</code>
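Because scikit-learn pipeline steps must behave like transformers, one way to wire a detection check into the pipeline is a `FunctionTransformer` that validates data before it reaches the model. Everything here besides `Pipeline` and `LogisticRegression` is an assumption: the `DataDetection` stand-in, its `has_missing_values` method, and the `check_data` helper are hypothetical.

```python
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.linear_model import LogisticRegression

class DataDetection:
    """Minimal stand-in for the module's class (assumption)."""
    def has_missing_values(self, data):
        return pd.DataFrame(data).isnull().values.any()

def check_data(X):
    # Hypothetical gate: fail fast instead of training on bad data.
    if DataDetection().has_missing_values(X):
        raise ValueError("Data quality issues detected.")
    return X

pipeline = Pipeline([
    ('data_check', FunctionTransformer(check_data)),
    ('model', LogisticRegression())
])

X = pd.DataFrame({"f1": [0.0, 1.0, 0.0, 1.0], "f2": [1.0, 0.0, 1.0, 0.0]})
y = [0, 1, 0, 1]
pipeline.fit(X, y)
print(pipeline.predict(X))
```

With this design, any call to `fit` or `predict` runs the quality check first, so bad data raises a `ValueError` before the model ever sees it.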
----
  
=== 3. Handling Large Datasets ===
For large datasets, optimize checks using chunk-based processing in Pandas:
<code python>
import pandas as pd

def has_issues_in_chunks(file_path, chunk_size=1000):
    detector = DataDetection()
    # Read the file in chunks so the whole dataset never has to fit in memory.
    for chunk in pd.read_csv(file_path, chunksize=chunk_size):
        # `has_issues` is assumed to run the detector's checks on one chunk.
        if detector.has_issues(chunk):
            return True
    return False
</code>
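To make the chunked check concrete, here is a self-contained run against a small temporary CSV. The `DataDetection` stand-in and its `has_issues` method are assumptions standing in for the module's real checks:

```python
import os
import tempfile
import pandas as pd

class DataDetection:
    """Minimal stand-in for the module's class (assumption)."""
    def has_issues(self, data):
        # Assumed composite check: missing values or duplicate rows.
        return data.isnull().values.any() or data.duplicated().any()

def has_issues_in_chunks(file_path, chunk_size=1000):
    detector = DataDetection()
    for chunk in pd.read_csv(file_path, chunksize=chunk_size):
        if detector.has_issues(chunk):
            return True
    return False

# Write a small CSV with one missing value, then scan it two rows at a time.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("a,b\n1,x\n2,y\n,z\n4,w\n")
    path = f.name

print(has_issues_in_chunks(path, chunk_size=2))
os.remove(path)
```

One caveat of per-chunk checks: duplicates that span two chunks will not be caught, since each chunk is examined in isolation.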
  
----
===== Best Practices =====
1. **Use Incremental Checks:** Perform quality checks at different stages of the pipeline (e.g., after loading raw data and after preprocessing steps).

2. **Automate Logging:** Set up centralized logging for tracking data issues across multiple datasets.

3. **Adapt Custom Methods:** Extend the module for domain-specific checks, such as outlier detection, range checks, or invalid category detection.

4. **Handle Issues Early:** Address identified data issues before training machine learning models.
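For practice 2, a minimal centralized-logging sketch; the logger name, the message format, the `data_quality.log` path, and the `customers.csv` dataset name are all assumptions:

```python
import logging

# Shared logger that every dataset check writes to (name is an assumption).
logger = logging.getLogger("data_quality")
logger.setLevel(logging.WARNING)

handler = logging.FileHandler("data_quality.log")  # hypothetical central log file
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)
logger.addHandler(handler)

# Each pipeline stage logs issues with enough context to trace the dataset.
logger.warning("Missing values detected in dataset %s", "customers.csv")
```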
  
  
**Example: Adding Invalid Category Detection**
<code python>
def has_invalid_categories(data, valid_categories):
    # `valid_categories` is assumed to map column names to allowed values.
    for col in data.select_dtypes(include=['object']):
        if not data[col].isin(valid_categories.get(col, [])).all():
            return True
    return False
</code>
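A quick self-contained run of the check above. The function is restated so the snippet runs on its own; the sample data and the dict-of-allowed-values shape of `valid_categories` are assumptions:

```python
import pandas as pd

def has_invalid_categories(data, valid_categories):
    # `valid_categories` is assumed to map column names to allowed values.
    for col in data.select_dtypes(include=['object']):
        if not data[col].isin(valid_categories.get(col, [])).all():
            return True
    return False

# "purple" is not in the allowed set, so the check should flag this frame.
data = pd.DataFrame({"color": ["red", "green", "purple"], "size": [1, 2, 3]})
valid = {"color": ["red", "green", "blue"]}

print(has_invalid_categories(data, valid))
```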
  
----
ai_data_detection.1748185402.txt.gz · Last modified: 2025/05/25 15:03 by eagleeyenebula