====== AI Performance Profiler ======
===== Key Features =====
| - | 1. **Execution Time Profiling (`profile_stage`)** | + | 1. **Execution Time Profiling |
| * Dynamically analyze the time spent by functions or code blocks. | * Dynamically analyze the time spent by functions or code blocks. | ||
| * Logs all execution times to a log file for a clear historical performance record. | * Logs all execution times to a log file for a clear historical performance record. | ||
| - | 2. **Caching (`cache_step`)** | + | 2. **Caching |
| * Built-in caching system to store computationally expensive function results. | * Built-in caching system to store computationally expensive function results. | ||
| - | * Uses `functools.lru_cache` with a configurable cache size for efficient memory utilization. | + | * Uses **functools.lru_cache** with a configurable cache size for efficient memory utilization. |
| 3. **Integrated Logging** | 3. **Integrated Logging** | ||
| - | * Automatically logs key performance metrics, such as execution time, to a file (`performance.log` by default). | + | * Automatically logs key performance metrics, such as execution time, to a file (**performance.log** by default). |
| 4. **Modularity** | 4. **Modularity** | ||
| Line 37: | Line 37: | ||
| 5. **Lightweight and Extensible** | 5. **Lightweight and Extensible** | ||
| - | * Minimal overhead compared to traditional profiling tools like cProfile or line_profiler. | + | * Minimal overhead compared to traditional profiling tools like **cProfile** or **line_profiler**. |
===== How It Works =====
1. **Initialization:**
  * Instantiate the **PerformanceProfiler** class. Optionally specify the log file's name.
<code python>
from ai_performance_profiler import PerformanceProfiler

profiler = PerformanceProfiler(log_file="performance.log")
</code>
2. **Profiling a Stage:**
  * Use **profile_stage** to wrap critical tasks, functions, or pipeline stages, while passing the required arguments.
<code python>
result = profiler.profile_stage("Stage Name", some_function, arg1, arg2)
</code>
3. **Caching Expensive Steps:**
  * Use **cache_step** as a decorator on computationally heavy functions to automatically cache results for repeated calls.
<code python>
@profiler.cache_step
def expensive_function(x):
    # Heavy computation here; repeated calls with the same x hit the cache.
    return x ** 2
</code>
4. **Analyze Logs:**
  * Review the generated log file (**performance.log** or user-specified file) to track profiling results.
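As a rough sketch, the log file can also be summarized programmatically. The line pattern below is an assumption and should be adjusted to whatever your log actually contains:
<code python>
import re

# Hypothetical log-line format; adjust the pattern to match your performance.log.
LINE_RE = re.compile(r"Stage '(?P<stage>[^']+)' executed in (?P<secs>[\d.]+) seconds")

def summarize_log(lines):
    """Total the recorded seconds per stage name."""
    totals = {}
    for line in lines:
        match = LINE_RE.search(line)
        if match:
            stage = match.group("stage")
            totals[stage] = totals.get(stage, 0.0) + float(match.group("secs"))
    return totals

sample = [
    "INFO: Stage 'Data Preprocessing' executed in 1.5 seconds",
    "INFO: Stage 'Data Preprocessing' executed in 0.5 seconds",
    "INFO: Stage 'Model Training' executed in 2.0 seconds",
]
print(summarize_log(sample))
</code>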
Below are advanced usage scenarios of the Performance Profiler:
==== Example 1: Profiling Multiple Pipeline Stages ====
Profile a workflow consisting of multiple pipeline stages, such as data preprocessing, model training, and model evaluation.
| - | + | < | |
| - | ```python | + | python |
| from ai_performance_profiler import PerformanceProfiler | from ai_performance_profiler import PerformanceProfiler | ||
| Line 108: | Line 101: | ||
| time.sleep(1) | time.sleep(1) | ||
| return " | return " | ||
| - | + | </ | |
| - | # Profiling each stage | + | **Profiling each stage** |
| + | < | ||
| data = profiler.profile_stage(" | data = profiler.profile_stage(" | ||
| model = profiler.profile_stage(" | model = profiler.profile_stage(" | ||
| metrics = profiler.profile_stage(" | metrics = profiler.profile_stage(" | ||
| - | ``` | + | </ |
| **Log Output:** | **Log Output:** | ||
| - | ``` | + | < |
| INFO: Stage ' | INFO: Stage ' | ||
| - | ``` | + | </ |
| - | + | ||
| - | --- | + | |
==== Example 2: Implementing Custom Caching Logic ====
Use caching to optimize a computation-heavy function like determining Fibonacci numbers.
<code python>
@profiler.cache_step
def calculate_fibonacci(n):
    if n <= 1:
        return n
    return calculate_fibonacci(n - 1) + calculate_fibonacci(n - 2)
</code>
**First call will compute the result**
<code python>
result = calculate_fibonacci(35)
</code>
**Subsequent calls with the same input will use the cache**
<code python>
cached_result = calculate_fibonacci(35)
</code>
| - | + | ||
| - | --- | + | |
==== Example 3: Profiling with Dynamic Arguments ====
Profile a function with dynamic arguments using **kwargs**.
<code python>
def dynamic_function(a, b, **kwargs):
    time.sleep(1)
    return a + b + sum(kwargs.values())
</code>
**Profiling with keyword arguments**
<code python>
result = profiler.profile_stage("Dynamic Function", dynamic_function, 1, 2, x=3, y=4)
</code>
| - | + | ||
| - | --- | + | |
==== Example 4: Combining Profiling and Caching ====
Simultaneously cache and profile a function.
<code python>
@profiler.cache_step
def simulate_heavy_task(x):
    time.sleep(1)
    return x * 2
</code>
**Profile the cached function**
<code python>
result = profiler.profile_stage("Heavy Task", simulate_heavy_task, 21)
</code>
| - | + | ||
| - | --- | + | |
===== Extending the Framework =====
The Performance Profiler framework can be extended and customized for broader functionality:
| - | 1. **Disk-Based Caching:** Utilize | + | 1. **Disk-Based Caching:** Utilize |
| - | | + | < |
| - | ```python | + | |
| from diskcache import Cache | from diskcache import Cache | ||
| Line 188: | Line 176: | ||
| | | ||
| | | ||
| - | ``` | + | </ |
2. **Integration with Observability Platforms:**
  * Export logs in **JSON** format and feed metrics into monitoring systems like **Grafana** or **Prometheus**.
| - | 3. **Enhanced Logging:** Implement advanced logging techniques, such as log rotation and JSON-based formatted logs. | + | 3. **Enhanced Logging:** Implement advanced logging techniques, such as log rotation and **JSON-based** formatted logs. |
| - | + | < | |
| - | ```python | + | |
| | | ||
| from logging.handlers import RotatingFileHandler | from logging.handlers import RotatingFileHandler | ||
| Line 202: | Line 190: | ||
| | | ||
| | | ||
| - | ``` | + | </ |
| - | + | ||
| - | --- | + | |
===== Best Practices =====
| - | 1. Focus profiling on **critical pipeline sections** to avoid introducing unnecessary profiling overhead. | + | 1. Focus profiling on **critical pipeline sections** to avoid introducing unnecessary profiling overhead. |
| - | 2. Use `cache_step` exclusively for deterministic functions where input-output relationships don’t change. | + | |
| - | 3. Limit `lru_cache` size with the `maxsize` parameter to ensure memory-efficient caching. | + | 2. Use **cache_step** exclusively for deterministic functions where input-output relationships don’t change. |
| - | --- | + | 3. Limit **lru_cache** size with the **maxsize** parameter to ensure memory-efficient caching. |
===== Conclusion =====
The **PerformanceProfiler** is a lightweight, extensible tool for profiling execution time and caching expensive computations across Python workflows.

Designed with flexibility in mind, the **PerformanceProfiler** can be easily integrated into diverse environments, from standalone scripts to full AI pipelines.