====== ai_performance_profiler ======
===== Key Features =====
1. **Execution Time Profiling (profile_stage)**
  * Dynamically analyze the time spent by functions or code blocks.
  * Logs all execution times to a log file for a clear historical performance record.
2. **Caching (cache_step)**
  * Built-in caching system to store computationally expensive function results.
  * Uses **functools.lru_cache** with a configurable cache size for efficient memory utilization.
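The sketch below is only an illustration of how the two features above could fit together. The method and parameter names (**PerformanceProfiler**, **profile_stage**, **cache_step**, **log_file**, **maxsize**) follow the descriptions on this page, but the actual implementation may differ.
<code python>
import functools
import logging
import time


class PerformanceProfiler:
    """Illustrative profiler combining stage timing and lru_cache-based caching."""

    def __init__(self, log_file="performance.log"):
        # Write timing records to a dedicated log file for a historical record.
        self.logger = logging.getLogger("PerformanceProfiler")
        self.logger.setLevel(logging.INFO)
        handler = logging.FileHandler(log_file)
        handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
        self.logger.addHandler(handler)

    def profile_stage(self, func, *args, **kwargs):
        # Run the stage, measure wall-clock time, and log it.
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        self.logger.info("Stage %s took %.4f s", func.__name__, elapsed)
        return result

    def cache_step(self, func=None, *, maxsize=128):
        # Decorator for deterministic functions; results are memoized via lru_cache.
        if func is None:
            return lambda f: self.cache_step(f, maxsize=maxsize)
        return functools.lru_cache(maxsize=maxsize)(func)
</code>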
1. **Initialization:**
  * Instantiate the **PerformanceProfiler** class. Optionally specify the log file's name.
| < | < | ||
| | | ||
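A minimal initialization sketch; the module path and the **log_file** parameter name are assumptions for illustration:
<code python>
# Module path and parameter name are illustrative assumptions.
from ai_performance_profiler import PerformanceProfiler

# Optionally point the profiler at a custom log file for the performance record.
profiler = PerformanceProfiler(log_file="pipeline_performance.log")
</code>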
2. **Profiling a Stage:**
  * Use **profile_stage** to wrap critical tasks, functions, or pipeline stages, while passing the required arguments.
| < | < | ||
| | | ||
| Line 67: | Line 67: | ||
| </ | </ | ||
3. **Caching Expensive Steps:**
  * Use **cache_step** as a decorator on computationally heavy functions to automatically cache results for repeated calls.
| < | < | ||
| | | ||
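A decorator-style sketch, assuming **cache_step** can be applied directly to a deterministic function:
<code python>
# Assumes the same profiler instance from the initialization step.
@profiler.cache_step
def fibonacci(n):
    # Deterministic and computation-heavy: repeated calls are served from the cache.
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)

first = fibonacci(30)   # computed once
second = fibonacci(30)  # returned from the lru_cache
</code>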
===== Best Practices =====
1. Focus profiling on **critical pipeline sections** to avoid introducing unnecessary profiling overhead.

2. Use **cache_step** exclusively for deterministic functions where input-output relationships don't change.

3. Limit **lru_cache** size with the **maxsize** parameter to ensure memory-efficient caching, as in the example below.
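For practice 3, the standard-library behaviour is shown with **functools.lru_cache** directly; whether **cache_step** forwards a **maxsize** argument in the same way is an assumption:
<code python>
import functools

# Keep at most 256 results; least recently used entries are evicted automatically.
@functools.lru_cache(maxsize=256)
def normalize(text: str) -> str:
    return " ".join(text.lower().split())
</code>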