The improved KAI test runner produces structured logs designed to make test output analysis easier.
Each module has its own log file:
- Logs/core_diagnostic.log - Core tests
- Logs/pi_diagnostic.log - Pi language tests
- Logs/rho_diagnostic.log - Rho language tests
- Logs/tau_diagnostic.log - Tau language tests
- Logs/network_diagnostic.log - Network tests
- Logs/general_diagnostic.log - General tests
- Logs/summary.log - Test suite summary
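Since every module has its own file, a quick shell loop can sweep all of them at once. This is just a sketch assuming the Logs/ layout above; scan_logs is an illustrative helper, not part of the runner:

```shell
#!/bin/sh
# Print the number of FAIL occurrences in each module's diagnostic log.
scan_logs() {
    for f in "${1:-Logs}"/*_diagnostic.log; do
        [ -f "$f" ] || continue              # skip if no logs exist yet
        printf '%s: %s FAIL lines\n' "$f" "$(grep -c FAIL "$f")"
    done
}
```

Run scan_logs after a test run to see at a glance which module logs mention failures.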
Each log file follows a consistent format:
[Test Module Name] Diagnostic Log
Run Date: [Timestamp]
MODULE: [MODULE_NAME]
----------------------------------------
## TEST: [Test Binary Path] ##
## MODULE: [MODULE_NAME] ##
## Begin Output ##
[Test Output Content]
## End Output ##
----------------------------------------
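For reference, a runner could emit an entry in this format with a helper like the following; log_test_run is an illustrative name, not a function the suite actually defines:

```shell
#!/bin/sh
# Append one test's captured output to a module log in the format above.
# Usage: log_test_run <log file> <module> <test binary> <captured output file>
log_test_run() {
    {
        echo "## TEST: $3 ##"
        echo "## MODULE: $2 ##"
        echo "## Begin Output ##"
        cat "$4"
        echo "## End Output ##"
        echo "----------------------------------------"
    } >> "$1"
}
```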
The summary log (Logs/summary.log) provides a comprehensive overview of the test run:
Test Suite Summary - [Timestamp]
====================================================
Status: [SUCCESS|FAILURE]
TEST COUNTS
----------------------------------------------------
Total tests: [Total Test Count]
Passed tests: [Passed Test Count] ([Pass Percentage]%)
Failed tests: [Failed Test Count]
MODULE DETAILS
----------------------------------------------------
Core: [Core Test Count] tests [PASS|FAIL]
Rho: [Rho Test Count] tests [PASS|FAIL]
Pi: [Pi Test Count] tests [PASS|FAIL]
Tau: [Tau Test Count] tests [PASS|FAIL]
Network: [Network Test Count] tests [PASS|FAIL]
LOG FILES
----------------------------------------------------
Core: Logs/core_diagnostic.log
Rho: Logs/rho_diagnostic.log
Pi: Logs/pi_diagnostic.log
Tau: Logs/tau_diagnostic.log
Network: Logs/network_diagnostic.log
====================================================
Generated: [Timestamp]
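Because the summary always carries a single Status line, scripts (CI jobs in particular) can gate on it directly. A sketch, where summary_ok is an illustrative helper rather than part of the runner:

```shell
#!/bin/sh
# Return success only if the summary reports an overall SUCCESS status.
summary_ok() {
    grep -q "^Status: SUCCESS" "${1:-Logs/summary.log}"
}
```

For example, `summary_ok && echo "suite healthy"` after a run.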
- Structured Organization - Separating logs by module makes it easier to focus on specific parts of the system
- Timestamp Tracking - Every log includes the date and time of execution for tracking test runs
- Clear Module Identification - Module names are prominently displayed for quick identification
- Consistent Formatting - All logs follow the same format for easier parsing and analysis
- Test Counts by Module - The summary provides a breakdown of tests by module for better understanding of test coverage
- Success Percentage - Pass rate percentage calculation helps track overall test health
- Pass/Fail Indicators - Each module's status is clearly marked as PASS or FAIL
- Organized Sections - Summary is divided into clear sections (Counts, Details, Log Files)
- Overall Status - Top-level SUCCESS/FAILURE indicator provides immediate test health assessment
- Standardized Output Markers - Begin/End markers in logs make parsing and automated analysis easier
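As one example of what the standardized markers enable, a short awk helper can count the entries in a module log; count_tests is illustrative only:

```shell
#!/bin/sh
# Count test entries in a log by counting '## TEST:' header lines.
count_tests() {
    awk '/^## TEST: / { n++ } END { print n + 0 }' "$1"
}
```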
To view the logs:
# View the summary
cat Logs/summary.log
# View specific module logs
cat Logs/rho_diagnostic.log
cat Logs/pi_diagnostic.log
# etc.

For filtering log content, standard Unix tools work well:
# Find test failures
grep -n "FAIL" Logs/rho_diagnostic.log
# View only test results
grep -n '\[ RUN \]\|\[ OK \]\|\[ FAIL \]' Logs/rho_diagnostic.log
# Extract just the content between BEGIN and END markers
sed -n '/## Begin Output ##/,/## End Output ##/p' Logs/rho_diagnostic.log
# Compare module passing rates
grep -h "tests \[PASS\]" Logs/summary.log | sort
# Find all passing modules
grep "PASS" Logs/summary.log

The test runner automatically clears previous logs at the start of each test run, so archive any logs you want to keep before the next run:
# Run the current full suite
./run_all_tests.sh

If you want archived logs, copy the log files into Logs/archive/ after the run or use a dedicated wrapper script.
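Such a wrapper might look like the sketch below; archive_logs is a hypothetical helper, and the timestamped directory naming is just one convention:

```shell
#!/bin/sh
# Copy the current logs into a timestamped directory under Logs/archive/.
archive_logs() {
    dest="Logs/archive/$(date +%Y%m%d_%H%M%S)"
    mkdir -p "$dest"
    cp Logs/*.log "$dest"/
    echo "$dest"
}
```

Invoke it right after the suite finishes, e.g. `./run_all_tests.sh && archive_logs`.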
If you want to manually preserve specific logs, copy them to a different location before running tests again:
# Manually create an archive directory with custom name
mkdir -p Logs/archive/important_milestone
cp Logs/*.log Logs/archive/important_milestone/

The KAI test suite includes tools for analyzing historical test data. After you've archived some test runs, you can use the included analysis script:
# Analyze historical test data
./Scripts/analyze_test_history.sh

This script provides:
- A list of recent test runs with pass rates
- Overall test health metrics including average pass rates
- Trend analysis (improving, declining, or stable)
- Module-by-module health breakdown
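If you want to reproduce the trend classification by hand, one simple heuristic is to compare the newest archived pass rate against the oldest; this is only an illustration, not necessarily how analyze_test_history.sh computes it:

```shell
#!/bin/sh
# Classify pass percentages (one per line, oldest first) as
# improving, declining, or stable, using a 1-point tolerance band.
trend() {
    awk 'NR == 1 { first = $1 } { last = $1 }
         END {
             if (last > first + 1)      print "improving"
             else if (last < first - 1) print "declining"
             else                       print "stable"
         }' "$1"
}
```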
For more advanced analysis, the structured format of the logs enables various approaches:
You can extract summary data for trend analysis:
# Extract test counts and dates
grep -h "Total tests:" Logs/archive/*/summary.log | awk '{print $3}' > total_tests_history.txt
grep -h "^Generated:" Logs/archive/*/summary.log | cut -d' ' -f2- > test_dates.txt
# Extract pass percentages
grep -h "Passed tests:" Logs/archive/*/summary.log | sed 's/.*(\(.*\)%)/\1/' > pass_percentage_history.txt

If you have Gnuplot installed, you can create simple visualizations:
# Create a script for plotting pass percentage over time
cat > plot_history.gnuplot << EOF
set terminal png
set output "test_history.png"
set title "KAI Test Suite Pass Rate Over Time"
set xlabel "Test Run"
set ylabel "Pass Percentage (%)"
set yrange [0:100]
plot "pass_percentage_history.txt" with linespoints title "Pass Rate"
EOF
# Run gnuplot
gnuplot plot_history.gnuplot

This produces a simple graph showing test health trends over time.