# Markdown Output Format

## Introduction
Sailfish can generate comprehensive markdown files containing both individual test results and method comparison data using the `[WriteToMarkdown]` attribute. These files are GitHub-compatible and perfect for documentation, code reviews, and performance tracking.
## Basic Usage

Apply the `[WriteToMarkdown]` attribute to any test class:
```csharp
[WriteToMarkdown]
[Sailfish(SampleSize = 100)]
public class PerformanceTest
{
    [SailfishMethod]
    [SailfishComparison("Algorithms")]
    public void BubbleSort() { /* implementation */ }

    [SailfishMethod]
    [SailfishComparison("Algorithms")]
    public void QuickSort() { /* implementation */ }

    [SailfishMethod]
    public void RegularMethod() { /* implementation */ }
}
```

## Markdown Structure
The generated markdown files use a well-organized, multi-section format:
### Section 1: Session Metadata
```markdown
# Performance Test Results

**Session ID:** abc12345
**Timestamp:** 2025-08-03T10:30:00Z
**Total Classes:** 1
**Total Tests:** 3
```

Fields:
- Session ID: Unique identifier for the test session
- Timestamp: When the test session completed (UTC)
- Total Classes: Number of test classes with `[WriteToMarkdown]` in the session
- Total Tests: Total number of test methods executed
### Section 2: Individual Test Results

#### PerformanceTest
| Method | Mean (ms) | Median (ms) | StdDev (N=100) | CI95 MOE | CI99 MOE | Status |
|---|---|---|---|---|---|---|
| BubbleSort | 45.2000 | 44.1000 | 3.1000 | ±1.2345 | ±1.6789 | ✅ Success |
| QuickSort | 2.1000 | 2.0000 | 0.3000 | ±0.1234 | ±0.2345 | ✅ Success |
| RegularMethod | 1.0000 | 1.0000 | 0.1000 | ±0.0500 | ±0.0800 | ✅ Success |
Columns:
- Method: Name of the test method
- Mean (ms): Average execution time in milliseconds
- Median (ms): Median execution time in milliseconds
- StdDev (N=X): Standard deviation with sample size indicator
- CI95 MOE: Margin of error at 95% confidence (±ms); see the sketch after this list
- CI99 MOE: Margin of error at 99% confidence (±ms)
- Status: Test execution status with emoji indicator
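The MOE columns follow the standard margin-of-error formula. A minimal sketch of the computation (illustrative only, not Sailfish's internal code; for large N the 95% and 99% critical values are approximately 1.96 and 2.576, though Sailfish may use t-distribution values):

```csharp
using System;

// Illustrative: margin of error = critical value × (stdDev / √N).
static double MarginOfError(double stdDevMs, int n, double criticalValue) =>
    criticalValue * stdDevMs / Math.Sqrt(n);

// e.g. stdDev = 2.0 ms, N = 100:
//   CI95 MOE ≈ 1.96  * 2.0 / 10 = ±0.392 ms
//   CI99 MOE ≈ 2.576 * 2.0 / 10 = ±0.515 ms
```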
### Section 3: Method Comparison Matrices

#### Comparison Group: Algorithms
| Method 1 | Method 2 | Mean 1 (ms) | Mean 2 (ms) | Ratio | 95% CI | q-value (FDR) | Label |
|---|---|---|---|---|---|---|---|
| BubbleSort | QuickSort | 45.2000 | 2.1000 | 21.5 | [18.3, 24.9] | 0.000 | Slower |
Columns:
- Method 1 / Method 2: Methods being compared
- Mean 1 / Mean 2: Mean execution times for each method
- Ratio: `Mean1 / Mean2` (unitless). Values > 1 indicate Method 1 is slower; values < 1 indicate it is faster.
- 95% CI: Confidence interval for the ratio, computed on the log scale (see the sketch after this list). If the interval crosses 1.0, the label is "Similar".
- q-value (FDR): Benjamini–Hochberg adjusted p-value accounting for multiple comparisons within the group.
- Label: One of Improved, Similar, or Slower (consolidated outputs use "Slower" rather than "Regressed").
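To make the log-scale interval concrete, here is a minimal sketch of one common construction (a normal approximation on log-transformed timings; Sailfish's exact procedure may differ, and `RatioCi95` is a hypothetical name):

```csharp
using System;
using System.Linq;

// Sketch: ratio CI from log-transformed timings (Welch-style normal approximation).
static (double Ratio, double Lo, double Hi) RatioCi95(double[] ms1, double[] ms2)
{
    double[] l1 = ms1.Select(Math.Log).ToArray();
    double[] l2 = ms2.Select(Math.Log).ToArray();
    double m1 = l1.Average(), m2 = l2.Average();
    double v1 = l1.Sum(x => (x - m1) * (x - m1)) / (l1.Length - 1);
    double v2 = l2.Sum(x => (x - m2) * (x - m2)) / (l2.Length - 1);
    double se = Math.Sqrt(v1 / l1.Length + v2 / l2.Length);
    double d = m1 - m2; // log of the ratio Mean1 / Mean2
    return (Math.Exp(d), Math.Exp(d - 1.96 * se), Math.Exp(d + 1.96 * se));
}
// If [Lo, Hi] straddles 1.0, the pair would be labeled "Similar".
```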
## Session-Based Consolidation
Markdown files use session-based consolidation, meaning:
- Single file per session: All test classes with `[WriteToMarkdown]` contribute to one file
- Cross-class comparisons: Method comparisons work across different test classes (see the example below)
- Unique naming: Files use session IDs and timestamps to prevent conflicts
- Complete data: All test results from the entire session are included
Example filename: `TestSession_abc12345_Results_20250803_103000.md`
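For example, cross-class comparison means two test classes can contribute methods to the same comparison group. A sketch (class and method names here are hypothetical):

```csharp
[WriteToMarkdown]
[Sailfish]
public class SystemTextJsonTests
{
    [SailfishMethod]
    [SailfishComparison("Serializers")] // same group name as below
    public void Serialize() { /* implementation */ }
}

[WriteToMarkdown]
[Sailfish]
public class NewtonsoftJsonTests
{
    [SailfishMethod]
    [SailfishComparison("Serializers")] // compared against SystemTextJsonTests.Serialize
    public void Serialize() { /* implementation */ }
}
```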
## 🏥 Environment Health Section (when enabled)
- When the Environment Health Check is enabled, the consolidated session file includes a "🏥 Environment Health Check" section near the top showing the score and the top few entries.
- Learn more: /docs/1/environment-health
## 🧭 Reproducibility Summary (when available)

A short summary of environment details and a link to `Manifest_*.json` is included near the top of the consolidated file when Run Settings and the manifest provider are available. When seeded randomized run order is enabled, the summary includes the Randomization Seed to support reproducible reruns.
Learn more: /docs/1/reproducibility-manifest
## ⏱️ Timer Calibration (when enabled)

A short header summarizes the timer:

- Stopwatch Frequency (Hz) and Effective Resolution (ns)
- BaselineOverheadTicks (no-op call baseline)
- JitterScore (0–100) and RSD%

The section is included once per session. Disable it via `RunSettingsBuilder.WithTimerCalibration(false)`; a sketch follows below.
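A minimal sketch of disabling the calibration header, assuming the builder pattern shown under Best Practices below (the exact entry point may vary by Sailfish version):

```csharp
var runner = SailfishRunner.CreateBuilder()
    .WithRunSettings(settings => settings
        .WithTimerCalibration(false)) // omit the timer calibration section
    .Build();
```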
## GitHub Integration
The markdown format is designed for seamless GitHub integration:
### 1. Commit to Repository

```bash
git add TestSession_*.md
git commit -m "Add performance test results"
git push
```

### 2. View in Pull Requests
- Rendered tables: GitHub automatically renders markdown tables
- Emoji support: Status indicators display correctly
- Diff-friendly: Changes between test runs are easy to spot
- Searchable: Full-text search across all results
### 3. Link in Documentation

```markdown
See [latest performance results](./TestSession_abc12345_Results_20250803_103000.md)
```

## Advanced Features
### Multiple Comparison Groups
When you have multiple comparison groups, each generates its own comparison matrix:
#### Comparison Group: StringOperations
| Method 1 | Method 2 | Mean 1 (ms) | Mean 2 (ms) | Ratio | 95% CI | q-value (FDR) | Label |
|---|---|---|---|---|---|---|---|
| StringConcat | StringBuilder | 15.2000 | 8.1000 | 1.9 | [1.7, 2.2] | 0.000 | Slower |
| StringConcat | StringInterpolation | 15.2000 | 12.3000 | 1.2 | [1.1, 1.4] | 0.023 | Slower |
| StringBuilder | StringInterpolation | 8.1000 | 12.3000 | 0.66 | [0.60, 0.72] | 0.001 | Improved |
#### Comparison Group: Collections
| Method 1 | Method 2 | Mean 1 (ms) | Mean 2 (ms) | Ratio | 95% CI | q-value (FDR) | Label |
|---|---|---|---|---|---|---|---|
| ListIteration | ArrayIteration | 5.4000 | 3.2000 | 1.7 | [1.5, 1.9] | 0.000 | Slower |
### N×N Comparison Matrices
For groups with multiple methods, all pairwise comparisons are included:
- 2 methods: 1 comparison
- 3 methods: 3 comparisons (A vs B, A vs C, B vs C)
- 4 methods: 6 comparisons
- N methods: N×(N-1)/2 comparisons (see the sketch below)
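The count follows from enumerating every unique unordered pair once; a small illustration (`Pairs` is a hypothetical helper, not Sailfish API):

```csharp
using System.Collections.Generic;

// Every unique unordered pair once: N × (N - 1) / 2 comparisons total.
static IEnumerable<(string, string)> Pairs(IReadOnlyList<string> methods)
{
    for (int i = 0; i < methods.Count; i++)
        for (int j = i + 1; j < methods.Count; j++)
            yield return (methods[i], methods[j]);
}

// Pairs(new[] { "A", "B", "C" }) yields (A, B), (A, C), (B, C): 3 comparisons.
```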
### Multiple Comparisons Correction
Sailfish applies the Benjamini–Hochberg False Discovery Rate (FDR) procedure to the set of p-values within each comparison group. Consolidated outputs include the adjusted q-value alongside the 95% ratio confidence interval.
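For reference, a minimal sketch of the Benjamini–Hochberg step-up adjustment (illustrative; not Sailfish's implementation):

```csharp
using System;
using System.Linq;

// BH step-up: sort p-values ascending, scale each by m/rank, then enforce
// monotonicity from the largest rank down. Returns q-values in input order.
static double[] BenjaminiHochberg(double[] p)
{
    int m = p.Length;
    int[] byAscendingP = Enumerable.Range(0, m).OrderBy(i => p[i]).ToArray();
    var q = new double[m];
    double running = 1.0;
    for (int rank = m; rank >= 1; rank--)
    {
        int idx = byAscendingP[rank - 1];
        running = Math.Min(running, p[idx] * m / rank);
        q[idx] = running;
    }
    return q;
}
```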
### Adaptive Precision Formatting

Sailfish uses adaptive precision to ensure readability (one way to express these rules is sketched after this list):
- Large values (>1ms): 4 decimal places (e.g., 45.2000)
- Small values (<1ms): 6 decimal places (e.g., 0.123456)
- Tiny values (<0.001ms): 8 decimal places (e.g., 0.00012345)
- Zero values: Simple "0"
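A minimal sketch of how these rules could be expressed (illustrative; not Sailfish's actual formatter):

```csharp
using System.Globalization;

// Mirrors the rules above: more decimal places as values shrink.
static string FormatMs(double valueMs) => valueMs switch
{
    0.0     => "0",
    < 0.001 => valueMs.ToString("F8", CultureInfo.InvariantCulture),
    < 1.0   => valueMs.ToString("F6", CultureInfo.InvariantCulture),
    _       => valueMs.ToString("F4", CultureInfo.InvariantCulture)
};
```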
### Mixed Test Types
The markdown includes both comparison and regular methods:
#### MyTest
| Method | Mean (ms) | Median (ms) | StdDev (N=100) | CI95 MOE | CI99 MOE | Status |
|---|---|---|---|---|---|---|
| ComparisonMethod1 | 10.5000 | 9.8000 | 1.2000 | ±0.4567 | ±0.6789 | ✅ Success |
| ComparisonMethod2 | 8.3000 | 8.1000 | 0.9000 | ±0.3456 | ±0.5123 | ✅ Success |
| RegularMethod | 1.0000 | 1.0000 | 0.1000 | ±0.0500 | ±0.0800 | ✅ Success |
| AnotherRegularMethod | 1.1000 | 1.0000 | 0.1000 | ±0.0500 | ±0.0800 | ✅ Success |
## Best Practices

### 1. Organize Your Tests

Use meaningful test class and method names, since they appear in the markdown:
```csharp
[WriteToMarkdown]
public class DatabaseQueryPerformance // Clear class name
{
    [SailfishMethod]
    [SailfishComparison("QueryTypes")]
    public void SimpleSelect() { } // Descriptive method name

    [SailfishMethod]
    [SailfishComparison("QueryTypes")]
    public void ComplexJoin() { } // Descriptive method name
}
```

### 2. Use Descriptive Comparison Groups
Choose comparison group names that clearly indicate what's being compared:
[SailfishComparison("DatabaseQueries")] // Good[SailfishComparison("SerializationMethods")] // Good[SailfishComparison("Group1")] // Poor3. Configure Output Directory
Set a consistent output directory for organized results:
```csharp
var runner = SailfishRunner.CreateBuilder()
    .WithRunSettings(settings => settings
        .WithLocalOutputDirectory("./performance-results"))
    .Build();
```

### 4. Combine with CSV
Use both output formats for comprehensive reporting:
```csharp
[WriteToMarkdown] // Human-readable reports
[WriteToCsv]      // Data analysis
[Sailfish]
public class ComprehensiveTest { }
```

### 5. Version Control Integration
Add markdown files to version control for historical tracking:
```gitignore
# .gitignore - Include performance results
!TestSession_*.md
```

## Troubleshooting
### Empty Markdown Files
If markdown files are empty or missing:
- Check attribute placement: Ensure `[WriteToMarkdown]` is on the test class, not on methods
- Verify test execution: Markdown is only generated after successful test completion
- Check output directory: Verify the configured output directory exists and is writable (see the snippet below)
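For the last check, plain System.IO can guarantee the directory exists before a run (the path shown is the example used earlier on this page):

```csharp
using System.IO;

// Create the configured output directory up front; this is a no-op if it
// already exists, and throws if the process lacks permission to create it.
Directory.CreateDirectory("./performance-results");
```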
### Missing Comparisons
If method comparisons are missing from the markdown:
- Verify group names: Ensure methods use identical group names (case-sensitive)
- Check method count: Need at least 2 methods in a group for comparisons
- Confirm attributes: Both `[SailfishMethod]` and `[SailfishComparison]` are required
### GitHub Rendering Issues
If GitHub doesn't render the markdown correctly:
- Check file encoding: Ensure markdown is saved as UTF-8
- Verify table syntax: Ensure proper pipe (`|`) alignment
- Test locally: Preview the markdown in VS Code or another editor first
## Integration Examples

### CI/CD Pipeline
```yaml
- name: Run Performance Tests
  run: dotnet test --logger "console;verbosity=detailed"

- name: Upload Markdown Results
  uses: actions/upload-artifact@v3
  with:
    name: performance-results
    path: "**/TestSession_*.md"

- name: Comment on PR
  uses: actions/github-script@v6
  with:
    script: |
      const fs = require('fs');
      const markdown = fs.readFileSync('TestSession_latest.md', 'utf8');
      github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: markdown
      });
```

### Performance Tracking
```csharp
// Compare current results with baseline
var currentResults = File.ReadAllText("TestSession_current.md");
var baselineResults = File.ReadAllText("TestSession_baseline.md");

// Parse and analyze differences
if (HasPerformanceRegression(currentResults, baselineResults))
{
    SendAlert("Performance regression detected!");
}
```

### Documentation Generation
```csharp
// Automatically update documentation with latest results
var latestResults = Directory.GetFiles("./performance-results", "TestSession_*.md")
    .OrderByDescending(f => File.GetCreationTime(f))
    .First();

File.Copy(latestResults, "./docs/performance/latest-results.md", overwrite: true);
```