After a session completes, IronBee automatically runs analysis — an LLM-powered pass over session data. Analysis turns raw telemetry into actionable insights without requiring you to read through every tool call.

## Documentation Index
Fetch the complete documentation index at: https://docs.ironbee.ai/llms.txt
Use this file to discover all available pages before exploring further.
## What analysis produces
Analysis generates two types of output:

- Findings — specific observations about quality, efficiency, patterns, or behavior
- Recommendations — concrete directives the agent should follow to improve future sessions
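To make the distinction concrete, here is a minimal sketch of the two output shapes. The class and field names are illustrative assumptions, not IronBee's actual schema:

```python
from dataclasses import dataclass

# Hypothetical shapes for the two output types; field names are
# assumptions for illustration, not IronBee's real data model.
@dataclass
class Finding:
    area: str         # e.g. "verification", "cost"
    observation: str  # what the analyzer noticed

@dataclass
class Recommendation:
    directive: str    # concrete instruction the agent should follow

f = Finding(area="verification", observation="Retries cluster around flaky tests")
r = Recommendation(directive="Re-run flaky tests once before attempting a fix")
print(f.area)  # -> verification
```

The key difference: a finding describes what happened; a recommendation prescribes what to do next.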
## Analysis types
IronBee runs five analysis types, all LLM-driven:

| Type | Scope | Focus |
|---|---|---|
| Quality | Per project | Verification thoroughness, fix effectiveness, recurring failures, behavioral patterns |
| Cost | Per project | Token spend, cache hit rate, expensive sessions, cost-reduction opportunities |
| Account cost | Cross-project rollup | Account-wide spend trends, cost distribution by project and user |
| Session insights | Per project | Session patterns, tool usage, context pressure, interaction style, multi-session trends |
| Account session insights | Cross-project rollup | Account-wide behavioral patterns across all projects |
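The table's scope column splits cleanly into per-project and cross-project rollups. A small sketch of that split, using hypothetical identifiers (these are not IronBee API names):

```python
from enum import Enum

# Illustrative enumeration of the five analysis types; the names
# are assumptions, not identifiers from IronBee itself.
class AnalysisType(Enum):
    QUALITY = "quality"
    COST = "cost"
    ACCOUNT_COST = "account_cost"
    SESSION_INSIGHTS = "session_insights"
    ACCOUNT_SESSION_INSIGHTS = "account_session_insights"

# The two "account" types roll up across projects; the rest are per project.
CROSS_PROJECT = {AnalysisType.ACCOUNT_COST, AnalysisType.ACCOUNT_SESSION_INSIGHTS}

def scope(t: AnalysisType) -> str:
    return "account" if t in CROSS_PROJECT else "project"

print(scope(AnalysisType.QUALITY))       # project
print(scope(AnalysisType.ACCOUNT_COST))  # account
```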
## How analysis works
1. The session completes (verdict submitted or retry limit reached)
2. IronBee aggregates session metrics into a structured data packet
3. An LLM analyst processes the packet and produces findings and recommendations
4. Results are written and the Analysis tab in the console updates
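The steps above can be sketched as a small pipeline. Everything here is a stand-in: the function names, the packet fields, and the stubbed LLM call are assumptions for illustration, not IronBee internals:

```python
# Hypothetical sketch of the analysis pipeline described above.
def analyze_session(session):
    if not session["complete"]:           # step 1: only completed sessions qualify
        return None
    packet = {                            # step 2: aggregate metrics into a packet
        "tool_calls": len(session["tool_calls"]),
        "tokens": session["tokens"],
    }
    results = run_llm_analyst(packet)     # step 3: LLM analyst pass (stubbed)
    publish(results)                      # step 4: persist for the Analysis tab
    return results

def run_llm_analyst(packet):
    # Stand-in for the real LLM call.
    return {"findings": [], "recommendations": [], "packet": packet}

def publish(results):
    pass  # the real system writes results and refreshes the console

out = analyze_session({"complete": True, "tool_calls": [1, 2], "tokens": 1200})
```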
## The analysis view
The Analysis tab shows findings grouped by area and type. Each analysis run supersedes the previous one — you always see the freshest view. Quality analysis covers:

- Time distribution across coding, verification, and fix phases
- First-pass success rate and retry patterns
- Fix effectiveness (did fixes actually reduce issues?)
- Recurring files and blockers
- Session highlight — a standout session with notable characteristics
- At-a-glance summary (sessions, hours, cost, cache efficiency)
- Interaction patterns, tool diversity, multi-session trends
- Friction points and features to try
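Two of the metrics listed above, first-pass success rate and fix effectiveness, can be sketched as simple computations. The session fields here are assumptions, not IronBee's actual telemetry schema:

```python
# Illustrative inputs: per-session retry counts and issue counts
# before/after fixes (hypothetical field names).
sessions = [
    {"retries": 0, "issues_before_fix": 3, "issues_after_fix": 0},
    {"retries": 2, "issues_before_fix": 5, "issues_after_fix": 2},
]

# First-pass success rate: fraction of sessions that needed no retries.
first_pass_rate = sum(s["retries"] == 0 for s in sessions) / len(sessions)

def fix_effectiveness(s):
    # Fraction of issues actually resolved by fixes in this session.
    return 1 - s["issues_after_fix"] / s["issues_before_fix"]

print(f"{first_pass_rate:.0%}")                        # 50%
print([fix_effectiveness(s) for s in sessions])        # [1.0, 0.6]
```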
## Running local analysis from the CLI
You can also run a lightweight analysis locally at any time. Pass `--detailed` to include raw verdict text (checks, issues, fixes) for LLM-powered semantic analysis — this pairs with the /ironbee-analyze agent command.
Output is printed to stdout. The full cloud analysis is in the Console.
## Related pages
- Findings — specific observations identified by the analyzer.
- Recommendations — concrete directives to improve agent behavior and quality.