

After a session completes, IronBee automatically runs analysis — an LLM-powered pass over session data. Analysis turns raw telemetry into actionable insights without requiring you to read through every tool call.

What analysis produces

Analysis generates two types of output:
  • Findings — specific observations about quality, efficiency, patterns, or behavior
  • Recommendations — concrete directives the agent should follow to improve future sessions
Recommendations are automatically injected into the AI agent’s context on subsequent sessions, so the agent adapts its behavior based on what it learned from previous work.
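The injection step above can be pictured as prepending prior-session recommendations to the agent's system prompt. This is a hypothetical sketch; the function and prompt wording are illustrative, not IronBee's actual implementation.

```python
# Hypothetical sketch of recommendation injection. The helper name and the
# "Learnings from previous sessions" header are assumptions, not IronBee's API.
def inject_recommendations(system_prompt: str, recommendations: list[str]) -> str:
    """Append prior-session recommendations to an agent's system prompt."""
    if not recommendations:
        return system_prompt
    block = "\n".join(f"- {r}" for r in recommendations)
    return f"{system_prompt}\n\nLearnings from previous sessions:\n{block}"

prompt = inject_recommendations(
    "You are a coding agent.",
    ["Run the test suite before submitting a verdict."],
)
```

Because the recommendations ride along in context rather than in fine-tuned weights, they take effect on the very next session and can be superseded by the next analysis run.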

Analysis types

IronBee runs five analysis types, all LLM-driven:
| Type | Scope | Focus |
| --- | --- | --- |
| Quality | Per project | Verification thoroughness, fix effectiveness, recurring failures, behavioral patterns |
| Cost | Per project | Token spend, cache hit rate, expensive sessions, cost-reduction opportunities |
| Account cost | Cross-project rollup | Account-wide spend trends, cost distribution by project and user |
| Session insights | Per project | Session patterns, tool usage, context pressure, interaction style, multi-session trends |
| Account session insights | Cross-project rollup | Account-wide behavioral patterns across all projects |

How analysis works

  1. The session completes (verdict submitted or retry limit reached)
  2. IronBee aggregates session metrics into a structured data packet
  3. An LLM analyst processes the packet and produces findings and recommendations
  4. Results are written and the Analysis tab in the console updates
Analysis typically completes within a minute of session end.
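The four steps above can be sketched as a small pipeline: aggregate the session into a packet, then hand the packet to an analyst pass that emits findings and recommendations. All names, fields, and the rule-based stand-in for the LLM analyst are assumptions for illustration, not IronBee internals.

```python
# Minimal sketch of the analysis pipeline; field and function names are
# illustrative assumptions, and the "analyst" here is a rule-based stand-in
# for the real LLM pass.
from dataclasses import dataclass, field

@dataclass
class AnalysisResult:
    findings: list[str] = field(default_factory=list)
    recommendations: list[str] = field(default_factory=list)

def aggregate_metrics(session: dict) -> dict:
    """Step 2: collapse raw session telemetry into a structured data packet."""
    return {
        "tokens": sum(call["tokens"] for call in session["tool_calls"]),
        "retries": session["retries"],
        "verdict": session["verdict"],
    }

def run_analyst(packet: dict) -> AnalysisResult:
    """Step 3: turn the packet into findings and recommendations."""
    result = AnalysisResult()
    if packet["retries"] > 2:
        result.findings.append(f"High retry count: {packet['retries']}")
        result.recommendations.append("Verify fixes locally before resubmitting.")
    return result

session = {
    "tool_calls": [{"tokens": 1200}, {"tokens": 800}],
    "retries": 3,
    "verdict": "pass",
}
result = run_analyst(aggregate_metrics(session))
```

Step 4 would then persist `result` so the Analysis tab can render it.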

The analysis view

The Analysis tab shows findings grouped by area and type. Each analysis run supersedes the previous one, so you always see the freshest view.

Quality analysis covers:
  • Time distribution across coding, verification, and fix phases
  • First-pass success rate and retry patterns
  • Fix effectiveness (did fixes actually reduce issues?)
  • Recurring files and blockers
Session insights covers:
  • Session highlight — a standout session with notable characteristics
  • At-a-glance summary (sessions, hours, cost, cache efficiency)
  • Interaction patterns, tool diversity, multi-session trends
  • Friction points and features to try
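The grouping the Analysis tab performs can be illustrated with a few lines of Python. The finding records and their `area`/`type` keys are an assumed shape for illustration, not the console's actual schema.

```python
# Illustrative sketch of grouping findings by area for display.
# The record shape ("area", "type", "text") is an assumption.
from collections import defaultdict

findings = [
    {"area": "verification", "type": "quality", "text": "Low first-pass success rate"},
    {"area": "cost", "type": "cost", "text": "Cache hit rate below 50%"},
    {"area": "verification", "type": "quality", "text": "Fixes often reintroduce issues"},
]

grouped: dict[str, list[str]] = defaultdict(list)
for f in findings:
    grouped[f["area"]].append(f["text"])
```

Because each run supersedes the last, a view like this is rebuilt from scratch on every analysis run rather than appended to.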

Running local analysis from the CLI

You can also run a lightweight analysis locally at any time:
ironbee analyze [session-id]
Omit the session ID to analyze all sessions in the current project:
ironbee analyze
Add --detailed to include raw verdict text (checks, issues, fixes) for LLM-powered semantic analysis; this pairs with the /ironbee-analyze agent command. Output is printed to stdout. The full cloud analysis remains available in the Console.

Findings

Specific observations identified by the analyzer.

Recommendations

Concrete directives to improve agent behavior and quality.