Case Study: Public Earnings Analysis

A multi-agent mesh that reads transcripts like an analyst, debates like an investment committee, and exports structured features you can score, backtest, and reuse across names and quarters.

Goals & Setup

For this case study, we built a catalog of 145 liquid public tickers and ran meshes on their 2025 earnings events to answer two questions:

  1. Qualitative: Does a multi-agent mesh surface deeper or different qualitative insights than you'd get from the usual "web consensus" (news, broker notes, blogs, etc.)?
  2. Quantitative: Do mesh-derived scores carry real, testable signal about short-term returns—and do they beat a baseline model that scores the same brief without the mesh deliberation?

Off-Platform: Event Monitoring & Context Assembly

We wrote an orchestration layer that sits outside the Mesh Platform to watch for earnings events and assemble the context each mesh run needs.

This orchestration is not part of the core platform—it's case-study-specific scaffolding we built to automate the workflow. In a production setting, clients would build their own event triggers and context pipelines tailored to their data sources and research processes.
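As a rough sketch of what that scaffolding might look like, the snippet below polls an earnings calendar, gathers context, and submits a case version. Every name in it is a hypothetical stand-in for a client's own data sources and the platform's submission endpoint, not an actual Mesh Platform API:

```python
# Illustrative orchestration sketch, not the case-study code.
# All helpers are stubs standing in for real data sources.
from dataclasses import dataclass
from datetime import date

@dataclass
class EarningsEvent:
    ticker: str          # e.g. "AAPL"
    fiscal_period: str   # e.g. "FY2025 Q2"
    report_date: date

def fetch_earnings_calendar(catalog: list[str]) -> list[EarningsEvent]:
    """Stub: in practice, poll a calendar feed for upcoming prints."""
    return [EarningsEvent(t, "FY2025 Q2", date(2025, 5, 1)) for t in catalog]

def assemble_context(event: EarningsEvent) -> dict:
    """Stub: gather the transcript, filings, and web consensus for the event."""
    return {"ticker": event.ticker, "transcript": "...", "consensus": "..."}

def submit_case_version(event: EarningsEvent, context: dict) -> None:
    """Stub: hand the assembled context to the mesh platform for execution."""
    print(f"submitted {event.ticker} {event.fiscal_period}")

for event in fetch_earnings_calendar(["AAPL", "MSFT"]):
    submit_case_version(event, assemble_context(event))
```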

On-Platform: Mesh Execution & Output Generation

Once a case version is submitted, the platform executes the mesh and generates the outputs described in the next section.
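The platform's internals aren't spelled out in this write-up, but conceptually the mesh behaves like the investment committee described above: several analyst agents assess the event independently, and their views are reconciled into a single set of scores. A toy sketch, with every agent and score name invented for illustration:

```python
# Conceptual toy of mesh deliberation, NOT the platform's implementation:
# analyst agents score the event independently, then a committee step
# reconciles their views into one set of scores.
from statistics import mean

def run_mesh(context: dict, agents: list) -> dict[str, float]:
    views = [agent(context) for agent in agents]      # one score dict per agent
    return {k: mean(v[k] for v in views) for k in views[0]}  # toy reconciliation: average

# Usage with two stub agents that disagree on tone:
bull = lambda ctx: {"guidance_tone": 0.8, "demand_signal": 0.6}
bear = lambda ctx: {"guidance_tone": 0.2, "demand_signal": 0.4}
print(run_mesh({"ticker": "AAPL"}, [bull, bear]))
# {'guidance_tone': 0.5, 'demand_signal': 0.5}
```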

What You Get for Each Earnings Event

For every case version (e.g., AAPL FY2025 Q2), the platform produces four outputs.

Because all four outputs are generated from the same mesh, your human readers and your models are working off a single, coherent view.
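The exact output schema isn't reproduced here, but the key property, one record serving both audiences, can be pictured roughly like this (all field names are hypothetical):

```python
# Hypothetical shape of a per-event output record; field names are
# illustrative, not the platform's actual schema.
from dataclasses import dataclass

@dataclass
class CaseVersionOutput:
    ticker: str               # e.g. "AAPL"
    fiscal_period: str        # e.g. "FY2025 Q2"
    narrative: str            # prose analysis for human readers
    scores: dict[str, float]  # structured features for models
```

A human reads `narrative`, a model consumes `scores`, and both trace back to the same deliberation.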

Using Mesh Scores as Features in a 5-Day Excess Return Model

The mesh isn't just for reading—it produces structured scores that can be used as features in predictive models.

To validate that those scores actually contain signal, we built two models that predict 5-day excess return after the print: a mesh model that uses the mesh-derived scores as features, and a context-only baseline that scores the same brief without the mesh deliberation.

Model Setup
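To pin down the target: 5-day excess return here means the stock's return over the five trading days after the print, minus the benchmark's return over the same window. The sketch below uses scikit-learn and synthetic placeholder data to show the shape of the two-model comparison; none of the feature names, counts, or numbers are from the actual study:

```python
# Illustrative sketch of the two-model comparison with synthetic data;
# the real mesh scores and returns are not shown here.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_events = 400  # one row per earnings event

# Baseline features: scores produced from the brief alone (no mesh).
X_context = rng.normal(size=(n_events, 4))
# Mesh features: the same brief-level scores plus mesh-derived scores.
X_mesh = np.hstack([X_context, rng.normal(size=(n_events, 3))])

# Target: 5-day excess return, i.e. stock return minus benchmark return
# over the five trading days after the print (synthetic here).
y = 0.4 * X_mesh[:, 4] + rng.normal(scale=0.5, size=n_events)

for name, X in [("context-only", X_context), ("mesh", X_mesh)]:
    scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.3f}")
```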

Results: Mesh Model vs Context-Only Baseline

The mesh model significantly outperformed the context-only baseline.

What This Tells You