Recursive Multi-Agent Systems: agent communication in latent space

Stanford University

Research · official + media, 2 sources · ~1 min read

RecursiveMAS replaces text exchange between agents with communication via latent representations connected by a lightweight RecursiveLink module, and trains the whole system jointly using a dedicated optimization algorithm. Across 9 benchmarks (math, science, medicine, search, code) the authors report +8.3% average accuracy, a 1.2–2.4x speedup in end-to-end inference, and a 34.6–75.6% reduction in token consumption versus text-based multi-agent baselines.
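The core idea — swapping decoded text for latent vectors passed through a small adapter — can be sketched roughly as below. This is a minimal illustration under assumptions: the class name mirrors the paper's RecursiveLink, but the MLP shape, dimensions, and usage are hypothetical, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class RecursiveLink(nn.Module):
    """Hypothetical sketch: a lightweight adapter projecting one agent's
    hidden states into another agent's embedding space, so agents exchange
    latent vectors instead of decoding to text and re-tokenizing."""

    def __init__(self, src_dim: int, dst_dim: int):
        super().__init__()
        # A small two-layer MLP; the real module's architecture is not
        # specified here and may differ.
        self.proj = nn.Sequential(
            nn.Linear(src_dim, dst_dim),
            nn.GELU(),
            nn.Linear(dst_dim, dst_dim),
        )

    def forward(self, src_hidden: torch.Tensor) -> torch.Tensor:
        # src_hidden: (batch, seq, src_dim) last-layer states from agent A
        # returns:    (batch, seq, dst_dim) "soft tokens" fed to agent B
        return self.proj(src_hidden)

# Toy usage: agent A's hidden states become continuous inputs for agent B,
# skipping the decode/re-encode round trip that text communication requires.
link = RecursiveLink(src_dim=512, dst_dim=768)
h_a = torch.randn(1, 16, 512)
latent_msg = link(h_a)
print(latent_msg.shape)  # torch.Size([1, 16, 768])
```

Because the link is differentiable, gradients can flow from the receiving agent back through the adapter into the sender, which is what makes the joint training of the whole system possible.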

Why it matters

176 upvotes on HF Daily. The text interface between agents is a bottleneck in both latency and tokens; latent communication plus joint training is an attempt to move multi-agent systems out of the "several LLMs glued together with prompts" mode and into a unified system.

Importance: 3/5

Notable paper; 176 upvotes on HF Daily (above the 100-upvote threshold, which adds +1 to the base score of 2).

Sources