Building at the frontier of agentic AI
We build AI products and publish what we learn along the way. Research, tools, and open knowledge from the cutting edge.
What we're exploring
Our research focus
We build products and conduct research across several areas of agentic AI. Here's what we're working on.
Autonomous Agents
Agents that reason, plan, and execute multi-step tasks with minimal human oversight. We study how to make them reliable enough for production.
Multi-Agent Orchestration
Coordinating teams of specialized AI agents that collaborate, debate, and synthesize — the architecture behind Kapwa's Symphony Mode.
Long-Horizon Research
Deep research agents that work over days and weeks, building on their own findings to produce continuously evolving analysis.
Applied AI Engineering
Turning research into shipped products. Streaming architectures, semantic memory, tool use, and the engineering that makes AI systems work.
See agentic AI analysis in action
Describe any work or business scenario and get a complete build-vs-buy analysis — custom agent architectures, off-the-shelf product recommendations, and a practical comparison to help you decide.
Our Products
Kapwa
Our flagship AI product: an advisor platform where users select from hundreds of specialized personas (historical figures, domain experts, and fictional strategists) for multi-perspective conversations powered by ensemble AI orchestration.
What's next
Long-horizon deep research
We're building a research agent that doesn't stop after one answer. It produces a report, then continues working — running deeper analysis, finding new connections, and updating its findings daily. Research that compounds over time.
Continuous research reports
Imagine a research report that updates itself. The agent performs an initial deep dive, delivers findings, then keeps working in the background — running increasingly complex analyses that build on previous results. Each day, the report gets deeper and more nuanced.
Reading List
What we're reading
Weekly Gen AI headlines for builders, plus the papers that define the field.
Anthropic refuses Pentagon's demand for unfettered AI access, gets labeled a 'supply chain risk' — OpenAI swoops in hours later
Anthropic held firm on two red lines — no mass domestic surveillance, no autonomous lethal weapons — and was designated a national security supply chain risk by the Trump administration. OpenAI struck a Pentagon deal the same day, though its contract includes substantially similar safety provisions.
Agentic Reasoning for Large Language Models
Comprehensive survey organizing agentic reasoning into three layers: foundational (planning, tool-use, search), self-evolving (adaptation through feedback and memory), and collective (multi-agent coordination and role specialization). Bridges in-context reasoning with post-training approaches across science, robotics, healthcare, and mathematics applications. Accompanied by an actively maintained Awesome-Agentic-Reasoning GitHub repository.
From Fluent to Verifiable: Claim-Level Auditability for Deep Research Agents
Identifies the 'Mirage of Synthesis' problem in deep research agents, where strong surface-level fluency and citation alignment can obscure factual and reasoning defects in AI-generated reports. Proposes claim-level auditability as the evaluation standard, revealing that agents exhibit goal drift scores ranging from 0.25 to 0.93 when exposed to competing objectives. Essential reading for builders deploying research automation.
Learn
What we've learned
Notes, frameworks, and explanations from our research and product work. Written to be useful, not to sell.
What is AI (Without the Jargon)
A plain-English explanation of AI, machine learning, and large language models — no jargon, no hype.
Read more
What are AI Agents
From chatbots to autonomous agents — what makes an agent different and why it matters.
Read more
The Agentic AI Framework
A structured approach to identifying where AI agents create the most value in any operation.
Read more