Deep research as a commodity.

Faster, cheaper, traceable research — produced by a competitive swarm of miners on Bittensor SN67.

Deep costs dollars. Fast loses depth.

Best-in-class engines deliver quality but take minutes and cost dollars per query. Basic search APIs are fast but offer no synthesis, no citations, no coverage guarantees. Even providers treat these as fundamentally different tiers: OpenAI classifies web search as “fast and ideal for quick lookups” while describing deep research as a long-running workflow requiring background mode and webhooks (OpenAI Web Search Guide; OpenAI Deep Research API docs). The middle ground is empty, and agents need both.

GPT 5.2 Pro

$1.83

per query · 5–30 min

Existing deep research pricing/latency: OpenAI GPT 5.2 Pro ~$1.83/query, 5–30 min; Gemini DR ~$3.68; Perplexity DR ~$1.54. Native APIs charge per token; figures represent average cost per completed workflow (Parallel Web Systems Benchmarks, Dec 2025; Parallel DeepSearchQA blog, Dec 2025).

Gemini DR

$3.68

per query · variable

Harnyx — Target

10x lower

per query · faster

When agents act without human review, shallow research doesn’t just misinform — it causes damage.

Not search. Completed analytical work.

A frontier model retrieves and responds. Deep research cross-checks, follows threads, and synthesizes.

Synthesis

A unified judgment across sources. Not fragments of information, but a completed analysis you can act on.

Comprehensiveness

Not the first three links, but the kind of persistent, systematic investigation that surfaces what a surface-level search would miss.

Traceability

Every claim linked to its origin, not “according to sources.”

One API call. Structured research back.

Stop stitching together search calls, extraction scripts, and verification steps.

import harnyx

result = harnyx.research(
  query="Q4 cloud provider earnings",
  output_format="structured",
  citations=True
)
{
  "summary": "AWS guided Q4 at $28.8B (+19% YoY), citing AI workload migration. Azure: $26.3B (+22%), noting GPU margin pressure. GCP: $11.4B (+28%), fastest growth, driven by Gemini API enterprise adoption.",

  "sources": [
    { "title": "AWS Q4 2025 Earnings Transcript",
      "url": "https://..." },
    { "title": "Microsoft FY26 Q2 Release",
      "url": "https://..." },
    { "title": "Alphabet Q4 Supplemental Data",
      "url": "https://..." }
  ]
}
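Because the response is plain JSON in the shape shown above, consuming it needs nothing beyond the standard library. A minimal sketch (the payload here is abbreviated and the URLs are placeholders; `harnyx` itself is the API being introduced, so only the parsing side is shown):

```python
import json

# Abbreviated payload in the documented shape (summary + sources).
raw = """
{
  "summary": "AWS guided Q4 at $28.8B (+19% YoY), citing AI workload migration.",
  "sources": [
    {"title": "AWS Q4 2025 Earnings Transcript", "url": "https://example.com/aws"},
    {"title": "Microsoft FY26 Q2 Release", "url": "https://example.com/msft"}
  ]
}
"""

result = json.loads(raw)

# Keep the citation list alongside the summary so every claim stays traceable.
citations = [
    f"[{i + 1}] {s['title']}: {s['url']}"
    for i, s in enumerate(result["sources"])
]

context_block = result["summary"] + "\n\n" + "\n".join(citations)
print(context_block)
```

The numbered citation list is the piece agent builders usually forward into downstream prompts, so sources survive synthesis steps intact.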

Works with any agent stack

  • OpenClaw
  • LangChain
  • CrewAI
  • n8n
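The integration surface is deliberately small: a callable that takes a query string and returns text, which is the shape most agent frameworks accept as a tool. A framework-agnostic sketch (the `ResearchTool` wrapper and the stub function are illustrative; a real deployment would call the research API instead of the stand-in):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ResearchTool:
    """Plain-callable tool wrapper: LangChain, CrewAI, n8n, and similar
    stacks can all register a named callable with a description."""
    name: str
    description: str
    run: Callable[[str], str]


def _stub_research(query: str) -> str:
    # Stand-in for a real deep-research call that would return
    # synthesized, cited text for the query.
    return f"[research stub] findings for: {query}"


deep_research = ResearchTool(
    name="deep_research",
    description=(
        "Multi-source research with citations. "
        "Use for questions needing synthesis, not quick lookups."
    ),
    run=_stub_research,
)

print(deep_research.run("Q4 cloud provider earnings"))
```

The description field matters in practice: it is what routing-capable agents read when deciding between a quick web lookup and a full research call.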

The best pipeline is a moving target.

New models, retrieval methods, and cost structures emerge constantly. How fast a provider adapts is the defining competitive axis.

Read our vision statement

Big Model Labs

OpenAI DR, Gemini DR

Locked to their own model, even when others excel at specific stages. Harness optimization competes for bandwidth with model training, product features, and safety.

Centralized Harness

Perplexity Sonar

Decouples the harness from any one model — can swap faster. But every optimization still flows through one team’s bandwidth and release cycle.

Open-source Harness

Perplexica, etc.

Open development, community contributions. But no economic incentive for continuous optimization — adaptation is voluntary and sporadic.

Bittensor Subnet

Harnyx — SN67

Decoupled harness + open development + persistent economic incentives. Miners adapt in real time because the market rewards whoever finds a better approach first.

The standard is stable. The execution is adaptive.

Parity first. Then lead.

Harnyx starts by matching the best deep research available — then the competitive pressure of the network takes it further.

Current
Phase 1

Reach Parity

Research output at industry benchmark quality on DRACO and DeepSearchQA standards, delivered at meaningfully lower cost. External API for agent builders. Drop-in integrations for major agent frameworks.

DRACO (Deep Research Accuracy, Completeness, and Objectivity): open benchmark for deep research agents; 10 domains, 100 tasks, expert-grounded rubrics. Released by Perplexity (arXiv 2602.11685, Feb 2026; HuggingFace dataset).

DeepSearchQA: 900-prompt benchmark for multi-step information-seeking tasks; 17 domains, causal-chain structure. Released by Google DeepMind (DeepSearchQA paper, Dec 2025; Kaggle leaderboard).

Phase 2

Lead and Define

Miners compete head-to-head instead of against fixed reference answers. When the goal shifts from matching a known target to outperforming each other, miners surface sources and reasoning paths others missed — novel synthesis emerges from competitive pressure, not from a predefined ceiling.

Phase 3

Enterprise Readiness

Confidential compute via TEE. Zero data retention. SLA guarantees. Entry into finance, legal, and consulting — verticals where research is high-value infrastructure.

Deep enough to trust. Cheap enough to scale.

Run the network

A unified API routes queries across a competitive swarm of miners, ranked by outcome-focused validators. Three roles drive the system.

Miners

Build research pipelines

Compete to produce the highest-quality deep research. Build your own pipeline — choose models, retrieval strategies, synthesis methods. The best results earn the most emissions.

Miner Guide

Validators

Secure the network

Grade miner submissions against reference answers and distribute rewards based on quality.
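Mechanically, "grade and distribute" reduces to mapping per-miner quality scores onto normalized reward weights. A hedged sketch of one such mapping (the softmax form and the temperature value are illustrative assumptions, not the actual SN67 incentive code):

```python
import math


def reward_weights(scores: dict[str, float],
                   temperature: float = 0.1) -> dict[str, float]:
    """Map miner quality scores (0..1 vs. reference answers) to
    emission weights that sum to 1. A low temperature sharpens the
    winner-take-most dynamic that pushes miners to keep improving."""
    exps = {m: math.exp(s / temperature) for m, s in scores.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}


weights = reward_weights({"miner_a": 0.92, "miner_b": 0.85, "miner_c": 0.40})
print(weights)
```

The key property is that small quality gaps translate into large reward gaps, which is what makes continuous pipeline optimization economically rational for miners.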

Validator Guide

Get early access

We’ll email you once when API access is ready.

No spam. One email when your slot opens.

Stay in the loop.
