Deep research as a commodity.

Faster, cheaper, traceable research — produced by a competitive swarm of miners on Bittensor SN67.

Deep costs dollars. Fast loses depth.

Best-in-class engines deliver quality but take minutes and cost dollars per query. Basic search APIs are fast but offer no synthesis, no citations, no coverage guarantees. Even providers treat these as fundamentally different tiers: OpenAI classifies web search as “fast and ideal for quick lookups” while describing deep research as a long-running workflow requiring background mode and webhooks (OpenAI Web Search Guide; OpenAI Deep Research API docs). Agents need both. Right now, neither option delivers.

GPT 5.2 Pro

$1.83

per query · 5–30 min
Pricing and latency for existing deep research: OpenAI GPT 5.2 Pro ~$1.83/query, 5–30 min; Gemini DR ~$3.68; Perplexity DR ~$1.54. Native APIs charge per token; figures represent average cost per completed workflow (Parallel Web Systems Benchmarks, Dec 2025; Parallel DeepSearchQA blog, Dec 2025).

Gemini DR

$3.68

per query · variable

Harnyx — Target

10x lower

per query · faster

Not fragments. Completed analytical work.

Builders wire in search APIs because the alternative costs dollars per query. The agent gets fragments and acts anyway; it has no judgment to fill the gap. 46% of organizations cite integration with existing systems, and 42% cite data access and quality, as the biggest barriers to scaling AI agents (The 2026 State of AI Agents, Orbislabs.ai, Jan 2026).

Comprehensiveness

Coverage guarantees that surface what a shallow search would miss.

Synthesis

Not fragments from multiple sources, but a single analytical judgment you can act on.

Traceability

Every claim linked to its origin, not “according to sources.”

Agents don’t pause for a second opinion.
The quality of the research is the quality of the outcome.

One API call. Structured research back.

Stop stitching together search calls, extraction scripts, and verification steps.

$ curl https://api.harnyx.ai/v1/research \
  -d '{
    "input": "Compare Salesforce vs HubSpot for mid-market CRM in 2025.",
    "format": { "winner": "string", "price_per_seat_usd": "number", "key_reason": "string" }
  }'
Initializing research pipeline...
Retrieving from 42 sources...
Cross-validating claims...
Synthesizing structured output...
{
  "report": {
    "winner": "HubSpot",
    "price_per_seat_usd": 45,
    "key_reason": "HubSpot's bundled AI tier undercuts Salesforce Einstein by $50/seat [1], with faster onboarding cited by 60% of churned accounts [2]..."
  },
  "citations": [
    { "index": 1, "title": "HubSpot Q2 Earnings Call", "url": "..." },
    { "index": 2, "title": "Forrester Wave: Sales Force Automation", "url": "..." }
  ]
}
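Inside an agent, the structured response above is just JSON to act on. A minimal sketch of consuming it, assuming the response shape shown in the example (the helper name and abbreviated payload are illustrative, not part of the API):

```python
import json
import re

# Example response in the shape shown above (abbreviated).
raw = '''
{
  "report": {
    "winner": "HubSpot",
    "price_per_seat_usd": 45,
    "key_reason": "Bundled AI tier undercuts Einstein by $50/seat [1]."
  },
  "citations": [
    {"index": 1, "title": "HubSpot Q2 Earnings Call", "url": "https://example.com/1"}
  ]
}
'''

def resolve_citations(report_text: str, citations: list[dict]) -> dict:
    """Map [n] markers in the report text to their source entries."""
    by_index = {c["index"]: c for c in citations}
    used = [int(m) for m in re.findall(r"\[(\d+)\]", report_text)]
    return {i: by_index[i] for i in used if i in by_index}

resp = json.loads(raw)
sources = resolve_citations(resp["report"]["key_reason"], resp["citations"])
for i, src in sources.items():
    print(f"[{i}] {src['title']} -> {src['url']}")
```

Because every claim carries an index into `citations`, an agent can verify or surface provenance without re-parsing free text.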

Works with any agent stack

  • OpenClaw
  • LangChain
  • CrewAI
  • n8n

The best pipeline is a moving target.

New models, retrieval methods, and cost structures emerge constantly. How fast a provider adapts is the defining competitive axis.

Read our vision statement

Big Model Labs

OpenAI DR, Gemini DR

Locked to their own model, even when others excel at specific stages. Harness optimization competes for bandwidth with model training, product features, and safety.

Centralized Harness

Perplexity Sonar

Decouples the harness from any one model — can swap faster. But every optimization still flows through one team’s bandwidth and release cycle.

Open-source Harness

Perplexica

Open development, community contributions. But no economic incentive for continuous optimization — adaptation is voluntary and sporadic.

Bittensor Subnet

Harnyx — SN67

Decoupled harness + open development + persistent economic incentives. Miners adapt in real time because the market rewards whoever finds a better approach first.

The standard is stable.
The execution is adaptive.

Parity first. Then lead.

Harnyx starts by matching the best deep research available — then the competitive pressure of the network takes it further.

Phase 1

Reach Parity

Current

Research output at industry benchmark quality (DRACO and DeepSearchQA standards), delivered at meaningfully lower cost. External API for agent builders. Drop-in integrations for major agent frameworks.
DRACO (Deep Research Accuracy, Completeness, and Objectivity): open benchmark for deep research agents; 10 domains, 100 tasks, expert-grounded rubrics. Released by Perplexity (arXiv 2602.11685, Feb 2026; HuggingFace dataset).
DeepSearchQA: 900-prompt benchmark for multi-step information-seeking tasks; 17 domains, causal-chain structure. Released by Google DeepMind (DeepSearchQA paper, Dec 2025; Kaggle leaderboard).

Phase 2

Lead and Define

Miners compete head-to-head instead of against fixed reference answers. When the goal shifts from matching a known target to outperforming each other, miners surface sources and reasoning paths others missed — novel synthesis emerges from competitive pressure, not from a predefined ceiling.

Phase 3

Enterprise Readiness

Confidential compute via TEE. Zero data retention. SLA guarantees. The infrastructure that lets agent builders serve finance, legal, and consulting — verticals where research is high-value and the cost of a wrong answer is measured in dollars.

Deep enough to trust.
Cheap enough to scale.

Get early access

We’ll email you once when API access is ready.

No spam. One email when your slot opens.

Stay in the loop.

Backed by

Run the network

A unified API routes queries across a competitive swarm of miners, ranked by outcome-focused validators.

Miners

Build research pipelines

Compete to produce the highest-quality deep research. Build your own pipeline — choose models, retrieval strategies, synthesis methods. The best results earn the most emissions.
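The stages animated in the API example above (retrieve, cross-validate, synthesize) suggest the rough shape of such a pipeline. A minimal illustrative skeleton, with stubbed retrieval; the interfaces and thresholds here are assumptions for the sketch, not the SN67 miner API:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source_url: str

def retrieve(query: str) -> list[Claim]:
    """Stage 1: gather candidate claims from many sources (stubbed here)."""
    return [Claim("HubSpot lists mid-market seats at $45/mo", "https://example.com/a"),
            Claim("HubSpot lists mid-market seats at $45/mo", "https://example.com/b")]

def cross_validate(claims: list[Claim], min_sources: int = 2) -> list[Claim]:
    """Stage 2: keep only claims corroborated by multiple independent sources."""
    groups: dict[str, list[Claim]] = {}
    for c in claims:
        groups.setdefault(c.text, []).append(c)
    return [g[0] for g in groups.values()
            if len({c.source_url for c in g}) >= min_sources]

def synthesize(claims: list[Claim]) -> dict:
    """Stage 3: fold validated claims into a structured, cited report."""
    return {"report": " ".join(f"{c.text} [{i + 1}]" for i, c in enumerate(claims)),
            "citations": [{"index": i + 1, "url": c.source_url}
                          for i, c in enumerate(claims)]}

result = synthesize(cross_validate(retrieve("mid-market CRM pricing")))
```

Miners are free to swap any stage: different models, retrieval strategies, or validation heuristics, so long as the output meets the standard.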

Miner Guide

Validators

Secure the network

Grade miner submissions against reference answers and distribute rewards based on quality.
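One simple way to turn graded quality scores into reward weights is a temperature-scaled softmax, which concentrates emissions on the best results. This is only a sketch of the idea, not the actual SN67 incentive mechanism; the function, miner names, and temperature are assumptions:

```python
import math

def reward_weights(scores: dict[str, float], temperature: float = 0.1) -> dict[str, float]:
    """Turn per-miner quality scores into normalized reward weights.

    A low temperature sharpens the distribution, so small quality
    gaps translate into large reward gaps: the best results earn
    the most.
    """
    exps = {m: math.exp(s / temperature) for m, s in scores.items()}
    total = sum(exps.values())
    return {m: e / total for m, e in exps.items()}

weights = reward_weights({"miner_a": 0.92, "miner_b": 0.88, "miner_c": 0.60})
```

Lowering `temperature` makes the payout more winner-take-most; raising it spreads rewards more evenly across the swarm.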

Validator Guide
