
How To Set Up A Hyperspace Pod

📖 This guide was prepared by the ToolPazar team. All our tools are free and ad-free.

What a Pod actually is

Hyperspace Pods let a small group of people — a family, a startup, a few friends — pool the laptops and desktops they already own into one shared AI cluster. Everyone installs the CLI, someone creates a pod, shares an invite link, and the machines form a peer-to-peer mesh. Models that need more memory than any single laptop has — Qwen 3.5 32B, GLM-5 Turbo — get sharded across the group automatically.

How automatic sharding works

The part most people don't believe at first is that there really is no middleman. A prompt leaves your machine, hops between your pod members' machines, and the response comes back the same way. The coordinator picks the routing, but the data plane is direct.
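The split itself can be pictured as simple proportional arithmetic. The machine names, VRAM sizes, and layer count below are invented for illustration; the real scheduler is more sophisticated:

```shell
# Sketch: assigning a 64-layer model's layers to three pod members in
# proportion to their VRAM (all numbers invented for illustration).
total_layers=64
total_vram=$((24 + 16 + 8))   # desktop + laptop A + laptop B, in GB

for vram in 24 16 8; do
  echo "member with ${vram} GB VRAM -> $(( total_layers * vram / total_vram )) layers"
done
# Integer division leaves a small remainder of layers; a real
# scheduler would rebalance those leftovers across members.
```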

Two things are worth noticing. First, machines with more VRAM carry more layers, so a beefy desktop and a thin laptop coexist gracefully — the desktop just pulls more weight. Second, because inference is pipelined, each machine is doing work on a different token at the same time; throughput is set by the slowest machine in the ring, not the fastest.

Setting up a pod (4 commands)
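Condensed, the whole flow is four commands. Every command name and URL below is a placeholder of ours, not taken from this guide; check the official documentation for the real invocations. Only the shape of the flow comes from the steps that follow:

```shell
# Hypothetical command names and a placeholder install URL.
curl -fsSL https://example.com/install.sh | sh   # 1. install the CLI
hyperspace pod create my-pod                     # 2. create the pod
hyperspace pod invite                            # 3. print a short-lived invite link
hyperspace serve                                 # 4. expose an endpoint for your tools
```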

Step 1 — Install the CLI

The same install works on macOS, Linux, and Windows under WSL2. Bring a machine with at least 16 GB of unified memory or 8 GB of dedicated VRAM to the table; smaller machines can still join, they'll just get fewer layers.
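Before installing, it's worth confirming the machine clears the suggested memory bar. A quick check on Linux (macOS readers can use `sysctl -n hw.memsize` instead):

```shell
# Read total RAM from /proc/meminfo and report it in whole GB (Linux).
mem_gb=$(awk '/^MemTotal:/ { printf "%d", $2 / 1048576 }' /proc/meminfo)
echo "this machine has ${mem_gb} GB of RAM"
```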

Step 2 — Create the pod

Step 3 — Invite the others

The invite link is short-lived and bound to the pod's identity, not to a public URL. Members behind home routers don't need port forwarding; the network handles NAT traversal automatically.
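On the joining side, a member typically just pastes the link. The command name, link scheme, and status subcommand here are all invented for illustration:

```shell
# Hypothetical: joining from a second machine with the shared link.
hyperspace pod join "hsp://invite/<token>"   # short-lived, identity-bound link
hyperspace pod status                        # list members and their shards
```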

Step 4 — Point your tools at it

Why this changes the cost math

A team of five paying for cloud AI typically burns $500–$2,000 a month on API calls. The same team's existing machines can serve Qwen 3.5 (competitive on SWE-bench) and GLM-5 Turbo (#1 on BrowseComp for tool-calling and web research) at the marginal cost of electricity. For day-to-day work — code review, refactors, research, drafting — local models handle it and nobody gets billed.

A practical three-model setup

Most pods we see in the wild settle on the same three models, each doing the job it's best at:

All three load simultaneously on a five-machine pod with mixed hardware. The coordinator routes each request to the model the caller asked for; no juggling.
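Many local-inference stacks expose an OpenAI-compatible HTTP endpoint, and tools are pointed at it by setting a base URL; whether Hyperspace works exactly this way, plus the address, port, and model identifier below, are our assumptions for illustration:

```shell
# Build a chat request payload; the "model" field picks which of the
# pod's loaded models answers (endpoint and model name are assumptions).
payload='{"model": "qwen-3.5-32b", "messages": [{"role": "user", "content": "Review this diff"}]}'
echo "$payload"

# Sending it would then look something like (not run here):
#   curl -s http://localhost:8000/v1/chat/completions \
#     -H "Content-Type: application/json" -d "$payload"
```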

What makes this different

Treasury and the compute marketplace

The treasury is a shared balance that funds the rare cloud-fallback query when no local model is good enough. Any member can top it up; every spend is replicated to every member's ledger, so there are no surprise bills. When the pod is idle — overnight, weekends, while everyone's at lunch — you can rent its compute out on the Hyperspace marketplace and credit the treasury, with fine-grained permissions controlling who can use what.
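A day-to-day treasury interaction might look like the following. Every command name and amount is invented; the guide only establishes that top-ups, a replicated spend ledger, and idle-time rental exist:

```shell
# All hypothetical command names.
hyperspace treasury top-up 20            # add $20 of cloud-fallback credit
hyperspace treasury ledger               # every member sees the same spend log
hyperspace marketplace rent --idle-only  # earn credit while the pod sits idle
```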

Common mistakes

When a Pod isn't the right answer