High-performance GPUs on your terms.

Run pay-per-use AI, ML, and batch workloads on distributed GPU compute through Ocean Network. Pick your GPU resources, then launch directly from your editor with Ocean Orchestrator.

Pay-per-use escrow payments
Pick resources before you run, with no forced bundles
Results saved locally
Available GPUs
H200 440 GB · East Asia $2.16/hr
$ ocean gpu select H200 --region east-asia
✓ Environment selected · escrow funded
Ready to run · job starts on dispatch
H200 · East Asia · $2.16/hr
Code-to-Node
Pay-per-use
Escrow Protected
Run from your editor
No idle costs
GPU compute
H200 SXM5
Results saved locally
One click launch
Ocean Network
Pricing

GPU pricing and availability

Select the resources that fit your run

Node Env   GPU         RAM      CPU        Disk Space   Price (starting from)
Env 1      H200 SXM5   440 GB   40 vCPUs   1000 GB      $2.16/hr
Env 2      H200 SXM5   440 GB   40 vCPUs   1000 GB      $2.16/hr
Env 3      H200 SXM5   440 GB   40 vCPUs   1000 GB      $2.16/hr
Env 4      H200 SXM5   440 GB   40 vCPUs   1000 GB      $2.16/hr

Run a Job

Dashboard → Editor → Outputs

Dashboard

Choose environment and resources

Editor

Open Ocean Orchestrator, start the compute job, and view logs

Outputs

Results and logs saved to your folder

Why This Matters

Stop losing time before the run

GPU compute workflows often slow you down before the job even starts: comparing options, setting up cloud infrastructure, switching between dashboards, and paying for idle infrastructure all kill momentum.

You want a direct path: choose your GPUs, run a job, pay only for execution time.

No manual infrastructure setup. No cloud console hopping.
Traditional cloud workflow | With Ocean Orchestrator + Ocean Network
Manual setup before any job runs | Start in the dashboard, choose resources, then launch from your editor
You pay for idle infrastructure even when nothing is running | Pay-per-use escrow that charges you only upon successful completion of the job
No cost visibility until the bill arrives at the end of the month | See resource options and estimated cost before launch
Switching GPU types requires re-provisioning from scratch | Choose the resources you need before each run without repeating the whole flow
Results require manual retrieval from remote storage | Outputs and logs are saved automatically to your local results folder
How It Works

How it works

01

Pick environment and resources in the dashboard

No forced bundles: choose exactly what your workload needs.

02

Fund your escrow wallet

Funds are held securely and released only after your job completes.

03

Choose your editor and open Ocean Orchestrator

Works with VS Code, Cursor, Windsurf, and Antigravity.

04

Start compute job, monitor logs

Launch with one click and track progress live in your editor.

05

Outputs land back in your local folder

Results and logs are saved automatically — no manual retrieval.

No server setup. No cloud console hopping.
Ocean Orchestrator — Terminal
$ ocean gpu list --type h200
✓ 4 environments available
Env-1 · H200 · Amsterdam · $2.16/hr
Env-2 · H200 · Frankfurt · $2.16/hr
$ ocean job run --env Env-1 --budget 10.00
Escrow funded: $10.00
✓ Job dispatched — job_a8f2c1
[██████░░░░] 62% · 6m 44s
✓ Complete · $4.32 charged · results saved
$
Payments

Pay-per-use, escrow-protected payments

Run a Job

Funds locked in escrow

Budget is set before the job starts. Funds held securely in the escrow contract.

Job executes

Your containerized workload runs on the selected node. Progress tracked live.

Job marked as completed

Completion is verified before any payment is processed.

Payment released

Exact runtime cost released. Any remaining balance returns to your wallet.
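The escrow lifecycle above (lock budget, run, verify completion, release exact cost, refund the rest) can be sketched in a few lines. This is an illustrative model only, not the actual escrow contract:

```python
class Escrow:
    """Illustrative sketch of the pay-per-use escrow flow."""

    def __init__(self, budget: float):
        self.locked = budget   # budget set and locked before the job starts
        self.charged = 0.0

    def settle(self, runtime_hours: float, rate_per_hr: float) -> float:
        """On verified completion: charge exact runtime cost, refund the rest."""
        cost = min(round(runtime_hours * rate_per_hr, 2), self.locked)
        self.charged = cost
        refund = round(self.locked - cost, 2)
        self.locked = 0.0      # nothing stays locked after settlement
        return refund

# Mirrors the terminal snippet: $10.00 escrowed, $4.32 charged for a 2 h run
e = Escrow(10.00)
print(e.settle(2.0, 2.16))  # 5.68 returned to your wallet
```

If the job is never marked successful, `settle` is never called and the locked funds are not released to the node.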

Use Cases

Built for real GPU workflows

01

Batch runs that finish

Run inference over large input sets and pull back clean outputs and logs.

02

Embeddings and data prep

Turn raw data into artifacts your app can ship with.

03

Training and eval loops

Rerun the same job with different resources and iterate faster.

FAQ

GPU compute, simplified.

Curious about how to run compute jobs? Get your answers and start building your projects with pay-per-use high-performance GPU compute.

How do I run my first job?

The quickest way to run your first compute job is through the Ocean Network dashboard: select the resources you need, then push them to Ocean Orchestrator (you’ll be prompted to install it if you haven’t). Ocean Orchestrator works in VS Code, Cursor, Antigravity, and Windsurf. If you prefer the CLI, you can use ocean-cli to submit the job and pull results when processing is complete.

What types of workloads can I run on an Ocean Node?

You can run containerized compute jobs such as embeddings, model inference, data cleanup, batch processing, and model fine-tuning workloads that finish within the job window and produce outputs you can download.

What happens if a node fails during my job?

Ocean Nodes are designed to manage failures locally, keeping compute job execution predictable and controlled. If a node goes down mid-run, the job can be restarted on the same node once it becomes available again. Funds are released from escrow only when the node explicitly marks a job as successful. Rerouting is handled by the user, in line with the Ocean Network ethos of giving users full control over which resources are used.

This differs when the failure is caused by the algorithm itself: in that case the job is treated as unsuccessful because the execution failed, not because the node was unavailable. See the dedicated FAQ on algorithm failures.
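The user-driven restart described above might look like this in a client script. All names here (`submit`, `is_node_up`) are hypothetical placeholders, not the real Ocean API:

```python
import time

def run_job_with_restart(submit, is_node_up, max_retries: int = 3, poll_s: float = 5.0):
    """Sketch of user-driven restart: resubmit to the same node once it is back.

    `submit` runs the job and returns True on success; it raises
    ConnectionError if the node goes down mid-run.
    """
    for _ in range(max_retries):
        try:
            return submit()        # escrow releases only on explicit success
        except ConnectionError:
            # Node unavailability, not an algorithm error: algorithm failures
            # mark the job unsuccessful and are not retried here.
            while not is_node_up():
                time.sleep(poll_s)  # wait for the same node to come back
    return False
```

The user stays in control: nothing is rerouted automatically, and the loop simply waits for the chosen node before resubmitting.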

Do I need to set up servers as a compute user?

No. Ocean offers serverless GPU compute with no server setup: you choose a preferred Ocean Node with the resources you need, submit a containerized job, and get results back without managing servers or infrastructure.

Is there free compute available?

Yes, there are three types of free compute: test CPU compute environments offered by Ocean Network; Complimentary Credits, which give you access to GPU compute with $100 USD worth of resources; and test compute environments offered by node operators to showcase their resources for users to experiment with.

How does job scheduling work on Ocean Network?

Ocean Network does not auto-assign your job to random machines. You pick the node or compute environment you want based on resource price limits and current availability, then submit the job to that specific resource. If the node is busy, your job waits until it becomes available again. The dashboard shows availability and maximum duration per environment, so you can choose predictable compute up front.
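Since you pick the node yourself, selection reduces to a simple filter over the listed environments. A sketch, where the field names are assumptions mirroring what the dashboard displays:

```python
# Hypothetical environment listing, shaped like the dashboard's availability view.
nodes = [
    {"env": "Env-1", "gpu": "H200", "price_hr": 2.16, "available": True},
    {"env": "Env-2", "gpu": "H200", "price_hr": 2.16, "available": False},
]

def pick_node(nodes, max_price_hr: float):
    """You choose the node: filter by your price limit and current availability."""
    candidates = [n for n in nodes if n["available"] and n["price_hr"] <= max_price_hr]
    return candidates[0]["env"] if candidates else None

print(pick_node(nodes, 2.50))  # Env-1
```

If no environment fits your price limit or is currently free, the choice of whether to wait or adjust the budget stays with you, not with a scheduler.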

Run your next GPU job

Still unsure about your algorithm? Start with free CPU compute to validate your workflow, then move to pay-per-use GPU jobs.

Run a Job