FAQ

Questions about running compute jobs with Ocean Nodes? You’ll find the answers here.
Don’t see yours? Reach out!
What happens if a node fails during my job?
Ocean Nodes are designed to manage failures locally, keeping compute job execution predictable and controlled. If a node goes down mid-run, the job can be restarted on the same node once it becomes available again. Funds are released from escrow only when the node explicitly marks a job as successful. Rerouting is handled by the user, in line with the Ocean Network ethos of giving users full control over which resources are used. This differs when a failure is caused by the algorithm itself. In that case, the job is treated as unsuccessful because the execution failed, not because the node was unavailable. See the dedicated FAQ for algorithm failures.
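The settlement rules above can be sketched as a small state model. This is an illustration only; the outcome names and the `settle` function are assumptions, not the Ocean Nodes API:

```python
from enum import Enum, auto

class JobOutcome(Enum):
    SUCCESS = auto()            # node explicitly marks the job successful
    NODE_UNAVAILABLE = auto()   # node went down mid-run
    ALGORITHM_FAILURE = auto()  # the algorithm itself errored

def settle(outcome: JobOutcome) -> str:
    """Illustrative settlement logic: escrow is released only on explicit success."""
    if outcome is JobOutcome.SUCCESS:
        return "release escrow to node operator"
    if outcome is JobOutcome.NODE_UNAVAILABLE:
        # The user decides whether to restart on the same node once it is back up.
        return "hold escrow; job may be restarted on the same node"
    # Algorithm failures count as unsuccessful execution, not node downtime.
    return "job unsuccessful; no payout"

print(settle(JobOutcome.SUCCESS))
```

The key invariant is that no payout happens until the node explicitly reports success.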
Does Ocean Network use pay-per-use pricing?
We use a pay-per-use model with an escrow protection mechanism, meaning you pay for compute only when jobs are actually running, not for idle capacity. What this means for you: only good things. With Ocean Network, there’s no stress about being conservative with time estimates. You can safely overestimate your job duration or escrow cap, because you’ll only ever be charged for jobs that complete successfully. No wasted budget, no penalty for playing it safe.
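A rough illustration of why overestimating is safe, assuming a simple per-hour rate; the function, rate, and numbers are hypothetical, not Ocean Network's actual billing formula:

```python
def charge(runtime_hours: float, rate_per_hour: float,
           escrow_cap: float, succeeded: bool) -> float:
    """Hypothetical pay-per-use charge: pay only for successful runs,
    never more than the escrow cap."""
    if not succeeded:
        return 0.0  # unsuccessful jobs are not charged
    return min(runtime_hours * rate_per_hour, escrow_cap)

# Overestimating the cap costs nothing extra: you pay for actual runtime.
print(charge(runtime_hours=2.0, rate_per_hour=1.5, escrow_cap=10.0, succeeded=True))   # 3.0
print(charge(runtime_hours=2.0, rate_per_hour=1.5, escrow_cap=10.0, succeeded=False))  # 0.0
```

Setting a generous escrow cap only changes the ceiling, never the actual charge for a successful run.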
How do paid compute jobs work?
Paid jobs are charged per run, based on the resource time consumed and the compute environment you select.
Do I need to set up servers as a compute user?
No. There is no server setup: Ocean Nodes offer serverless GPU compute. You choose a preferred Ocean Node with the resources you need, submit a containerized job, and get the results back without managing servers or infrastructure.
Do I need to set up servers as a node operator?
Yes. As a node operator, you run your own infrastructure. You can install an Ocean Node using Docker and the ocean-node-quickstart.sh script, or use the Ocean Network Run a Node page to automate much of the setup. You also need to:
- Configure networking and open the required ports
- Set up your compute environment
- Set up your EVM private key and admin wallet
Your node must be publicly accessible to pass monitor checks for uptime, validity, and performance benchmarks. Rewards must be claimed, so keep funds on hand for gas fees.
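The operator steps above can be sketched as a shell session. The script name comes from this FAQ; the environment variable names and port number are illustrative assumptions, so check the Ocean Network Run a Node page for the real values:

```shell
# Illustrative sketch only -- variable names and port are assumptions.
export PRIVATE_KEY="0x..."    # your EVM private key
export ADMIN_WALLET="0x..."   # your admin wallet address

# Open the node's public API port so monitor checks can reach it
# (8000 is an assumed default, not a confirmed one).
sudo ufw allow 8000/tcp

# Run the quickstart script referenced in this FAQ (requires Docker).
bash ocean-node-quickstart.sh
```

Because monitor checks probe your node from the outside, the firewall and port-forwarding step is as important as the install itself.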
Can I deploy a long-running inference server on an Ocean Node?
Not as the default pattern. Ocean Nodes are built for batch and GPU inference jobs that complete within a bounded runtime window, not for long-running inference servers that stay on indefinitely.
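A job that fits this model reads its inputs, runs inference once over the batch, emits results, and exits. A minimal sketch, where the `predict` function is a placeholder standing in for real model inference:

```python
import json

def predict(record: dict) -> dict:
    """Placeholder model: a real job would load weights and run inference here."""
    return {"id": record["id"], "score": len(record.get("text", ""))}

def run_batch(records: list[dict]) -> list[dict]:
    # Process the whole batch, then exit -- no long-lived server loop.
    return [predict(r) for r in records]

results = run_batch([{"id": 1, "text": "hello"}, {"id": 2, "text": "ocean"}])
print(json.dumps(results))
```

The defining trait is that the process terminates when the batch is done, which is what lets the job fit inside a runtime window instead of holding a GPU indefinitely.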