FAQ

Questions about running compute jobs on Ocean Network? You’ll find the answers here.
Don’t see yours? Reach out!
Do compute jobs have a maximum runtime limit?
Yes. Every environment has a maximum job duration, which keeps execution predictable for node operators. Each environment also shows its max job duration and current resource availability in the Ocean Network dashboard, so you can pick what fits your run. For workloads that exceed any available environment's limit, split the run into a multi-stage pipeline so one long run becomes several separate, shorter jobs.
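The splitting idea above can be sketched in a few lines. This is an illustrative helper, not an Ocean API: the function name and the one-hour limit are assumptions chosen for the example.

```python
# Hypothetical sketch: split one long run into stages that each fit
# within an environment's max job duration. The 1-hour limit below is
# an assumed value; check the dashboard for the real per-environment limit.

MAX_JOB_DURATION_S = 3600  # assume the chosen environment allows 1 hour per job

def split_into_stages(total_work_s, max_duration_s=MAX_JOB_DURATION_S):
    """Return per-stage durations so no single job exceeds the limit."""
    stages = []
    remaining = total_work_s
    while remaining > 0:
        chunk = min(remaining, max_duration_s)
        stages.append(chunk)
        remaining -= chunk
    return stages

# A 2.5-hour workload becomes three jobs: two full-length and one short.
print(split_into_stages(9000))  # [3600, 3600, 1800]
```

Each stage would then be submitted as its own job, with intermediate results saved and passed to the next stage.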
How do I check resource availability in the Ocean Network dashboard?
The Ocean Network dashboard shows live resource availability per node and environment, so you can pick where to run your compute job. You can see what capacity is free versus in use, such as GPU, CPU, memory, and storage, plus environment limits like max job duration and runtime details. Use it as a quick reality check before you submit, so your job's requested resources match what the environment actually has free.
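A pre-submission "reality check" like the one described above might look as follows. The dictionary shapes and field names are assumptions made for this sketch, not the dashboard's actual data format.

```python
# Illustrative pre-submission check: compare a job's requested resources
# against what the dashboard reports as free for an environment.
# Field names (gpu, cpu_cores, memory_gb, storage_gb) are assumptions.

def fits(requested, available):
    """True if every requested resource is within the free capacity."""
    return all(requested[k] <= available.get(k, 0) for k in requested)

env_free = {"gpu": 1, "cpu_cores": 16, "memory_gb": 64, "storage_gb": 500}
job_needs = {"gpu": 1, "cpu_cores": 8, "memory_gb": 32}

print(fits(job_needs, env_free))   # True: the job fits this environment
print(fits({"gpu": 2}, env_free))  # False: not enough free GPUs
```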
How can I become a node operator and earn rewards?
Install an Ocean Node using Docker and the ocean-node-quickstart.sh script, or use the Ocean Network Run a node page, which automates much of the setup. Configure your networking (open ports) and compute environment, and set up your EVM private key and admin wallet. Your node must be publicly accessible to pass the monitor checks that verify uptime, validity, and performance benchmarks (CPU, GPU, bandwidth). Rewards are paid to your node address for successful participation and completed compute work. Read more...
Is there currently a trial program available?
Yes. New users receive $100 worth of Complimentary Credits, which give access to high-performance GPU workloads so you can validate and experiment before moving to paid compute and topping up the escrow.
What is Test Compute?
Test Compute lets users run small CPU test jobs without paying upfront and is meant for onboarding, not sustained production usage.
How does Ocean Network reduce the risk of low-quality providers?
GPU benchmarking, node reputation, monitoring, and queueing policies work together to improve compute reliability over time. Benchmarked nodes receive a verified badge on the environment selection page and leaderboard tables, which increases demand; unreliable nodes are filtered out, and orchestration steers jobs away from poor environment choices.
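The filter-and-rank behaviour described above can be sketched roughly as follows. The fields, threshold, and ranking rule are illustrative assumptions, not Ocean's actual reputation model.

```python
# Rough sketch of filtering out unreliable nodes and preferring
# benchmarked (verified) ones. All values and thresholds are assumed.

nodes = [
    {"id": "node-a", "verified": True,  "uptime": 0.999},
    {"id": "node-b", "verified": False, "uptime": 0.80},
    {"id": "node-c", "verified": True,  "uptime": 0.95},
]

MIN_UPTIME = 0.90  # assumed reliability cutoff

# Drop nodes below the cutoff, then rank: verified first, then by uptime.
eligible = [n for n in nodes if n["uptime"] >= MIN_UPTIME]
ranked = sorted(eligible, key=lambda n: (n["verified"], n["uptime"]), reverse=True)
print([n["id"] for n in ranked])  # ['node-a', 'node-c']
```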