This page outlines the hardware, software, and network requirements for running Chainstack Self-Hosted.
Looking to evaluate the software without dedicated hardware? See Evaluation setup for minimal requirements.
Overview
Chainstack Self-Hosted consists of two main components with different resource requirements:
- Control Panel — the management platform that handles deployments, authentication, and orchestration
- Blockchain nodes — the actual blockchain clients that you deploy and manage
Control Panel requirements
The Control Panel runs inside a Kubernetes cluster and includes the web interface, API services, database, and workflow orchestration.
Hardware requirements
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 5 cores | 8 cores |
| RAM | 6 GiB | 16 GiB |
| Storage | 15 GB | 20 GB |
These requirements are for the Control Panel only. Blockchain nodes require additional resources as specified below.
Software requirements
| Component | Requirement | Notes |
|---|---|---|
| Kubernetes | Any recent version | k3s, k8s, EKS, GKE, AKS supported |
| Helm | v3.x or later | Required for installation |
| yq | v4.x or later | mikefarah/yq required |
| openssl | Any recent version | Required for certificate generation |
| kubectl | Compatible with cluster | Required for cluster management |
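Before installing, it is worth confirming that all of these tools are actually on `PATH`. The helper below is an illustrative sketch, not part of the installer; the version flags in the comments are the standard ones for each tool.

```shell
# Verify that the required CLI tools are available before installing.
# Illustrative helper -- not part of the installer.
check_tools() {
  missing=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "found: $tool"
    else
      echo "missing: $tool"
      missing=1
    fi
  done
  return "$missing"
}

check_tools helm yq openssl kubectl || echo "install the missing tools first"

# Once present, confirm versions (helm v3.x+, mikefarah/yq v4.x+):
#   helm version --short
#   yq --version
#   openssl version
#   kubectl version --client
```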
Operating systems
The Control Panel is distributed as an umbrella Helm chart and runs on any Kubernetes-compatible environment. The underlying host operating system should be a modern Linux distribution capable of running Kubernetes.
Tested configurations:
- Ubuntu 22.04 LTS with k3s
- Ubuntu 24.04 LTS with k3s
Kubernetes storage requirements
A storage class with dynamic provisioning is required for persistent volumes. The installer creates PersistentVolumeClaims for the PostgreSQL database.
Tips for storage classes:
- k3s — local-path (default) or TopoLVM for multi-disk setups
- Cloud providers — use the default storage class (gp2/gp3 for AWS, standard for GCP, managed-premium for Azure)
- On-premises — any CSI-compatible storage provisioner
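To see which storage class will serve the PVCs, list the classes and look for the default marker. The `awk` helper below simply parses `kubectl get storageclass` output and is an illustrative sketch:

```shell
# Print the name of the default storage class; kubectl appends "(default)"
# to the name based on the is-default-class annotation.
default_storageclass() {
  awk '/\(default\)/ {print $1}'
}

# Usage against a live cluster:
#   kubectl get storageclass | default_storageclass
```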
Blockchain node requirements
Resource requirements vary by protocol and node configuration. Below are the requirements for currently supported configurations.
Ethereum Mainnet full node (Reth + Prysm)
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 6+ cores |
| RAM | 16 GB | 32 GB |
| Storage (steady state) | 2 TB NVMe SSD | 4 TB NVMe SSD |
| Storage (during deploy) | 4 TB NVMe SSD | 4 TB NVMe SSD |
Ethereum Sepolia testnet full node (Reth + Prysm)
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 6+ cores |
| RAM | 16 GB | 32 GB |
| Storage (steady state) | 1.5 TB NVMe SSD | 2 TB NVMe SSD |
| Storage (during deploy) | 3 TB NVMe SSD | 3 TB NVMe SSD |
Ethereum Hoodi testnet full node (Reth + Prysm)
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 6+ cores |
| RAM | 16 GB | 32 GB |
| Storage (steady state) | 250 GB NVMe SSD | 500 GB NVMe SSD |
| Storage (during deploy) | 500 GB NVMe SSD | 500 GB NVMe SSD |
The during-deploy row reflects the ~2× peak required by snapshot bootstrap: the node downloads the snapshot archive and then extracts it into the chain data directory, so both copies coexist on disk until extraction completes. Provision the node’s persistent volume for this peak; once extraction finishes, the extra space becomes growth headroom.
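As a sketch of that sizing arithmetic (the GB values here are illustrative, not measured archive sizes):

```shell
# During-deploy peak = snapshot archive + extracted chain data, since both
# coexist on disk until extraction completes. Sizes in GB are illustrative.
deploy_peak_gb() {
  archive_gb=$1
  extracted_gb=$2
  echo $((archive_gb + extracted_gb))
}

deploy_peak_gb 250 250   # Hoodi-like: ~500 GB peak, matching the table above
```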
Storage considerations
- SSD type — TLC NVMe drives are strongly recommended. Avoid QLC drives due to lower write endurance and performance
- Storage growth — the Ethereum chain grows approximately 1 TB per year. Plan for 4 TB to ensure several years of headroom
- IOPS — high random read/write performance is critical. Target drives with sustained write speeds of 1+ GB/s
For a deep dive on SSD selection for Ethereum nodes, see yorickdowne’s SSD guide.
CPU considerations
- Clock speed matters more than core count for blockchain nodes
- Higher single-thread performance improves block processing
- AMD Ryzen 7000/9000 series or Intel Core 12th gen+ recommended
Initial sync time
Nodes bootstrap from a pre-built chain snapshot rather than syncing from genesis. Initial bootstrap completes in minutes to hours depending on network conditions; the node then catches up to the chain head in the background. See Initial sync times for the breakdown.
Network requirements
Internal communication (within Kubernetes cluster)
All Control Panel services communicate internally within the Kubernetes cluster. The following ports are used for internal service-to-service communication:
| Service | Port | Protocol | Purpose |
|---|---|---|---|
| cp-ui | 80 | TCP | Web interface |
| cp-auth | 8080, 9090 | TCP | Authentication service |
| cp-deployments-api | 8080, 9090 | TCP | Deployment management API |
| keycloak | 80, 8080 | TCP | Identity management |
| PostgreSQL | 5432 | TCP | Database |
| Temporal | 7233–7246 | TCP | Workflow orchestration |
External access
For browser-based UI access, you need to expose both the UI service (cp-cp-ui) and the deployments API service (cp-cp-deployments-api). The browser loads the UI page from cp-cp-ui and then calls cp-cp-deployments-api directly using the --backend-url you set at install time.
| Service | Default external port | Purpose |
|---|---|---|
| cp-cp-ui | 80 | Web interface |
| cp-cp-deployments-api | 8081 (mapped to internal 8080) | Backend API used by the UI |
Either service can be exposed using:
| Method | Use case |
|---|---|
| LoadBalancer | Production deployments with external load balancer |
| NodePort | Direct access via node IP and port |
| Ingress | Production deployments with ingress controller |
| Port forward | Development and testing |
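For development and testing, port forwarding is the quickest option. The service names below are the defaults described above; the `chainstack` namespace is an assumption — check `kubectl get svc -A` for your install.

```shell
# Forward both services to localhost (assumed namespace: chainstack).
ui_port=8080
api_port=8081
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n chainstack port-forward svc/cp-cp-ui "${ui_port}:80" &
  kubectl -n chainstack port-forward svc/cp-cp-deployments-api "${api_port}:8080" &
  echo "UI: http://localhost:${ui_port}  API: http://localhost:${api_port}"
else
  echo "kubectl not found on PATH"
fi
```

Note that the API's local port must match the `--backend-url` configured at install time, since the browser calls the deployments API directly.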
Firewall rules
The UI service port (default 80) and the deployments API port (default 8081) must both be reachable from clients that will access the Control Panel through a browser. All other Control Panel services communicate internally inside the cluster.
For blockchain nodes, the required ports depend on the protocol:
Ethereum nodes
| Port | Protocol | Purpose |
|---|---|---|
| 8545 | TCP | Execution client HTTP RPC |
| 8546 | TCP | Execution client WebSocket (WSS) |
| 5052 | TCP | Consensus client HTTP API |
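The corresponding firewall rules can be sketched as below. The snippet is a dry run by design: it only prints `ufw` commands for review (pipe to `sh` as root to apply), and it assumes you want the RPC endpoints reachable — restrict source ranges in production rather than opening the ports to all clients.

```shell
# Print ufw allow rules for the Ethereum node RPC ports listed above.
# Dry-run by design: review the output, then pipe to sh as root to apply.
print_ufw_rules() {
  for port in "$@"; do
    echo "ufw allow ${port}/tcp"
  done
}

print_ufw_rules 8545 8546 5052
```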
Combined infrastructure example
For a complete deployment running the Control Panel plus one Ethereum Mainnet full node (Light preset):
| Resource | Control Panel | Ethereum Node (Light) | Total |
|---|---|---|---|
| CPU | 5 cores | 4 cores | 9 cores |
| RAM | 6 GiB | 16 GiB | 22 GiB |
| Storage (steady state) | 15 GB | 2 TB | ~2 TB |
| Storage (during deploy) | 15 GB | 4 TB | ~4 TB |
For the Mainnet standard preset (8 cores / 32 GiB per node), the combined total is 13 cores / 38 GiB / ~2 TB steady state (~4 TB during deploy).
Example server configurations
Budget configuration (single Ethereum Mainnet node)
- CPU — AMD Ryzen 5 7600 or Intel Core i5-13400
- RAM — 32 GB DDR5
- Storage — 4 TB NVMe SSD
Testnet configuration (2 Ethereum testnet nodes)
- CPU — AMD Ryzen 5 7600 or Intel Core i5-13400
- RAM — 32 GB DDR5
- Storage — 3 TB NVMe SSDs
Trusted infrastructure partners
Chainstack Self-Hosted works on any compatible infrastructure you provision yourself. To simplify hardware procurement, you can use one of our trusted partners that offer servers meeting the requirements above.
Next steps
Once you’ve verified your system meets these requirements:
- Environment setup — Install Kubernetes and required tools
- Quick start guide — Complete walkthrough from zero to running
- Installation guide — Detailed installation instructions