Hive Node Deployment

This guide covers deploying a Hive Node, the Python/Flask service that powers Elastos Hive personal data vaults, backed by MongoDB for structured data and IPFS for file storage. Commands and defaults align with the upstream repository and its published Docker flow.

info

Repository: The maintained codebase is elastos/Elastos.Hive.Node (Flask application under src/, orchestrated with manage.py and run.sh). The latest release is v2.9.1. For the official deployment guide, see the deployment instructions.

What is a Hive Node?

A Hive Node is a personal data vault server that combines:

  • MongoDB for structured application data and vault metadata.
  • IPFS for blob storage; files are pinned on an IPFS daemon co-located with (or reachable by) the Hive service.

End users authenticate with their Elastos DID. The service isolates data so that each user and application combination receives a dedicated MongoDB database namespace, while large objects are stored on the paired IPFS node. The HTTP API (default port 5000) implements vault lifecycle, scripting, and file operations consumed by Hive client SDKs.

tip

For application development, prefer the official Hive client SDKs (Java, Swift, and others) instead of calling REST endpoints directly; the SDKs handle DID auth, tokens, and protocol details.

Requirements

| Item | Notes |
| --- | --- |
| OS | Ubuntu 22.04 LTS is recommended and commonly used for deployment. |
| Docker | Docker Engine and Docker Compose (for Method 1). |
| Python | Python 3.9+ (see requirements.txt in the source tree for the exact version). |
| Hardware | Minimum 2 CPU cores, 4 GB RAM, 50 GB disk; increase disk for heavy IPFS use. |
| Production | A DNS name and TLS certificate (e.g. via nginx or another reverse proxy). |
| Identity | An Elastos DID for the node operator / service configuration (credentials via .env; see reference). |
warning

Sizing: IPFS datastore growth is unbounded in practice. Plan monitoring and disk expansion; MongoDB also grows with vault usage. Under-provisioned disks are a common production failure mode.

Method 1: Docker Deployment

Upstream publishes a Docker-based installer flow that pulls images and wires together MongoDB, IPFS, and the Hive API container.

Step 1: Install Docker

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER

Log out and back in (or start a new session) so the docker group membership applies.

Step 2: Download Hive Node

Clone the repository and check out the latest release tag:

git clone https://github.com/elastos/Elastos.Hive.Node.git
cd Elastos.Hive.Node
git checkout release-v2.9.1

Step 3: Run Installation

# Follow the setup instructions in the repository README:
# https://github.com/elastos/Elastos.Hive.Node
docker compose up -d

This brings up the MongoDB, IPFS, and Hive API containers defined in the repository's Docker Compose file.

Step 4: Verify

docker ps

You should see the hive-node API container and MongoDB. Many current Docker Compose layouts also run a dedicated IPFS container (hive-ipfs), so three running containers is normal for a full stack.

info

Compose layout: The canonical docker-compose.yaml in the repo defines mongodb, ipfs, and hive-node services. Older docs that mention only two containers refer to the API and database; IPFS is still required for file vault features.
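The three-service layout described in the note can be sketched as follows. This is an illustration only, not the canonical upstream file: service names, image tags, ports, and volume paths are assumptions to be checked against the repository's docker-compose.yaml.

```yaml
# Illustrative sketch only -- consult the repository's docker-compose.yaml
# for the canonical service names, images, ports, and volumes.
services:
  mongodb:
    image: mongo:4.4            # assumed tag
    volumes:
      - ./data/mongodb:/data/db
  ipfs:
    image: ipfs/kubo:latest     # assumed tag
    volumes:
      - ./data/ipfs:/data/ipfs
  hive-node:
    image: elastos/hive-node:latest   # assumed image name
    ports:
      - "5000:5000"             # Hive HTTP API
    depends_on:
      - mongodb
      - ipfs
```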

Step 5: Run Tests

From the same project directory:

./run.sh test

Method 2: Direct Deployment

Use this path when you need to run processes on the host without Docker (advanced). You must provide MongoDB and IPFS yourself and point the node's configuration at their URLs.

Step 1: Install Dependencies

sudo apt update && sudo apt install -y python3 python3-pip python3-venv
# Install MongoDB using distro packages or MongoDB’s official repo for your Ubuntu version.
sudo apt install -y mongodb
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
warning

MongoDB on Ubuntu: Package names and versions differ by release. For production, install a supported mongodb-org release from MongoDB’s documentation and enable authentication; do not expose an unauthenticated MongoDB to the network.
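As a sketch of the hardening this warning calls for, a mongod.conf fragment along these lines keeps the database off the public network and requires credentials. The keys follow stock MongoDB YAML configuration; verify them against the documentation for your installed version.

```yaml
# /etc/mongod.conf (fragment) -- require auth, bind to loopback only
security:
  authorization: enabled
net:
  port: 27017
  bindIp: 127.0.0.1
```

You still need to create an admin user before enabling authorization, or you will lock yourself out of the database.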

Step 2: Install IPFS

Install Kubo (formerly go-ipfs). Use the latest stable release:

wget https://dist.ipfs.tech/kubo/v0.40.1/kubo_v0.40.1_linux-amd64.tar.gz
tar -xvzf kubo_v0.40.1_linux-amd64.tar.gz
cd kubo && sudo bash install.sh
ipfs init
info

go-ipfs was renamed to Kubo in 2022. Adjust the version number to the latest release.

Start the IPFS daemon so the API is available (default API port 5001).
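One common way to keep the daemon running across reboots is a systemd unit. This is a minimal sketch, not an upstream-provided file; it assumes a dedicated ipfs user, the install path used by Kubo's install.sh, and the default repo location.

```ini
# /etc/systemd/system/ipfs.service -- minimal sketch (paths and user are assumptions)
[Unit]
Description=IPFS (Kubo) daemon
After=network.target

[Service]
User=ipfs
Environment=IPFS_PATH=/home/ipfs/.ipfs
ExecStart=/usr/local/bin/ipfs daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now ipfs`, then confirm the API answers on port 5001.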

Step 3: Configure

Configuration is driven by HiveSetting in src/settings.py, which loads a .env file (default /etc/hive/.env, overridable with the HIVE_CONFIG environment variable). Set at least:

| Concern | What to set |
| --- | --- |
| Data root | DATA_STORE_PATH: vault and DID cache layout under this directory (default relative ./data). |
| MongoDB | MONGODB_URL: e.g. mongodb://localhost:27017 on the host; Docker defaults use service hostnames like hive-mongo. |
| IPFS API | IPFS_NODE_URL: e.g. http://127.0.0.1:5001 when IPFS runs locally. |
| DID resolution | EID_RESOLVER_URL: e.g. https://api.elastos.io/eid for mainnet DID document verification. |
| Token lifetimes | AUTH_CHALLENGE_EXPIRED and ACCESS_TOKEN_EXPIRED are implemented as properties in src/settings.py (defaults 180 seconds and 604800 seconds). Change them in code if you need different values; they are not environment toggles in the stock file. |
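Putting the settings above together, a minimal .env for a bare-metal host might look like the following sketch. The values are local-development placeholders, and the sanity loop only confirms each required key is present before you start the node; it does not validate that the services are reachable.

```shell
# Write a minimal .env (values are local-development placeholders).
cat > /tmp/hive.env <<'EOF'
DATA_STORE_PATH=./data
MONGODB_URL=mongodb://localhost:27017
IPFS_NODE_URL=http://127.0.0.1:5001
EID_RESOLVER_URL=https://api.elastos.io/eid
EOF

# Sanity-check that every required key is present.
missing=0
for key in DATA_STORE_PATH MONGODB_URL IPFS_NODE_URL EID_RESOLVER_URL; do
  grep -q "^${key}=" /tmp/hive.env || { echo "missing: ${key}"; missing=1; }
done
[ "${missing}" -eq 0 ] && echo "config ok"
```

In production you would write the file to /etc/hive/.env (or point HIVE_CONFIG elsewhere) and add the service-identity fields discussed below.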
tip

Service DID: Production nodes typically set SERVICE_DID_PRIVATE_KEY, NODE_CREDENTIAL, and related fields in .env so the vault presents a proper service identity. Never commit real keys to git.

Step 4: Start

The Flask app is exposed via manage.py (port 5000 by default):

source .venv/bin/activate
python3 manage.py runserver

For development with CORS enabled as described in the README:

python3 manage.py -c dev runserver
info

Health checks: After startup, you can POST to /api/v1/echo or open /api/v1/hive/version as documented in the upstream README to confirm the service responds.

Configuration Reference

Values below reflect HiveSetting defaults in src/settings.py for a generic checkout; Docker Compose overrides several URLs to use internal service names.

| Setting | Typical default | Description |
| --- | --- | --- |
| DATA_STORE_PATH | ./data | Root for vaults, DID cache, and related files. |
| MONGODB_URL | mongodb://hive-mongo:27017 | MongoDB connection string (use localhost on bare metal). |
| IPFS_NODE_URL | http://hive-ipfs:5001 | IPFS HTTP API endpoint. |
| IPFS_GATEWAY_URL | http://hive-ipfs:8080 | IPFS gateway (when used). |
| EID_RESOLVER_URL | https://api.elastos.io/eid | Resolver for Elastos DID documents. |
| AUTH_CHALLENGE_EXPIRED | 180 | Authentication challenge lifetime (seconds). |
| ACCESS_TOKEN_EXPIRED | 604800 | Access token lifetime (seconds). |
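The two lifetime defaults are easier to sanity-check in human units; simple shell arithmetic converts them (the numbers are the defaults from the table, nothing node-specific):

```shell
AUTH_CHALLENGE_EXPIRED=180      # seconds
ACCESS_TOKEN_EXPIRED=604800     # seconds

# 180 s is 3 minutes; 604800 s is 7 days.
echo "challenge: $((AUTH_CHALLENGE_EXPIRED / 60)) minutes"
echo "token:     $((ACCESS_TOKEN_EXPIRED / 86400)) days"
```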

Production Checklist

  • Terminate TLS at a reverse proxy (nginx, Caddy, etc.) and only expose HTTPS (commonly 443) to clients.
  • Firewall so only required ports are reachable; the Hive API should not be wide-open without auth layers you trust.
  • Enable MongoDB authentication and network isolation between Hive and the database.
  • Backups: schedule mongodump (or equivalent) and track IPFS pin lists / critical CIDs your users rely on.
  • Monitor disk, especially the IPFS repo and DATA_STORE_PATH.
  • Register or publish your node’s service DID and endpoint according to your product’s requirements so clients can discover and trust the vault.
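For the TLS bullet, a reverse-proxy server block along these lines is a common shape. This is a sketch assuming the Hive API listens on 127.0.0.1:5000; the server name and certificate paths are placeholders you must replace with your own.

```nginx
# Illustrative nginx reverse proxy for a Hive Node on 127.0.0.1:5000.
server {
    listen 443 ssl;
    server_name hive.example.com;                      # placeholder

    ssl_certificate     /etc/ssl/hive/fullchain.pem;   # placeholder path
    ssl_certificate_key /etc/ssl/hive/privkey.pem;     # placeholder path

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Pair this with a firewall rule so port 5000 itself is reachable only from the proxy.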
warning

Never run a production Hive Node with default passwords, an open MongoDB, or plain HTTP unless your threat model explicitly allows it. Treat operator keys and .env files as secrets.