Deployment Guide

Stand-Alone Deployment

This short guide walks you through the basic process of using IoTDB. For a more complete guide, please visit the User Guide on our website.

Prerequisites

To use IoTDB, you need to have:

  1. Java >= 1.8 (please make sure the environment path has been set)
  2. The maximum number of open files set to 65535, to avoid the "too many open files" problem (see the example below).
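
On Linux, you can check and raise this limit for the current session as follows (a minimal sketch; making the setting persistent typically involves /etc/security/limits.conf):

# Check the current open-files limit
ulimit -n
# Raise it to 65535 for the current shell session
ulimit -n 65535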

Installation

IoTDB offers three installation methods; you can choose whichever of the following suits you:

  • Installation from source code. Use this method if you need to modify the code yourself.
  • Installation from binary files. Download the binary files from the official website. This is the recommended method: you will get an out-of-the-box binary release package.
  • Using Docker: the Dockerfile is available on GitHub (see the sketch below).
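
For instance, you could build and run an image from that Dockerfile (an illustrative sketch; <path-to-dockerfile-dir> is a placeholder for the Dockerfile directory in the repository, and the image tag is arbitrary):

# Build an image from the Dockerfile shipped in the IoTDB repository
docker build -t iotdb:dev <path-to-dockerfile-dir>
# Run it, exposing the default client RPC port 6667
docker run -d --name iotdb -p 6667:6667 iotdb:dev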

Download

You can download the binary file from the Download Page.

Configurations

Configuration files are under the conf folder:

  • environment config module (datanode-env.bat, datanode-env.sh),
  • system config module (iotdb-datanode.properties)
  • log config module (logback.xml).

For more, see Config in detail.

Start

You can go through the following steps to test the installation; if no error occurs after execution, the installation has completed.

Start IoTDB

IoTDB is a distributed database. To launch IoTDB, you can first start it in standalone mode (i.e. 1 ConfigNode and 1 DataNode) to check the installation.

Users can start IoTDB standalone mode by the start-standalone script under the sbin folder.

# Unix/OS X
> bash sbin/start-standalone.sh
# Windows
> sbin\start-standalone.bat

Note: currently, to run standalone mode you need to ensure that all addresses are set to 127.0.0.1. If you need to access IoTDB from a machine other than the one it runs on, change the configuration item dn_rpc_address to the IP of the machine where IoTDB lives. Replication factors must be set to 1, which is the current default.
Besides, it is recommended to use SimpleConsensus in this mode, since it brings additional efficiency.
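
To verify that the standalone instance is up, you can check for the two processes and connect with the CLI (a quick sanity check; the class names printed by jps may vary slightly between versions, and the default user and password are both root):

# Both a ConfigNode and a DataNode process should be running
jps | grep -iE 'confignode|datanode'
# Connect with the CLI on the default RPC port
bash sbin/start-cli.sh -h 127.0.0.1 -p 6667 -u root -pw root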

Cluster Deployment (Cluster Management Tool)

The IoTDB cluster management tool is an easy-to-use operation and maintenance tool (an enterprise-edition tool).
It is designed to solve the operation and maintenance problems of the multiple nodes in the IoTDB distributed system.
It mainly covers cluster deployment, cluster start and stop, elastic scaling, configuration updates, data export, and other functions, realizing one-click command issuance for complex database clusters and greatly reducing management difficulty.
This document explains how to remotely deploy, configure, start, and stop IoTDB cluster instances with the cluster management tool.

Environment Dependencies

This tool is a supporting tool for TimechoDB(Enterprise Edition based on IoTDB). You can contact your sales representative to obtain the tool download method.

The machines where IoTDB is to be deployed require JDK 8 or above and the lsof, netstat, and unzip utilities. If they are missing, please install them yourself; you can refer to the installation commands in the last section of this document.

Tip: the IoTDB cluster management tool requires an account with root privileges.

Deployment method

Download and install

This tool is a supporting tool for TimechoDB(Enterprise Edition based on IoTDB). You can contact your salesperson to obtain the tool download method.

Note: since the binary package only supports GLIBC 2.17 and above, the minimum supported OS version is CentOS 7.

  • Enter the following command in the iotd directory:
bash install-iotd.sh

After that, the iotd keyword is activated in subsequent shells. For example, the environment check required before deployment can be run as follows:

iotd cluster check example
  • You can also execute commands directly using <iotd absolute path>/sbin/iotd without activating iotd, for example the pre-deployment environment check:
<iotd absolute path>/sbin/iotd cluster check example

Introduction to cluster configuration files

  • The cluster configuration yaml files live in the iotd/config directory. The yaml file name is the cluster name, and there can be multiple yaml files. To make configuration easier, a default_cluster.yaml example is provided in the iotd/config directory.
  • The yaml configuration consists of five major parts: global, confignode_servers, datanode_servers, grafana_server, and prometheus_server.
  • global is the general configuration, mainly covering the machine username and password, IoTDB local installation files, JDK configuration, and so on. Users can copy default_cluster.yaml, rename it to their own cluster name, and follow the instructions inside to configure the IoTDB cluster. In the default_cluster.yaml sample, all uncommented items are required and commented items are optional.

For example, to run the check command against default_cluster.yaml, simply execute iotd cluster check default_cluster.
For more detailed commands, refer to the command list below.

| parameter name | parameter description | required |
| --- | --- | --- |
| iotdb_zip_dir | IoTDB deployment distribution directory; if the value is empty, the distribution is downloaded from the address specified by iotdb_download_url | NO |
| iotdb_download_url | IoTDB download address; if iotdb_zip_dir has no value, the distribution is downloaded from this address | NO |
| jdk_tar_dir | Local jdk directory; this jdk is uploaded and deployed to the target node | NO |
| jdk_deploy_dir | jdk deployment directory on the remote machine; together with the jdk_dir_name parameter below it forms the complete jdk deployment directory, i.e. <jdk_deploy_dir>/<jdk_dir_name> | NO |
| jdk_dir_name | Directory name after jdk decompression, defaults to jdk_iotdb | NO |
| iotdb_lib_dir | IoTDB lib directory or IoTDB lib compressed package (only .zip format is supported), used only for IoTDB upgrades. Commented out by default; to upgrade, uncomment it and modify the path. If you use a zip file, compress the iotdb/lib directory with the zip command, e.g. zip -r lib.zip apache-iotdb-1.2.0/lib/* | NO |
| user | User name for ssh login to the deployment machine | YES |
| password | Password for ssh login; if password is not specified, pkey is used to log in, so please make sure key-based ssh login between nodes has been configured | NO |
| pkey | Key for login; if password has a value, password is used first, otherwise pkey is used | NO |
| ssh_port | ssh port | YES |
| deploy_dir | IoTDB deployment directory; together with the iotdb_dir_name parameter below it forms the complete IoTDB deployment directory, i.e. <deploy_dir>/<iotdb_dir_name> | YES |
| iotdb_dir_name | Directory name after IoTDB decompression, defaults to iotdb | NO |
| datanode-env.sh | Corresponds to iotdb/config/datanode-env.sh; when both global and datanode_servers are configured, the value in datanode_servers is used first | NO |
| confignode-env.sh | Corresponds to iotdb/config/confignode-env.sh; when both global and confignode_servers are configured, the value in confignode_servers is used first | NO |
| iotdb-common.properties | Corresponds to <iotdb path>/config/iotdb-common.properties | NO |
| cn_target_config_node_list | Cluster configuration address; points to a surviving ConfigNode, confignode_x by default. When both global and confignode_servers are configured, the value in confignode_servers is used first. Corresponds to cn_target_config_node_list in iotdb/config/iotdb-confignode.properties | YES |
| dn_target_config_node_list | Cluster configuration address; points to a surviving ConfigNode, confignode_x by default. When both global and datanode_servers are configured, the value in datanode_servers is used first. Corresponds to dn_target_config_node_list in iotdb/config/iotdb-datanode.properties | YES |

Among them, datanode-env.sh and confignode-env.sh can be configured with an extra parameter, extra_opts; when it is configured, the corresponding values are appended to datanode-env.sh and confignode-env.sh. Refer to default_cluster.yaml for a configuration example, such as:
datanode-env.sh:
  extra_opts: |
    IOTDB_JMX_OPTS="$IOTDB_JMX_OPTS -XX:+UseG1GC"
    IOTDB_JMX_OPTS="$IOTDB_JMX_OPTS -XX:MaxGCPauseMillis=200"

  • confignode_servers is the configuration for deploying IoTDB ConfigNodes; multiple ConfigNodes can be configured.
    By default, the first ConfigNode started, node1, is regarded as the Seed-ConfigNode.
| parameter name | parameter description | required |
| --- | --- | --- |
| name | ConfigNode name | YES |
| deploy_dir | IoTDB ConfigNode deployment directory | YES |
| iotdb-confignode.properties | Corresponds to iotdb/config/iotdb-confignode.properties; refer to the iotdb-confignode.properties file description for more details | NO |
| cn_internal_address | Internal communication address, corresponding to cn_internal_address in iotdb/config/iotdb-confignode.properties | YES |
| cn_target_config_node_list | Cluster configuration address; points to a surviving ConfigNode, confignode_x by default. When both global and confignode_servers are configured, the value in confignode_servers is used first. Corresponds to cn_target_config_node_list in iotdb/config/iotdb-confignode.properties | YES |
| cn_internal_port | Internal communication port, corresponding to cn_internal_port in iotdb/config/iotdb-confignode.properties | YES |
| cn_consensus_port | Corresponds to cn_consensus_port in iotdb/config/iotdb-confignode.properties | NO |
| cn_data_dir | Corresponds to cn_data_dir in iotdb/config/iotdb-confignode.properties | YES |
| iotdb-common.properties | Corresponds to iotdb/config/iotdb-common.properties; when both global and confignode_servers are configured, the value in confignode_servers is used first | NO |
  • datanode_servers is the configuration for deploying IoTDB DataNodes; multiple DataNodes can be configured.
| parameter name | parameter description | required |
| --- | --- | --- |
| name | DataNode name | YES |
| deploy_dir | IoTDB DataNode deployment directory | YES |
| iotdb-datanode.properties | Corresponds to iotdb/config/iotdb-datanode.properties; refer to the iotdb-datanode.properties file description for more details | NO |
| dn_rpc_address | DataNode rpc address, corresponding to dn_rpc_address in iotdb/config/iotdb-datanode.properties | YES |
| dn_internal_address | Internal communication address, corresponding to dn_internal_address in iotdb/config/iotdb-datanode.properties | YES |
| dn_target_config_node_list | Cluster configuration address; points to a surviving ConfigNode, confignode_x by default. When both global and datanode_servers are configured, the value in datanode_servers is used first. Corresponds to dn_target_config_node_list in iotdb/config/iotdb-datanode.properties | YES |
| dn_rpc_port | DataNode rpc port, corresponding to dn_rpc_port in iotdb/config/iotdb-datanode.properties | YES |
| dn_internal_port | Internal communication port, corresponding to dn_internal_port in iotdb/config/iotdb-datanode.properties | YES |
| iotdb-common.properties | Corresponds to iotdb/config/iotdb-common.properties; when both global and datanode_servers are configured, the value in datanode_servers is used first | NO |
  • grafana_server is the configuration related to deploying Grafana
| parameter name | parameter description | required |
| --- | --- | --- |
| grafana_dir_name | Grafana decompression directory name (default grafana_iotdb) | NO |
| host | IP of the server where Grafana is deployed | YES |
| grafana_port | Port of the Grafana deployment machine, default 3000 | NO |
| deploy_dir | Grafana deployment server directory | YES |
| grafana_tar_dir | Location of the Grafana compressed package | YES |
| dashboards | dashboards directory | NO |
  • prometheus_server is the configuration related to deploying Prometheus.
| parameter name | parameter description | required |
| --- | --- | --- |
| prometheus_dir_name | Prometheus decompression directory name, default prometheus_iotdb | NO |
| host | IP of the server where Prometheus is deployed | YES |
| prometheus_port | Port of the Prometheus deployment machine, default 9090 | NO |
| deploy_dir | Prometheus deployment server directory | YES |
| prometheus_tar_dir | Path to the Prometheus compressed package | YES |
| storage_tsdb_retention_time | Number of days to retain data, 15 days by default | NO |
| storage_tsdb_retention_size | Maximum data size to retain for the specified block, default 512M; note the units: KB, MB, GB, TB, PB, EB | NO |

If metrics are configured in iotdb-datanode.properties and iotdb-confignode.properties of config/xxx.yaml, the configuration is automatically written into Prometheus without manual modification.

Note: if the value of a yaml key contains special characters, it is recommended to wrap the entire value in double quotes; also avoid file paths containing spaces, to prevent parsing problems.
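
For orientation, a heavily trimmed cluster yaml might look like the sketch below. It is illustrative only: the field names follow the tables above, all values are placeholders, and the exact nesting should be taken from default_cluster.yaml, which remains the authoritative template.

global:
  user: root
  ssh_port: 22
  deploy_dir: /home/iotdb/deploy
  iotdb_zip_dir: /home/iotdb/apache-iotdb-1.2.0-all-bin.zip
confignode_servers:
  - name: confignode_1
    cn_internal_address: 192.168.1.10
    cn_internal_port: 10710
    cn_target_config_node_list: 192.168.1.10:10710
datanode_servers:
  - name: datanode_1
    dn_rpc_address: 192.168.1.20
    dn_rpc_port: 6667
    dn_internal_address: 192.168.1.20
    dn_target_config_node_list: 192.168.1.10:10710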

Usage scenarios

Clean data

  • Cleaning the cluster data will delete the data directory in the IoTDB cluster, as well as the cn_system_dir, cn_consensus_dir,
    dn_data_dirs, dn_consensus_dir, and dn_system_dir directories configured in the yaml file, plus the logs and ext directories.
  • First execute the stop cluster command, and then execute the cluster cleanup command.
iotd cluster stop default_cluster
iotd cluster clean default_cluster

Cluster destruction

  • Destroying the cluster will delete the data directory in the IoTDB cluster, the cn_system_dir, cn_consensus_dir,
    dn_data_dirs, dn_consensus_dir, dn_system_dir, logs, and ext directories configured in the yaml file, and the IoTDB,
    Grafana, and Prometheus deployment directories.
  • First execute the stop cluster command, and then execute the cluster destruction command.
iotd cluster stop default_cluster
iotd cluster destroy default_cluster

Cluster upgrade

  • To upgrade the cluster, you first need to configure iotdb_lib_dir in config/xxx.yaml as the directory path where the jar to be uploaded to the server is located (for example, iotdb/lib).
  • If you use zip files to upload, please use the zip command to compress the iotdb/lib directory, such as zip -r lib.zip apache-iotdb-1.2.0/lib/*
  • Execute the upload command and then execute the restart IoTDB cluster command to complete the cluster upgrade.
iotd cluster upgrade default_cluster
iotd cluster restart default_cluster

Hot deployment

  • First modify the configuration in config/xxx.yaml.
  • Execute the distribution command, and then execute the hot-deployment command to complete the hot deployment of the cluster configuration.
iotd cluster distribute default_cluster
iotd cluster reload default_cluster

Cluster expansion

  • First modify and add a datanode or confignode node in config/xxx.yaml.
  • Execute the cluster expansion command
iotd cluster scaleout default_cluster

Cluster scale-in

  • First find, in config/xxx.yaml, the node name or ip:port of the node to remove (the confignode port is cn_internal_port, the datanode port is dn_rpc_port)
  • Execute the cluster scale-in command
iotd cluster scalein default_cluster

Using the cluster management tool to manage an existing IoTDB cluster

  • Configure the server's user, password (or pkey) and ssh_port
  • Modify the IoTDB deployment path in config/xxx.yaml: deploy_dir (the IoTDB deployment directory) and iotdb_dir_name (the IoTDB decompression directory name, default iotdb).
    For example, if the full IoTDB deployment path is /home/data/apache-iotdb-1.1.1, set deploy_dir: /home/data/ and iotdb_dir_name: apache-iotdb-1.1.1 in the yaml file
  • If the server does not use java_home, modify jdk_deploy_dir (the jdk deployment directory) and jdk_dir_name (the directory name after jdk decompression, default jdk_iotdb); if java_home is used, these need not be modified.
    For example, if the full jdk deployment path is /home/data/jdk_1.8.2, set jdk_deploy_dir: /home/data/ and jdk_dir_name: jdk_1.8.2 in the yaml file
  • Configure cn_target_config_node_list and dn_target_config_node_list
  • Configure cn_internal_address, cn_internal_port, cn_consensus_port, cn_system_dir, and cn_consensus_dir in iotdb-confignode.properties under confignode_servers.
    If the values in iotdb-common.properties are not the IoTDB defaults, they need to be configured as well; otherwise they can be left unconfigured
  • Configure dn_rpc_address, dn_internal_address, dn_data_dirs, dn_consensus_dir, dn_system_dir, and iotdb-common.properties in iotdb-datanode.properties under datanode_servers
  • Execute the initialization command
iotd cluster init default_cluster

Deploy IoTDB, Grafana and Prometheus

  • Configure iotdb-datanode.properties and iotdb-confignode.properties to enable the metrics interface
  • Configure the Grafana settings; if there are multiple dashboards, separate them with commas, and make sure the names are unique, otherwise they will overwrite one another
  • Configure the Prometheus settings; if the IoTDB cluster is configured with metrics, the Prometheus configuration is modified automatically according to which nodes have metrics enabled, with no manual changes needed
  • Start the cluster
iotd cluster start default_cluster

For more detailed parameters, please refer to the introduction to cluster configuration files above.

Command

The basic usage of this tool is:

iotd cluster <key> <cluster name> [params (Optional)]
  • key indicates a specific command.

  • cluster name indicates the cluster name (that is, the name of the yaml file in the iotd/config directory).

  • params indicates the required parameters of the command (optional).

  • For example, the command format to deploy the default_cluster cluster is:

iotd cluster deploy default_cluster
  • The functions and parameters of the cluster are listed as follows:
| command | description | parameters |
| --- | --- | --- |
| check | check whether the cluster can be deployed | cluster name list |
| clean | clean up the cluster | cluster name |
| deploy | deploy the cluster | cluster name, -N module name (iotdb, grafana, prometheus; optional), -op force (optional) |
| list | cluster status list | none |
| start | start the cluster | cluster name, -N node name (nodename, grafana, prometheus; optional) |
| stop | stop the cluster | cluster name, -N node name (nodename, grafana, prometheus; optional), -op force (optional) |
| restart | restart the cluster | cluster name, -N node name (nodename, grafana, prometheus; optional), -op force (optional) |
| show | view cluster information; the details field shows the cluster information in detail | cluster name, details (optional) |
| destroy | destroy the cluster | cluster name, -N module name (iotdb, grafana, prometheus; optional) |
| scaleout | cluster expansion | cluster name |
| scalein | cluster scale-in | cluster name, -N cluster node name or cluster node ip:port |
| reload | hot load cluster configuration files | cluster name |
| distribute | distribute cluster configuration files | cluster name |
| dumplog | back up specified cluster logs | cluster name, -N cluster node name, -h target machine ip, -pw target machine password, -p target machine port, -path backup directory, -startdate start time, -enddate end time, -loglevel log level, -l transfer speed |
| dumpdata | back up cluster data | cluster name, -h target machine ip, -pw target machine password, -p target machine port, -path backup directory, -startdate start time, -enddate end time, -l transfer speed |
| upgrade | upgrade the lib package | cluster name |
| init | initialize the cluster configuration when an existing cluster adopts the cluster deployment tool | cluster name |
| status | view process status | cluster name |

Detailed command execution process

The following commands are executed using default_cluster.yaml as an example; users can substitute their own cluster files.

Check cluster deployment environment commands

iotd cluster check default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers and datanode_servers

  • Verify that the target node is able to log in via SSH

  • Verify whether the JDK version on the corresponding node meets IoTDB jdk1.8 and above, and whether the server is installed with unzip, lsof, and netstat.

  • If the prompt Info:example check successfully! appears, the server already meets the installation requirements.
    If Error:example check fail! is output, some conditions do not meet the requirements; check the Error log output above (for example: Error:Server (ip:172.20.31.76) iotdb port(10713) is listening) and fix accordingly.
    If the jdk check fails, you can configure a jdk 1.8+ in the yaml file yourself for deployment, without affecting subsequent use.
    If the lsof, netstat, or unzip check fails, you need to install the missing utilities on the server yourself.

Deploy cluster command

iotd cluster deploy default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers and datanode_servers

  • Upload the IoTDB compressed package and jdk compressed package according to the node information in confignode_servers and datanode_servers (if the jdk_tar_dir and jdk_deploy_dir values are configured in yaml)

  • Generate and upload iotdb-common.properties, iotdb-confignode.properties, iotdb-datanode.properties according to the yaml file node configuration information

iotd cluster deploy default_cluster -op force

Note: this command forces the deployment; specifically, it deletes the existing deployment directory and redeploys.

Deploy a single module

# Deploy grafana module
iotd cluster deploy default_cluster -N grafana
# Deploy the prometheus module
iotd cluster deploy default_cluster -N prometheus
# Deploy the iotdb module
iotd cluster deploy default_cluster -N iotdb

Start cluster command

iotd cluster start default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers and datanode_servers

  • Start the confignodes sequentially in the order given in confignode_servers in the yaml configuration file, checking by process id whether each confignode is normal; the first confignode is the Seed-ConfigNode

  • Start the datanode in sequence according to the order in datanode_servers in the yaml configuration file and check whether the datanode is normal according to the process id.

  • After confirming each process exists by process id, check through the cli whether every service in the cluster list is normal. If the cli connection fails, retry every 10s, up to 5 times

Start a single node command

#Start according to the IoTDB node name
iotd cluster start default_cluster -N datanode_1
#Start according to IoTDB cluster ip+port, where port corresponds to cn_internal_port of confignode and rpc_port of datanode.
iotd cluster start default_cluster -N 192.168.1.5:6667
#Start grafana
iotd cluster start default_cluster -N grafana
#Start prometheus
iotd cluster start default_cluster -N prometheus
  • Find the yaml file in the default location based on cluster-name

  • Find the node location information based on the provided node name or ip:port. If the node being started is a data_node, the ip uses dn_rpc_address and the port uses dn_rpc_port from datanode_servers in the yaml file.
    If the node being started is a config_node, the ip uses cn_internal_address and the port uses cn_internal_port from confignode_servers in the yaml file

  • start the node

Note: since the cluster deployment tool only calls the start-confignode.sh and start-datanode.sh scripts in the IoTDB cluster,
a reported failure may mean that the cluster has not started normally. It is recommended to check the current cluster status with the status command (iotd cluster status xxx)

View IoTDB cluster status command

iotd cluster show default_cluster
#View IoTDB cluster details
iotd cluster show default_cluster details
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers and datanode_servers

  • Execute show cluster details through the cli on the datanodes in turn; if one node executes successfully, the cli is not run on subsequent nodes and the result is returned directly

Stop cluster command

iotd cluster stop default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers and datanode_servers

  • Stop the datanode nodes in order according to the datanode node information configured in datanode_servers

  • Based on the confignode node information in confignode_servers, stop the confignode nodes in sequence according to the configuration

Force stop cluster command

iotd cluster stop default_cluster -op force

This directly executes the kill -9 pid command to forcibly stop the cluster

Stop single node command

#Stop by IoTDB node name
iotd cluster stop default_cluster -N datanode_1
#Stop according to IoTDB cluster ip:port (the unique node is identified by ip + dn_rpc_port for a datanode or ip + cn_internal_port for a confignode)
iotd cluster stop default_cluster -N 192.168.1.5:6667
#Stop grafana
iotd cluster stop default_cluster -N grafana
#Stop prometheus
iotd cluster stop default_cluster -N prometheus
  • Find the yaml file in the default location based on cluster-name

  • Find the corresponding node location information based on the provided node name or ip:port. If the node being stopped is a data_node, the ip uses dn_rpc_address and the port uses dn_rpc_port from datanode_servers in the yaml file.
    If the node being stopped is a config_node, the ip uses cn_internal_address and the port uses cn_internal_port from confignode_servers in the yaml file

  • stop the node

Note: since the cluster deployment tool only calls the stop-confignode.sh and stop-datanode.sh scripts in the IoTDB cluster, in some cases the IoTDB cluster may not be stopped.

Clean cluster data command

iotd cluster clean default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers and datanode_servers

  • Based on the information in confignode_servers and datanode_servers, check whether any services are still running;
    if any service is running, the cleanup command will not be executed

  • Delete the data directory in the IoTDB cluster and the cn_system_dir, cn_consensus_dir,
    dn_data_dirs, dn_consensus_dir, dn_system_dir, logs, and ext directories configured in the yaml file

Restart cluster command

iotd cluster restart default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers, datanode_servers, grafana and prometheus

  • Execute the above stop cluster command (stop), and then execute the start cluster command (start). For details, refer to the above start and stop commands.

Force restart cluster command

iotd cluster restart default_cluster -op force

This directly executes the kill -9 pid command to forcibly stop the cluster, and then starts the cluster

Restart a single node command

#Restart datanode_1 according to the IoTDB node name
iotd cluster restart default_cluster -N datanode_1
#Restart confignode_1 according to the IoTDB node name
iotd cluster restart default_cluster -N confignode_1
#Restart grafana
iotd cluster restart default_cluster -N grafana
#Restart prometheus
iotd cluster restart default_cluster -N prometheus

Cluster shrink command

#Scale down by node name
iotd cluster scalein default_cluster -N nodename
#Scale down according to ip:port (the unique node is identified by ip + dn_rpc_port for a datanode or ip + cn_internal_port for a confignode)
iotd cluster scalein default_cluster -N ip:port
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers and datanode_servers

  • Check whether the node to be removed is the only remaining confignode or datanode; if only one is left, the scale-in cannot be performed

  • Then obtain the information of the node to remove from the given ip:port or nodename, execute the scale-in command, and then destroy the node directory. If the removed node is a data_node, the ip uses dn_rpc_address and the port uses dn_rpc_port from datanode_servers in the yaml file.
    If the removed node is a config_node, the ip uses cn_internal_address and the port uses cn_internal_port from confignode_servers in the yaml file

Tip: currently, only one node can be removed at a time

Cluster expansion command

iotd cluster scaleout default_cluster
  • Modify the config/xxx.yaml file to add a datanode node or confignode node

  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers and datanode_servers

  • Find the node to be added, upload the IoTDB compressed package and jdk package (if the jdk_tar_dir and jdk_deploy_dir values are configured in yaml), and decompress them

  • Generate and upload iotdb-common.properties, iotdb-confignode.properties or iotdb-datanode.properties according to the yaml file node configuration information

  • Execute the command to start the node and verify whether the node is started successfully

Tip: currently, only one node can be added at a time

Destroy cluster command

iotd cluster destroy default_cluster
  • Find the yaml file in the default location according to cluster-name

  • Check whether any node is still running based on the node information in confignode_servers, datanode_servers, grafana, and prometheus;
    if any node is running, the destroy command will not be executed

  • Delete the data directory in the IoTDB cluster, the cn_system_dir, cn_consensus_dir,
    dn_data_dirs, dn_consensus_dir, dn_system_dir, logs, and ext directories configured in the yaml file, the IoTDB deployment directory,
    the grafana deployment directory, and the prometheus deployment directory

Destroy a single module

# Destroy grafana module
iotd cluster destroy default_cluster -N grafana
# Destroy prometheus module
iotd cluster destroy default_cluster -N prometheus
# Destroy iotdb module
iotd cluster destroy default_cluster -N iotdb

Distribute cluster configuration commands

iotd cluster distribute default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers, datanode_servers, grafana and prometheus

  • Generate and upload iotdb-common.properties, iotdb-confignode.properties, iotdb-datanode.properties to the specified node according to the node configuration information of the yaml file

Hot load cluster configuration command

iotd cluster reload default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers and datanode_servers

  • Execute load configuration in the cli according to the node configuration information of the yaml file.

Cluster node log backup

iotd cluster dumplog default_cluster -N datanode_1,confignode_1  -startdate '2023-04-11' -enddate '2023-04-26' -h 192.168.9.48 -p 36000 -u root -pw root -path '/iotdb/logs' -logs '/root/data/db/iotdb/logs'
  • Find the yaml file in the default location based on cluster-name

  • This command first verifies the existence of datanode_1 and confignode_1 according to the yaml file, and then backs up the log data of the specified nodes datanode_1 and confignode_1, filtered by the configured start and end dates (startdate <= logtime <= enddate), to the specified server 192.168.9.48 on port 36000. The backup path is /iotdb/logs, and the IoTDB log storage path is /root/data/db/iotdb/logs (optional; if -logs xxx is not given, logs are backed up from the default /logs directory under the IoTDB installation path)

| parameter | description | required |
| --- | --- | --- |
| -h | ip of the backup target server | NO |
| -u | username of the backup target server | NO |
| -pw | password of the backup target machine | NO |
| -p | port of the backup target machine (default 22) | NO |
| -path | backup directory (default: current path) | NO |
| -loglevel | log level: all, info, error, or warn (default all) | NO |
| -l | speed limit (default 1024; range 0 to 104857601, unit Kbit/s) | NO |
| -N | cluster node names; multiple names are separated by commas | YES |
| -startdate | start time (inclusive, default 1970-01-01) | NO |
| -enddate | end time (inclusive) | NO |
| -logs | IoTDB log storage path, default {iotdb}/logs | NO |

Cluster data backup

iotd cluster dumpdata default_cluster -granularity partition  -startdate '2023-04-11' -enddate '2023-04-26' -h 192.168.9.48 -p 36000 -u root -pw root -path '/iotdb/datas'
  • This command obtains the leader node based on the yaml file, and then backs up the data, filtered by the start and end dates (startdate <= logtime <= enddate), to the /iotdb/datas directory on the 192.168.9.48 server
| parameter | description | required |
| --- | --- | --- |
| -h | ip of the backup target server | NO |
| -u | username of the backup target server | NO |
| -pw | password of the backup target machine | NO |
| -p | port of the backup target machine (default 22) | NO |
| -path | backup directory (default: current path) | NO |
| -granularity | partition | YES |
| -l | speed limit (default 1024; range 0 to 104857601, unit Kbit/s) | NO |
| -startdate | start time (inclusive, default 1970-01-01) | YES |
| -enddate | end time (inclusive) | YES |

Cluster upgrade

iotd cluster upgrade default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers and datanode_servers

  • Upload lib package

Note that after performing the upgrade, please restart IoTDB for it to take effect.

Cluster initialization

iotd cluster init default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers, datanode_servers, grafana and prometheus
  • Initialize cluster configuration

View cluster process status

iotd cluster status default_cluster
  • Find the yaml file in the default location according to cluster-name and obtain the configuration information of confignode_servers, datanode_servers, grafana and prometheus
  • Display the survival status of each node in the cluster

Introduction to Cluster Deployment Tool Samples

In the cluster deployment tool installation directory config/example, there are three yaml examples. If necessary, you can copy them to config and modify them.

| name | description |
| --- | --- |
| default_1c1d.yaml | configuration example with 1 confignode and 1 datanode |
| default_3c3d.yaml | configuration example with 3 confignodes and 3 datanodes |
| default_3c3d_grafa_prome | configuration example with 3 confignodes and 3 datanodes plus Grafana and Prometheus |

Manual Deployment

Prerequisites

  1. JDK >= 1.8.
  2. Max open files 65535.
  3. Disable swap memory.
  4. Ensure that the data/confignode directory has been cleared when starting a ConfigNode for the first time,
    and that the data/datanode directory has been cleared when starting a DataNode for the first time.
  5. Turn off the firewall of the server if the entire cluster is in a trusted environment.
  6. By default, an IoTDB cluster uses ports 10710 and 10720 for ConfigNodes and
    6667, 10730, 10740, 10750, and 10760 for DataNodes.
    Please make sure those ports are not occupied, or modify the ports in the configuration files (see the check below).
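
You can quickly confirm these prerequisites on each server with standard tools (a minimal sketch; it assumes lsof is installed):

# No output from lsof means the port is free
for p in 10710 10720 6667 10730 10740 10750 10760; do
  lsof -i :"$p" && echo "port $p is occupied"
done
# Check the open-files limit and whether swap is off (sudo swapoff -a disables it)
ulimit -n
free -h | grep -i swap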

Get the Installation Package

You can either download the binary release files (see Chap 3.1) or compile with source code (see Chap 3.2).

Download the binary distribution

  1. Open our website's Download Page.
  2. Download the binary distribution.
  3. Decompress to get the apache-iotdb-1.0.0-all-bin directory.

Compile with source code

Download the source code

Git

git clone https://github.com/apache/iotdb.git
git checkout v1.0.0

Website

  1. Open our website's Download Page.
  2. Download the source code.
  3. Decompress to get the apache-iotdb-1.0.0 directory.

Compile source code

Under the source root folder:

mvn clean package -pl distribution -am -DskipTests

Then you will get the binary distribution under
distribution/target/apache-iotdb-1.0.0-SNAPSHOT-all-bin/apache-iotdb-1.0.0-SNAPSHOT-all-bin.

Binary Distribution Content

| Folder | Description |
| --- | --- |
| conf | Configuration files folder; contains configuration files of ConfigNode, DataNode, JMX, and logback |
| data | Data files folder; contains data files of ConfigNode and DataNode |
| lib | Jar files folder |
| licenses | License files folder |
| logs | Log files folder; contains log files of ConfigNode and DataNode |
| sbin | Shell files folder; contains the start/stop/remove shells of ConfigNode and DataNode, and the cli shell |
| tools | System tools |

Cluster Installation and Configuration

Cluster Installation

apache-iotdb-1.0.0-SNAPSHOT-all-bin contains both the ConfigNode and the DataNode.
Please deploy the files to all servers of your target cluster.
A best practice is deploying the files into the same directory in all servers.

If you want to try the cluster mode on one server, please read
Cluster Quick Start.

Cluster Configuration

We need to modify the configurations on each server.
Therefore, login each server and switch the working directory to apache-iotdb-1.0.0-SNAPSHOT-all-bin.
The configuration files are stored in the ./conf directory.

For all ConfigNode servers, we need to modify the common configuration (see Chap 5.2.1)
and ConfigNode configuration (see Chap 5.2.2).

For all DataNode servers, we need to modify the common configuration (see Chap 5.2.1)
and DataNode configuration (see Chap 5.2.3).

Common configuration

Open the common configuration file ./conf/iotdb-common.properties,
and set the following parameters based on the
Deployment Recommendation:

| Configuration | Description | Default |
| --- | --- | --- |
| cluster_name | Cluster name for the Node to join | defaultCluster |
| config_node_consensus_protocol_class | Consensus protocol of ConfigNode | org.apache.iotdb.consensus.ratis.RatisConsensus |
| schema_replication_factor | Schema replication factor, no more than the DataNode number | 1 |
| schema_region_consensus_protocol_class | Consensus protocol of schema replicas | org.apache.iotdb.consensus.ratis.RatisConsensus |
| data_replication_factor | Data replication factor, no more than the DataNode number | 1 |
| data_region_consensus_protocol_class | Consensus protocol of data replicas; note that RatisConsensus currently does not support multiple data directories | org.apache.iotdb.consensus.iot.IoTConsensus |

Notice: The preceding configuration parameters cannot be changed after the cluster is started. Ensure that the common configurations of all Nodes are the same. Otherwise, the Nodes cannot be started.
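
For example, a cluster with 3 schema replicas and 2 data replicas could use the following entries in ./conf/iotdb-common.properties (illustrative values built from the table above; keep them identical on every Node):

cluster_name=defaultCluster
config_node_consensus_protocol_class=org.apache.iotdb.consensus.ratis.RatisConsensus
schema_replication_factor=3
schema_region_consensus_protocol_class=org.apache.iotdb.consensus.ratis.RatisConsensus
data_replication_factor=2
data_region_consensus_protocol_class=org.apache.iotdb.consensus.iot.IoTConsensus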

ConfigNode configuration

Open the ConfigNode configuration file ./conf/iotdb-confignode.properties,
and set the following parameters based on the IP address and available port of the server or VM:

| Configuration | Description | Default | Usage |
| --- | --- | --- | --- |
| cn_internal_address | Internal rpc service address of ConfigNode | 127.0.0.1 | Set to the IPV4 address or domain name of the server |
| cn_internal_port | Internal rpc service port of ConfigNode | 10710 | Set to any unoccupied port |
| cn_consensus_port | ConfigNode replication consensus protocol communication port | 10720 | Set to any unoccupied port |
| cn_target_config_node_list | ConfigNode address to which the node connects when registering to the cluster; note that only one ConfigNode can be configured | 127.0.0.1:10710 | For the Seed-ConfigNode, set to its own cn_internal_address:cn_internal_port; for other ConfigNodes, set to a running ConfigNode's cn_internal_address:cn_internal_port |

Notice: The preceding configuration parameters cannot be changed after the node is started. Ensure that all ports are not occupied. Otherwise, the Node cannot be started.
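
As a concrete illustration, the Seed-ConfigNode from the Verify Cluster example later in this guide (192.168.1.10 with the default ports) would set the following in ./conf/iotdb-confignode.properties:

cn_internal_address=192.168.1.10
cn_internal_port=10710
cn_consensus_port=10720
# The Seed-ConfigNode points at itself; other ConfigNodes point at a running one
cn_target_config_node_list=192.168.1.10:10710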

DataNode configuration

Open the DataNode configuration file ./conf/iotdb-datanode.properties,
and set the following parameters based on the IP address and available port of the server or VM:

| Configuration | Description | Default | Usage |
| --- | --- | --- | --- |
| dn_rpc_address | Client RPC service address | 127.0.0.1 | Set to the IPV4 address or domain name of the server |
| dn_rpc_port | Client RPC service port | 6667 | Set to any unoccupied port |
| dn_internal_address | Control-flow address of DataNode inside the cluster | 127.0.0.1 | Set to the IPV4 address or domain name of the server |
| dn_internal_port | Control-flow port of DataNode inside the cluster | 10730 | Set to any unoccupied port |
| dn_mpp_data_exchange_port | Data-flow port of DataNode inside the cluster | 10740 | Set to any unoccupied port |
| dn_data_region_consensus_port | Data replicas communication port for consensus | 10750 | Set to any unoccupied port |
| dn_schema_region_consensus_port | Schema replicas communication port for consensus | 10760 | Set to any unoccupied port |
| dn_target_config_node_list | Running ConfigNodes of the cluster | 127.0.0.1:10710 | Set to any running ConfigNode's cn_internal_address:cn_internal_port; multiple values can be set, separated by commas (",") |

Notice: The preceding configuration parameters cannot be changed after the node is started. Ensure that all ports are not occupied. Otherwise, the Node cannot be started.
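
Continuing the same example, the DataNode at 192.168.1.20 with the default ports would set the following in ./conf/iotdb-datanode.properties:

dn_rpc_address=192.168.1.20
dn_rpc_port=6667
dn_internal_address=192.168.1.20
dn_internal_port=10730
dn_mpp_data_exchange_port=10740
dn_data_region_consensus_port=10750
dn_schema_region_consensus_port=10760
# List one or more running ConfigNodes, separated by commas
dn_target_config_node_list=192.168.1.10:10710,192.168.1.11:10710,192.168.1.12:10710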

Cluster Operation

Starting the cluster

This section describes how to start a cluster that includes several ConfigNodes and DataNodes.
The cluster can provide services only after starting at least one ConfigNode
and no fewer DataNodes than max{schema_replication_factor, data_replication_factor}.

The whole process consists of three steps:

  • Start the Seed-ConfigNode
  • Add ConfigNode (Optional)
  • Add DataNode

Start the Seed-ConfigNode

The first Node started in the cluster must be a ConfigNode, and it must follow the tutorial in this section.

The first ConfigNode to start is the Seed-ConfigNode, which marks the creation of the new cluster.
Before starting the Seed-ConfigNode, please open the common configuration file ./conf/iotdb-common.properties and check the following parameters:

| Configuration | Check |
| --- | --- |
| cluster_name | Is set to the expected name |
| config_node_consensus_protocol_class | Is set to the expected consensus protocol |
| schema_replication_factor | Is set to the expected schema replication count |
| schema_region_consensus_protocol_class | Is set to the expected consensus protocol |
| data_replication_factor | Is set to the expected data replication count |
| data_region_consensus_protocol_class | Is set to the expected consensus protocol |

Notice: please set these parameters carefully based on the Deployment Recommendation.
These parameters are not modifiable after the Node's first startup.

Then open its configuration file ./conf/iotdb-confignode.properties and check the following parameters:

| Configuration | Check |
| --- | --- |
| cn_internal_address | Is set to the IPV4 address or domain name of the server |
| cn_internal_port | The port isn't occupied |
| cn_consensus_port | The port isn't occupied |
| cn_target_config_node_list | Is set to its own internal communication address, i.e. cn_internal_address:cn_internal_port |

After checking, you can run the startup script on the server:

# Linux foreground
bash ./sbin/start-confignode.sh

# Linux background
nohup bash ./sbin/start-confignode.sh >/dev/null 2>&1 &

# Windows
.\sbin\start-confignode.bat

For more details about other configuration parameters of ConfigNode, see the
ConfigNode Configurations.

Add more ConfigNodes (Optional)

Any ConfigNode that is not the first one started must follow the tutorial in this section.

You can add more ConfigNodes to the cluster to ensure high availability of ConfigNodes.
A common configuration is to add two extra ConfigNodes so that the cluster has three ConfigNodes.

Ensure that all configuration parameters in ./conf/iotdb-common.properties are the same as those on the Seed-ConfigNode;
otherwise, the Node may fail to start or generate runtime errors.
Therefore, please check the following parameters in the common configuration file:

| Configuration | Check |
| --- | --- |
| cluster_name | Is consistent with the Seed-ConfigNode |
| config_node_consensus_protocol_class | Is consistent with the Seed-ConfigNode |
| schema_replication_factor | Is consistent with the Seed-ConfigNode |
| schema_region_consensus_protocol_class | Is consistent with the Seed-ConfigNode |
| data_replication_factor | Is consistent with the Seed-ConfigNode |
| data_region_consensus_protocol_class | Is consistent with the Seed-ConfigNode |

Then, please open its configuration file ./conf/iotdb-confignode.properties and check the following parameters:

| Configuration | Check |
| --- | --- |
| cn_internal_address | Is set to the IPV4 address or domain name of the server |
| cn_internal_port | The port isn't occupied |
| cn_consensus_port | The port isn't occupied |
| cn_target_config_node_list | Is set to the internal communication address of another running ConfigNode; the internal communication address of the Seed-ConfigNode is recommended |

After checking, you can run the startup script on the server:

# Linux foreground
bash ./sbin/start-confignode.sh

# Linux background
nohup bash ./sbin/start-confignode.sh >/dev/null 2>&1 &

# Windows
.\sbin\start-confignode.bat

For more details about other configuration parameters of ConfigNode, see the
ConfigNode Configurations.

Start DataNode

Before adding DataNodes, ensure that at least one ConfigNode is running in the cluster.

You can add any number of DataNodes to the cluster.
Before adding a new DataNode, please open its common configuration file ./conf/iotdb-common.properties and check the following parameter:

| Configuration | Check |
| --- | --- |
| cluster_name | Is consistent with the Seed-ConfigNode |

Then open its configuration file ./conf/iotdb-datanode.properties and check the following parameters:

| Configuration | Check |
| --- | --- |
| dn_rpc_address | Is set to the IPV4 address or domain name of the server |
| dn_rpc_port | The port isn't occupied |
| dn_internal_address | Is set to the IPV4 address or domain name of the server |
| dn_internal_port | The port isn't occupied |
| dn_mpp_data_exchange_port | The port isn't occupied |
| dn_data_region_consensus_port | The port isn't occupied |
| dn_schema_region_consensus_port | The port isn't occupied |
| dn_target_config_node_list | Is set to the internal communication address of a running ConfigNode; the internal communication address of the Seed-ConfigNode is recommended |

After checking, you can run the startup script on the server:

# Linux foreground
bash ./sbin/start-datanode.sh

# Linux background
nohup bash ./sbin/start-datanode.sh >/dev/null 2>&1 &

# Windows
.\sbin\start-datanode.bat

For more details about other configuration parameters of DataNode, see the
DataNode Configurations.

Notice: the cluster can provide services only if the number of its DataNodes is no less than the number of replicas, i.e. max{schema_replication_factor, data_replication_factor}. For example, with schema_replication_factor=3 and data_replication_factor=2, at least 3 DataNodes are required.

Start Cli

If the cluster is in a local environment, you can directly run the Cli startup script in the ./sbin directory:

# Linux
./sbin/start-cli.sh

# Windows
.\sbin\start-cli.bat

If you want to use the Cli to connect to a cluster in a production environment,
please read the Cli manual.

Verify Cluster

Take a 3C3D cluster (3 ConfigNodes and 3 DataNodes) as an example.
Assume that the IP addresses of the 3 ConfigNodes are 192.168.1.10, 192.168.1.11, and 192.168.1.12, and that the default ports 10710 and 10720 are used.
Assume that the IP addresses of the 3 DataNodes are 192.168.1.20, 192.168.1.21, and 192.168.1.22, and that the default ports 6667, 10730, 10740, 10750, and 10760 are used.

After starting the cluster successfully according to chapter 6.1, you can run the show cluster details command on the Cli, and you will see the following results:

IoTDB> show cluster details
+------+----------+-------+---------------+------------+-------------------+------------+-------+-------+-------------------+-----------------+
|NodeID|  NodeType| Status|InternalAddress|InternalPort|ConfigConsensusPort|  RpcAddress|RpcPort|MppPort|SchemaConsensusPort|DataConsensusPort|
+------+----------+-------+---------------+------------+-------------------+------------+-------+-------+-------------------+-----------------+
|     0|ConfigNode|Running|   192.168.1.10|       10710|              10720|            |       |       |                   |                 |
|     2|ConfigNode|Running|   192.168.1.11|       10710|              10720|            |       |       |                   |                 |
|     3|ConfigNode|Running|   192.168.1.12|       10710|              10720|            |       |       |                   |                 |
|     1|  DataNode|Running|   192.168.1.20|       10730|                   |192.168.1.20|   6667|  10740|              10750|            10760|
|     4|  DataNode|Running|   192.168.1.21|       10730|                   |192.168.1.21|   6667|  10740|              10750|            10760|
|     5|  DataNode|Running|   192.168.1.22|       10730|                   |192.168.1.22|   6667|  10740|              10750|            10760|
+------+----------+-------+---------------+------------+-------------------+------------+-------+-------+-------------------+-----------------+
Total line number = 6
It costs 0.012s

If the status of all Nodes is Running, the cluster deployment is successful.
Otherwise, read the run logs of the Node that fails to start and
check the corresponding configuration parameters.

Stop IoTDB

This section describes how to manually shut down the ConfigNode or DataNode process of the IoTDB.

Stop ConfigNode by script

Run the stop ConfigNode script:

# Linux
./sbin/stop-confignode.sh

# Windows
.\sbin\stop-confignode.bat

Stop DataNode by script

Run the stop DataNode script:

# Linux
./sbin/stop-datanode.sh

# Windows
.\sbin\stop-datanode.bat

Kill Node process

Get the process number of the Node:

jps

# or

ps aux | grep iotdb

Kill the process:

kill -9 <pid>

Notice: some ports require root access; in that case, use sudo.

Shrink the Cluster

This section describes how to remove ConfigNode or DataNode from the cluster.

Remove ConfigNode

Before removing a ConfigNode, ensure that there is at least one active ConfigNode in the cluster after the removal.
Run the remove-confignode script on an active ConfigNode:

# Linux
# Remove the ConfigNode with confignode_id
./sbin/remove-confignode.sh <confignode_id>

# Remove the ConfigNode with address:port
./sbin/remove-confignode.sh <cn_internal_address>:<cn_internal_port>


# Windows
# Remove the ConfigNode with confignode_id
.\sbin\remove-confignode.bat <confignode_id>

# Remove the ConfigNode with address:port
.\sbin\remove-confignode.bat <cn_internal_address>:<cn_internal_port>

Remove DataNode

Before removing a DataNode, ensure that after the removal the cluster still has no fewer DataNodes than the number of data/schema replicas.
Run the remove-datanode script on an active DataNode:

# Linux
# Remove the DataNode with datanode_id
./sbin/remove-datanode.sh <datanode_id>

# Remove the DataNode with rpc address:port
./sbin/remove-datanode.sh <dn_rpc_address>:<dn_rpc_port>


# Windows
# Remove the DataNode with datanode_id
.\sbin\remove-datanode.bat <datanode_id>

# Remove the DataNode with rpc address:port
.\sbin\remove-datanode.bat <dn_rpc_address>:<dn_rpc_port>

FAQ

See FAQ.
