# Deploy a Druid Cluster
###### tags: `druid`
## Prerequisites
* The JVM itself consumes ~1 GB of RAM per Druid process
* Metadata Storage (MySQL)
* JDBC MySQL Driver jar list: [Index of /Code/JarDownload/mysql](http://www.java2s.com/Code/JarDownload/mysql/)
```bash
apt update && apt install mysql-server -y
mysql -uroot -p -e 'create database druid'
mysql -uroot -p -e "grant all privileges on druid.* to 'druid'@'192.168.2.%' identified by 'o0O_druid_O0o'"
mysql -uroot -p -e 'alter database druid character set utf8 collate utf8_general_ci;'
# modify bind-address
vim /etc/mysql/mysql.conf.d/mysqld.cnf
systemctl restart mysql
```
If you use MySQL as the Druid metadata storage, you **MUST** add `mysql-metadata-storage` to the `druid.extensions.loadList` in `conf/druid/_common/common.runtime.properties`.
[druid.io: overlord fails to start when MySQL is used for metadata storage - OSCHINA](https://my.oschina.net/u/2460844/blog/637334)
```properties
druid.extensions.loadList=["...", "mysql-metadata-storage"]
```
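The matching connector settings in `conf/druid/_common/common.runtime.properties` then look roughly like this, assuming the MySQL instance and the `druid` account created above live on the master node (192.168.2.100, the same host that runs ZooKeeper):
```properties
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://192.168.2.100:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=o0O_druid_O0o
```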
* Deep Storage (S3); a sample config is sketched below
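A minimal sketch of the S3 deep-storage settings in `conf/druid/_common/common.runtime.properties`; the bucket, prefix and credentials below are placeholders, and `druid-s3-extensions` must also be listed in `druid.extensions.loadList`:
```properties
druid.extensions.loadList=["...", "druid-s3-extensions", "mysql-metadata-storage"]
druid.storage.type=s3
druid.storage.bucket=your-druid-bucket
druid.storage.baseKey=druid/segments
druid.s3.accessKey=YOUR_ACCESS_KEY
druid.s3.secretKey=YOUR_SECRET_KEY
```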
## Master node, Data node & Query node
The cluster in this guide is split into a Master node (ZooKeeper, MySQL, coordinator, overlord), a Data node (historical, middleManager) and a Query node (broker, router).
## Configure `druid.host`
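Every service announces itself in ZooKeeper under `druid.host`, which defaults to the machine's canonical hostname; set it explicitly in each service's `runtime.properties` to an address the other nodes can actually reach. A minimal sketch for a service running on the master node (IP taken from this guide):
```properties
druid.host=192.168.2.100
```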
### All In One
```bash
cd zookeeper
bin/zkServer.sh start conf/zoo.cfg
cd druid
# run each service in its own terminal (or background it)
AWS_REGION=us-west-2 bin/run-druid coordinator quickstart/kettan-conf
AWS_REGION=us-west-2 bin/run-druid broker quickstart/kettan-conf
AWS_REGION=us-west-2 bin/run-druid router quickstart/kettan-conf
AWS_REGION=us-west-2 bin/run-druid historical quickstart/kettan-conf
AWS_REGION=us-west-2 bin/run-druid overlord quickstart/kettan-conf
AWS_REGION=us-west-2 bin/run-druid middleManager quickstart/kettan-conf
cd tranquility
bin/tranquility server -configFile pushstream_schema.json -Ddruid.extensions.loadList=[]
curl -XPOST -H'Content-Type: application/json' --data-binary @pushstream_data.json http://localhost:8200/v1/post/clickstream
```
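To confirm the services came up, you can hit each one's `/status` endpoint; the ports below are the Druid defaults, so adjust them if your conf overrides them:
```bash
curl http://localhost:8081/status   # coordinator
curl http://localhost:8082/status   # broker
curl http://localhost:8083/status   # historical
curl http://localhost:8090/status   # overlord
curl http://localhost:8091/status   # middleManager
curl http://localhost:8888/status   # router
```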
### Master node
```bash
apt update
apt install openjdk-8-jre-headless curl unzip mysql-server -y
curl https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz -o zookeeper-3.4.11.tar.gz
tar -xzf zookeeper-3.4.11.tar.gz
cp zookeeper-3.4.11/conf/zoo_sample.cfg zookeeper-3.4.11/conf/zoo.cfg
./zookeeper-3.4.11/bin/zkServer.sh start ./zookeeper-3.4.11/conf/zoo.cfg
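# download & unpack Druid 0.14.0-incubating if not already present
# (URL assumes the Apache archive layout for incubator releases; verify it for your version)
curl -L https://archive.apache.org/dist/incubator/druid/0.14.0-incubating/apache-druid-0.14.0-incubating-bin.tar.gz -o apache-druid-0.14.0-incubating-bin.tar.gz
tar -xzf apache-druid-0.14.0-incubating-bin.tar.gz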
cd apache-druid-0.14.0-incubating
# download mysql driver
curl https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar -O
mv mysql-connector-java-5.1.38.jar extensions/mysql-metadata-storage/
# coordinator
AWS_REGION=us-west-2 java `cat conf/druid/coordinator/jvm.config | xargs` -cp conf/druid/_common:conf/druid/coordinator:lib/* org.apache.druid.cli.Main server coordinator
# overlord
AWS_REGION=us-west-2 java `cat conf/druid/overlord/jvm.config | xargs` -cp conf/druid/_common:conf/druid/overlord:lib/* org.apache.druid.cli.Main server overlord
```
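On their first startup the coordinator and overlord create Druid's metadata tables in MySQL; a quick way to verify, reusing the root account from the prerequisites:
```bash
# expect tables such as druid_segments, druid_rules and druid_tasks
mysql -uroot -p druid -e 'show tables;'
```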
### Data node
* Point the node at ZooKeeper in `conf/druid/_common/common.runtime.properties`
```properties
# ...
druid.zk.service.host=192.168.2.100:2181
# ...
```
* Adjust the Historical config to fit the hardware: `conf/druid/historical/jvm.config`
> Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, druid.processing.numThreads, or druid.processing.numMergeBuffers: maxDirectMemory[4,294,967,296], memoryNeeded[5,368,709,120] = druid.processing.buffer.sizeBytes[536,870,912] * (druid.processing.numMergeBuffers[2] + druid.processing.numThreads[7] + 1)
```properties
-XX:MaxDirectMemorySize=4096M
-Ddruid.processing.buffer.sizeBytes=300000000
-Ddruid.processing.numMergeBuffers=2
-Ddruid.processing.numThreads=7
```
* Run Historical & MiddleManager
```bash
# historical
AWS_REGION=us-west-2 java `cat conf/druid/historical/jvm.config | xargs` -cp conf/druid/_common:conf/druid/historical:lib/* org.apache.druid.cli.Main server historical
# middlemanager
AWS_REGION=us-west-2 java `cat conf/druid/middleManager/jvm.config | xargs` -cp conf/druid/_common:conf/druid/middleManager:lib/* org.apache.druid.cli.Main server middleManager
```
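Once running, the Data node services should be visible in ZooKeeper (assuming the default `druid.zk.paths.base=/druid`): segment-serving nodes such as the historical announce under `/druid/announcements`, and middleManager workers under `/druid/indexer/announcements`. A quick check from the master node:
```bash
./zookeeper-3.4.11/bin/zkCli.sh -server 127.0.0.1:2181 ls /druid/announcements
./zookeeper-3.4.11/bin/zkCli.sh -server 127.0.0.1:2181 ls /druid/indexer/announcements
```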
## Query node
### Attention!
The Broker needs to connect to the **Master node** and the **MiddleManager**. Those nodes register their **hostnames** in ZooKeeper, so make sure the Broker node's `/etc/hosts` is configured so that it can resolve the other servers; an illustrative example follows.
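For example (the hostnames here are purely illustrative), the Broker node's `/etc/hosts` might look like:
```
192.168.2.100   druid-master   # zookeeper / mysql / coordinator / overlord
192.168.2.101   druid-data     # historical / middleManager
```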
* Point the node at ZooKeeper in `conf/druid/_common/common.runtime.properties`
```properties
# ...
druid.zk.service.host=192.168.2.100:2181
# ...
```
* Adjust the Broker config to fit the hardware: `conf/druid/broker/jvm.config`
> Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, druid.processing.numThreads, or druid.processing.numMergeBuffers: maxDirectMemory[4,294,967,296], memoryNeeded[5,368,709,120] = druid.processing.buffer.sizeBytes[536,870,912] * (druid.processing.numMergeBuffers[2] + druid.processing.numThreads[7] + 1)
```properties
-Xms4G
-Xmx4G
-XX:MaxDirectMemorySize=4096M
-Ddruid.processing.buffer.sizeBytes=300000000
-Ddruid.processing.numMergeBuffers=2
-Ddruid.processing.numThreads=7
```
* Run Broker
```bash
AWS_REGION=us-west-2 java `cat conf/druid/broker/jvm.config | xargs` -cp conf/druid/_common:conf/druid/broker:lib/* org.apache.druid.cli.Main server broker
```
* Run Router
```bash
AWS_REGION=us-west-2 java `cat conf/druid/router/jvm.config | xargs` -cp conf/druid/_common:conf/druid/router:lib/* org.apache.druid.cli.Main server router
```
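A quick sanity check once the Broker is up is to ask it which datasources it can see (default Broker port 8082); the Router should also serve the Druid web console on its default port 8888:
```bash
# returns an empty list until data has been ingested
curl http://localhost:8082/druid/v2/datasources
```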
## Misc
* Zookeeper listens on port **2181** by default
* numMergeBuffers defaultValue = max(2, druid.processing.numThreads / 4)
## Configurations that must be customized
### conf/druid/_common/common.runtime.properties
```properties
# extensions
druid.extensions.loadList
# zookeeper
druid.zk.service.host
# metadata storage
druid.metadata.storage.type
druid.metadata.storage.connector.connectURI
druid.metadata.storage.connector.user
druid.metadata.storage.connector.password
# deep storage
druid.storage.type
druid.storage.bucket
druid.storage.baseKey
druid.s3.accessKey
druid.s3.secretKey
```
```bash
# numMergeBuffers defaultValue = max(2, druid.processing.numThreads / 4)
MaxDirectMemorySize = druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1)
```
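Plugging in the numbers from the Historical/Broker error above: the default `536870912 * (2 + 7 + 1) = 5368709120` bytes (~5 GiB) exceeds `-XX:MaxDirectMemorySize=4096M`, while the reduced `300000000 * (2 + 7 + 1) = 3000000000` bytes (~2.8 GiB) fits.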
### conf/druid/coordinator/jvm.config
```properties
-XX:MaxDirectMemorySize=4096M
-Daws.region=us-west-2
```
### conf/druid/coordinator/runtime.properties
```properties
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=7
```
### conf/druid/middleManager/runtime.properties (AWS S3)
Add `-Daws.region=AWS_REGION` to `druid.indexer.runner.javaOpts` so that the indexing tasks (Peons) forked by the MiddleManager pick up the AWS region for S3:
```properties
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+ExitOnOutOfMemoryError -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Daws.region=us-west-2
```
## References
* [Druid | Clustering](http://druid.io/docs/latest/tutorials/cluster.html)