Preparation
Configure the firewall and have ZooKeeper running beforehand; their setup is not covered here, so look it up if needed.
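If the firewall stays enabled, the ClickHouse and ZooKeeper ports must be reachable between nodes. A minimal sketch, assuming firewalld on CentOS 7 and the default ports (9000 native TCP, 8123 HTTP, 9009 interserver replication, 2181 ZooKeeper); adjust if yours differ:
# open the default ClickHouse and ZooKeeper ports on every node
firewall-cmd --permanent --add-port=9000/tcp --add-port=8123/tcp --add-port=9009/tcp --add-port=2181/tcp
firewall-cmd --reload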
First, raise the open-file and process limits; a reboot (or re-login) is required for the change to take effect:
[root@node02 ~]# vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
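After the reboot (or logging back in), you can confirm the new limits are active for your session:
# verify the raised limits took effect
ulimit -n   # open files, expect 65536
ulimit -u   # processes, expect 131072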
Install dependencies; these are needed when installing offline from rpm packages:
[root@node02 ~]# yum install -y libtool
[root@node02 ~]# yum install -y *unixODBC*
Note: I am using the root user here; in production you should not run as root.
Installing with yum
# install the server and the client via yum
yum install -y clickhouse-server clickhouse-client
# check that the installation completed
yum list installed 'clickhouse*'
[root@node02 ~]# yum list installed 'clickhouse*'
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* epel: d2lzkl7pfhq30w.cloudfront.net
* extras: mirrors.cn99.com
* updates: mirrors.163.com
Installed Packages
clickhouse-client.x86_64 20.8.3.18-1.el7 @Altinity_clickhouse
clickhouse-common-static.x86_64 20.8.3.18-1.el7 @Altinity_clickhouse
clickhouse-server.x86_64 20.8.3.18-1.el7 @Altinity_clickhouse
clickhouse-server-common.x86_64 20.8.3.18-1.el7 @Altinity_clickhouse
# start, stop, or inspect clickhouse via the service command
service clickhouse-server start | stop | status | restart | ...
[root@node02 ~]# service clickhouse-server start
Start clickhouse-server service: Path to data directory in /etc/clickhouse-server/config.xml: /var/lib/clickhouse/
DONE
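If the server should also survive reboots, enable it at boot time. On CentOS 7 one of the following typically works, depending on whether the package installed a systemd unit or a SysV init script (an assumption, not verified against every package version):
# enable start-on-boot
systemctl enable clickhouse-server
# or, for a SysV init script
chkconfig clickhouse-server on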
# enter the client
clickhouse-client
[root@node02 ~]# clickhouse-client
ClickHouse client version 20.8.3.18.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.8.3 revision 54438.
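As a quick sanity check, the client can also run a one-off query from the shell without entering the interactive prompt:
# non-interactive smoke test
clickhouse-client --query "SELECT version()"   # should print something like 20.8.3.18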
That completes the yum-based install; of course this is only a single node, and cluster setup comes next.
Cluster setup
Cluster setup is much like the single-node install; it mostly comes down to writing one config file. First install ClickHouse on the other nodes as above, then follow the steps below.
# 1. Edit the config.xml configuration file
vim /etc/clickhouse-server/config.xml
[root@node02 ~]# vim /etc/clickhouse-server/config.xml
# in vim, type /<listen_host> and press Enter to locate this setting; it allows remote access from both IPv4 and IPv6 hosts
<listen_host>::</listen_host>
<!-- Same for hosts with disabled ipv6: -->
<!-- <listen_host>0.0.0.0</listen_host> -->
# simply uncomment that line; note that every node needs this change (a quick verification follows)
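Once the server has been restarted with the new config (done below, after the cluster file is in place), a minimal check that it is really listening on all interfaces:
# 9000 (native) and 8123 (HTTP) should be bound to *:PORT or [::]:PORT, not 127.0.0.1
ss -lntp | grep clickhouse
# or simply connect from another node
clickhouse-client -h node02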
# then create a metrika.xml file under /etc
[root@node02 etc]# vim metrika.xml
Contents of metrika.xml: my setup has four nodes, so the file looks like this; a later article will cover some ClickHouse tuning.
<yandex>
    <!-- Cluster definition: 4 shards, 1 replica each -->
    <clickhouse_remote_servers>
        <perftest_4shards_1replicas>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node01</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node02</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node03</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>node04</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_4shards_1replicas>
    </clickhouse_remote_servers>
    <!-- ZooKeeper ensemble -->
    <zookeeper-servers>
        <node index="1">
            <host>node01</host>
            <port>2181</port>
        </node>
        <node index="2">
            <host>node02</host>
            <port>2181</port>
        </node>
        <node index="3">
            <host>node03</host>
            <port>2181</port>
        </node>
        <node index="4">
            <host>node04</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
    <!-- Macros: change the replica value on each node to that node's own hostname -->
    <macros>
        <replica>node02</replica>
    </macros>
    <!-- Allow connections from any address -->
    <networks>
        <ip>::/0</ip>
    </networks>
    <!-- Compression: parts larger than 10 GB use lz4 -->
    <clickhouse_compression>
        <case>
            <min_part_size>10000000000</min_part_size>
            <min_part_size_ratio>0.01</min_part_size_ratio>
            <method>lz4</method>
        </case>
    </clickhouse_compression>
</yandex>
Then distribute metrika.xml to the other nodes (a sketch follows), stop the previously started clickhouse-server, and start the service on every node. That's right, nothing more is needed: the server picks the file up automatically, since /etc/metrika.xml is the default include path for these substitutions.
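A minimal sketch of that distribution step, assuming passwordless ssh between the nodes and that each node's <replica> macro should equal its own hostname (node names as used above):
# copy metrika.xml to the other nodes and point each macro at that node's hostname
for h in node01 node03 node04; do
    scp /etc/metrika.xml $h:/etc/metrika.xml
    ssh $h "sed -i 's|<replica>node02</replica>|<replica>$h</replica>|' /etc/metrika.xml"
done
# then restart the server on every node
for h in node01 node02 node03 node04; do ssh $h "service clickhouse-server restart"; done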
# once every node has been started, check that the server is running
[root@node02 etc]# ps -aux | grep clickhouse-server
clickho+ 244604 0.4 1.1 1971184 368656 ? Ssl 18:57 0:09 clickhouse-server --daemon --pid-file=/var/run/clickhouse-server/clickhouse-server.pid --config-file=/etc/clickhouse-server/config.xml
root 248725 0.0 0.0 112824 980 pts/0 S+ 19:36 0:00 grep --color=auto clickhouse-server
# if startup fails, check the logs; their location:
[root@node02 clickhouse-server]# pwd
/var/log/clickhouse-server
[root@node02 clickhouse-server]# ll
total 864
-rw-r-----. 1 clickhouse clickhouse 154597 Nov 25 18:17 clickhouse-server.err.log
-rw-r-----. 1 clickhouse clickhouse 720270 Nov 25 19:37 clickhouse-server.log
-rw-r-----. 1 clickhouse clickhouse 5395 Nov 25 18:57 stderr.log
-rw-r-----. 1 clickhouse clickhouse 0 Nov 25 18:14 stdout.log
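If the server failed to start, the error log usually pinpoints the cause (malformed XML in metrika.xml is a common culprit):
# show the most recent errors
tail -n 50 /var/log/clickhouse-server/clickhouse-server.err.log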
# once startup completes, entering the client confirms success
[root@node02 clickhouse-server]# clickhouse-client
ClickHouse client version 20.8.3.18.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.8.3 revision 54438.
node02 :)
# inspect the cluster information; you should see the following
# perftest_4shards_1replicas is the cluster we defined in metrika.xml
node02 :) select * from system.clusters;
SELECT *
FROM system.clusters
┌─cluster───────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─estimated_recovery_time─┐
│ perftest_4shards_1replicas │ 1 │ 1 │ 1 │ node01 │ 192.168.2.217 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_4shards_1replicas │ 2 │ 1 │ 1 │ node02 │ 192.168.2.202 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ perftest_4shards_1replicas │ 3 │ 1 │ 1 │ node03 │ 192.168.2.203 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ perftest_4shards_1replicas │ 4 │ 1 │ 1 │ node04 │ 192.168.2.218 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards │ 1 │ 1 │ 1 │ 127.0.0.1 │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards │ 2 │ 1 │ 1 │ 127.0.0.2 │ 127.0.0.2 │ 9000 │ 0 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_cluster_two_shards_localhost │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_shard_localhost │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_shard_localhost_secure │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9440 │ 0 │ default │ │ 0 │ 0 │
│ test_unavailable_shard │ 1 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 9000 │ 1 │ default │ │ 0 │ 0 │
│ test_unavailable_shard │ 2 │ 1 │ 1 │ localhost │ 127.0.0.1 │ 1 │ 0 │ default │ │ 0 │ 0 │
└───────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────────────┘
At this point our clickhouse cluster is up. Further tuning will follow in later posts; as it stands, this setup is suitable only for testing. A quick way to exercise the cluster is sketched below.
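Here is a sketch of creating a local table on every shard plus a Distributed table over it, run from the client. The table names are made up for illustration; ON CLUSTER uses the perftest_4shards_1replicas name from metrika.xml and assumes the default distributed_ddl section in config.xml is still enabled:
-- create the same local table on every node of the cluster
CREATE TABLE test_local ON CLUSTER perftest_4shards_1replicas
(
    id UInt64,
    name String
)
ENGINE = MergeTree()
ORDER BY id;

-- a Distributed table that fans queries out over all four shards,
-- sharding inserts randomly
CREATE TABLE test_all ON CLUSTER perftest_4shards_1replicas
AS test_local
ENGINE = Distributed(perftest_4shards_1replicas, default, test_local, rand());

INSERT INTO test_all VALUES (1, 'a'), (2, 'b');
SELECT count() FROM test_all;   -- should return 2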