Basic Environment

This guide builds a three-master, three-slave Redis cluster on three virtual machines, with one master and one slave on each machine.

The three virtual machines must be able to reach each other over the network.

Download Redis

Visit the official site: https://redis.io/

There is also a Chinese site, http://redis.cn/, translated from the English one, so it lags behind somewhat.

Click Download in the top-right corner.

Then click Download 7.0.10 on the left to download the archive.

[Screenshot 2023-03-25 17.13.23]

Once downloaded, upload the archive to the virtual machine: on Windows use Xftp, on macOS use FileZilla. Make sure you have write permission on the target directory.

安装Redis

注意,安装Redis必须先具备gcc到编译环境

输入命令:

1
gcc -v

如果出现对应版本,则表示有gcc编译环境。

如果没有就下载:

1
sudo apt install gcc

可以更新一下apt:sudo apt update之后再下载gcc。这里默认是下载的gcc-11

Since the archive has already been uploaded to the VM, go to the directory containing it and extract it.

I put the archive under /usr/local/redis.

tar -xf redis-7.0.10.tar

If the file ends in .tar.gz, use instead:

tar -zxf redis-7.0.10.tar.gz

Then move the extracted directory wherever you want it; the archive itself can be deleted.

Here is what I did:

sudo mv redis-7.0.10 ../   # move the extracted directory up to /usr/local
cd ..                      # back to /usr/local
sudo rm -rf redis          # delete the redis directory that held the archive
sudo mv redis-7.0.10 redis # rename the extracted directory to redis

That completes the setup on one machine.

Use a distribution script to sync the directory to the other two servers:

xsync redis

xsync is a command I wrote myself; if you want to build your own, see: cluster distribution script

If you have no distribution command and don't want to write one, simply repeat the steps above on the other two servers.

Configure Redis

Build Redis

Enter the redis directory:

cd /usr/local/redis

Build redis; it is best to switch to the root user for this step:

make && make install

If the make command is missing, install it:

sudo apt install make

If the output ends with "Hint: It's a good idea to run 'make test' ;)" and there are no errors, the build succeeded.

By default the binaries are installed into /usr/local/bin.

Check /usr/local/bin; if the redis-* binaries (redis-server, redis-cli, and so on) are there, the install succeeded.

Write the Redis configuration file

Next, edit the Redis configuration file redis.conf under /usr/local/redis:

To be safe, do the following first (in /usr/local/redis):

mkdir myredis
cp redis.conf myredis/redis7.conf

This creates a myredis directory and copies the original redis.conf into it as redis7.conf, so the original file is preserved as a backup.

Now open the copy for editing (in /usr/local/redis/myredis):

vim redis7.conf

Changes to make:

In vim's command mode you can search, e.g. /requirepass, to jump to the relevant line.

  1. Change the default daemonize no to daemonize yes

    By default the server runs in the foreground; this makes it start in the background.

  2. Change the default protected-mode yes to protected-mode no

    Protected mode is on by default; turn it off, otherwise external clients cannot connect.

  3. Comment out the default bind 127.0.0.1 line, or change it to the machine's own IP, otherwise remote connections are blocked. I chose to comment it out.

  4. Add a Redis password: the requirepass line is commented out by default, so either uncomment it or add a line directly, e.g. mine: requirepass 111111

Save and exit.
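The four edits above can also be scripted. A minimal Python sketch (the patterns and the password 111111 mirror this guide; the sample below is a toy config, not a full redis.conf, so treat this as an illustration rather than a drop-in tool):

```python
import re

def patch_conf(text: str, password: str) -> str:
    """Apply the four single-machine edits to a redis.conf string."""
    text = re.sub(r"^daemonize no", "daemonize yes", text, flags=re.M)
    text = re.sub(r"^protected-mode yes", "protected-mode no", text, flags=re.M)
    # comment out the bind line so remote clients can connect
    text = re.sub(r"^(bind 127\.0\.0\.1.*)", r"# \1", text, flags=re.M)
    return text + f"requirepass {password}\n"

# toy config standing in for redis.conf
sample = "daemonize no\nprotected-mode yes\nbind 127.0.0.1 -::1\n"
patched = patch_conf(sample, "111111")
print(patched)
```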

Start the Service

First configure an environment variable:

vim ~/.bashrc

Add:

# redis
export REDIS_HOME=/usr/local/redis

Apply it:

source ~/.bashrc

Then, from /usr/local/redis:

redis-server myredis/redis7.conf

Check it:

ps -ef|grep redis|grep -v grep

If you see something like:

hadoop1+   45203       1  0 10:03 ?        00:00:01 redis-server *:6379

the server is running; the default port is 6379.

Connect

redis-cli -a 111111 -p 6379

Notes:

-a is followed by the password.

-p is followed by the server port; the default is 6379, so it can actually be omitted.

This warning is normal:

Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.

Try a few commands:
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> get k1
"v1"
127.0.0.1:6379>

That completes the single-machine installation.

A few notes: if you did not set requirepass in the configuration file, the -a option is unnecessary.

For a standalone instance started on the default port 6379, the port can be omitted.

If a password is set, plain redis-cli still opens a session, but every command then fails:
hadoop103@hadoop103:/usr/local/redis/myredis$ redis-cli
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379>

At that point, run auth 111111. The full exchange:
hadoop103@hadoop103:/usr/local/redis/myredis$ redis-cli
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth 111111
OK
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>

To disconnect, type quit or exit.

Stop the Service

Single-instance shutdown

In redis-cli, i.e. inside the client, type shutdown:
127.0.0.1:6379> shutdown
not connected> exit
hadoop103@hadoop103:/usr/local/redis/myredis$ ps -ef|grep redis|grep -v grep
hadoop103@hadoop103:/usr/local/redis/myredis$

If you are not inside a Redis client, run: redis-cli -a 111111 shutdown

Multi-instance shutdown

To shut down a specific instance, say the one on port 6379:

redis-cli -a 111111 -p 6379 shutdown

The principle is the same: target the instance by its port.

Uninstall Redis

If you want to switch versions, uninstall the existing Redis first.

  1. Stop the service (see the previous section).

  2. Remove the Redis binaries from /usr/local/bin:

    rm -rf /usr/local/bin/redis-*

Build the Cluster

Create a cluster directory

mkdir /usr/local/redis/myredis/cluster

Then enter it:

cd /usr/local/redis/myredis/cluster

Do this on all three virtual machines.

Create six independent Redis instances

Port allocation: a standalone Redis defaults to port 6379, so allocate cluster ports starting from 6381:

hadoop103: 6381, 6382

hadoop104: 6383, 6384

hadoop105: 6385, 6386

hadoop103

On hadoop103, write the cluster config files redisCluster6381.conf and redisCluster6382.conf:

vim redisCluster6381.conf

Contents:
bind 0.0.0.0
daemonize yes
protected-mode no
port 6381
logfile "/usr/local/redis/myredis/cluster/cluster6381.log"
pidfile /usr/local/redis/myredis/cluster6381.pid
dir /usr/local/redis/myredis/cluster
dbfilename dump6381.rdb
appendonly yes
appendfilename "appendonly6381.aof"
requirepass 111111
masterauth 111111

cluster-enabled yes
cluster-config-file nodes-6381.conf
cluster-node-timeout 5000

vim redisCluster6382.conf

Contents:
bind 0.0.0.0
daemonize yes
protected-mode no
port 6382
logfile "/usr/local/redis/myredis/cluster/cluster6382.log"
pidfile /usr/local/redis/myredis/cluster6382.pid
dir /usr/local/redis/myredis/cluster
dbfilename dump6382.rdb
appendonly yes
appendfilename "appendonly6382.aof"
requirepass 111111
masterauth 111111

cluster-enabled yes
cluster-config-file nodes-6382.conf
cluster-node-timeout 5000

hadoop104

vim redisCluster6383.conf

Contents:
bind 0.0.0.0
daemonize yes
protected-mode no
port 6383
logfile "/usr/local/redis/myredis/cluster/cluster6383.log"
pidfile /usr/local/redis/myredis/cluster6383.pid
dir /usr/local/redis/myredis/cluster
dbfilename dump6383.rdb
appendonly yes
appendfilename "appendonly6383.aof"
requirepass 111111
masterauth 111111

cluster-enabled yes
cluster-config-file nodes-6383.conf
cluster-node-timeout 5000

vim redisCluster6384.conf

Contents:
bind 0.0.0.0
daemonize yes
protected-mode no
port 6384
logfile "/usr/local/redis/myredis/cluster/cluster6384.log"
pidfile /usr/local/redis/myredis/cluster6384.pid
dir /usr/local/redis/myredis/cluster
dbfilename dump6384.rdb
appendonly yes
appendfilename "appendonly6384.aof"
requirepass 111111
masterauth 111111

cluster-enabled yes
cluster-config-file nodes-6384.conf
cluster-node-timeout 5000

hadoop105

vim redisCluster6385.conf

Contents:
bind 0.0.0.0
daemonize yes
protected-mode no
port 6385
logfile "/usr/local/redis/myredis/cluster/cluster6385.log"
pidfile /usr/local/redis/myredis/cluster6385.pid
dir /usr/local/redis/myredis/cluster
dbfilename dump6385.rdb
appendonly yes
appendfilename "appendonly6385.aof"
requirepass 111111
masterauth 111111

cluster-enabled yes
cluster-config-file nodes-6385.conf
cluster-node-timeout 5000

vim redisCluster6386.conf

Contents:
bind 0.0.0.0
daemonize yes
protected-mode no
port 6386
logfile "/usr/local/redis/myredis/cluster/cluster6386.log"
pidfile /usr/local/redis/myredis/cluster6386.pid
dir /usr/local/redis/myredis/cluster
dbfilename dump6386.rdb
appendonly yes
appendfilename "appendonly6386.aof"
requirepass 111111
masterauth 111111

cluster-enabled yes
cluster-config-file nodes-6386.conf
cluster-node-timeout 5000
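The six files above are identical except for the port number, so they can be generated from a template instead of typed by hand. A sketch (paths and the password are the ones used in this guide; it generates all six files in the current directory, so either run it once and copy the two relevant files to each machine, or narrow the range per machine):

```python
# Generate the six per-instance cluster config files used above.
TEMPLATE = """\
bind 0.0.0.0
daemonize yes
protected-mode no
port {port}
logfile "/usr/local/redis/myredis/cluster/cluster{port}.log"
pidfile /usr/local/redis/myredis/cluster{port}.pid
dir /usr/local/redis/myredis/cluster
dbfilename dump{port}.rdb
appendonly yes
appendfilename "appendonly{port}.aof"
requirepass 111111
masterauth 111111

cluster-enabled yes
cluster-config-file nodes-{port}.conf
cluster-node-timeout 5000
"""

for port in range(6381, 6387):  # ports 6381..6386
    with open(f"redisCluster{port}.conf", "w") as f:
        f.write(TEMPLATE.format(port=port))
```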

Start the six Redis instances

Start them on the corresponding machines:

  • hadoop103

    redis-server myredis/cluster/redisCluster6381.conf
    redis-server myredis/cluster/redisCluster6382.conf
  • hadoop104

    redis-server myredis/cluster/redisCluster6383.conf
    redis-server myredis/cluster/redisCluster6384.conf
  • hadoop105

    redis-server myredis/cluster/redisCluster6385.conf
    redis-server myredis/cluster/redisCluster6386.conf

Verify:

ps -ef|grep redis|grep -v grep

Output (a trailing [cluster] means the instance is running in cluster mode):
hadoop103@hadoop103:/usr/local/redis$ ps -ef|grep redis|grep -v grep
hadoop1+ 47113 1 0 14:18 ? 00:00:01 redis-server 0.0.0.0:6381 [cluster]
hadoop1+ 47151 1 0 14:21 ? 00:00:00 redis-server 0.0.0.0:6382 [cluster]

Build the master/slave relationships

redis-cli -a 111111 --cluster create --cluster-replicas 1 hadoop103:6381 hadoop103:6382 hadoop104:6383 hadoop104:6384 hadoop105:6385 hadoop105:6386

--cluster create creates the cluster environment.

--cluster-replicas 1 assigns exactly one slave to each master.

All six redis-server instances must be running before you execute this command.

Result:
hadoop103@hadoop103:/usr/local/redis$ redis-cli -a 111111 --cluster create --cluster-replicas 1 hadoop103:6381 hadoop103:6382 hadoop104:6383 hadoop104:6384 hadoop105:6385 hadoop105:6386
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica hadoop104:6384 to hadoop103:6381
Adding replica hadoop105:6386 to hadoop104:6383
Adding replica hadoop103:6382 to hadoop105:6385
M: 322b0e3ca2879165cadff469c409e454ce7fe50d hadoop103:6381
slots:[0-5460] (5461 slots) master
S: f02366fcfa1a74f7b642c69787605c825506efe9 hadoop103:6382
replicates e211f8a39ac8a24f5b68e9bd0aae590c101073a2
M: 188a4323f92bcd33c52ac418ce820dcf192c7749 hadoop104:6383
slots:[5461-10922] (5462 slots) master
S: b05f518d17fc839d455752e296a1ae31547def93 hadoop104:6384
replicates 322b0e3ca2879165cadff469c409e454ce7fe50d
M: e211f8a39ac8a24f5b68e9bd0aae590c101073a2 hadoop105:6385
slots:[10923-16383] (5461 slots) master
S: 1944d02efaa72cb1e941e5c9b39398d158f62bea hadoop105:6386
replicates 188a4323f92bcd33c52ac418ce820dcf192c7749
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node hadoop103:6381)
M: 322b0e3ca2879165cadff469c409e454ce7fe50d hadoop103:6381
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: b05f518d17fc839d455752e296a1ae31547def93 192.168.70.104:6384
slots: (0 slots) slave
replicates 322b0e3ca2879165cadff469c409e454ce7fe50d
S: 1944d02efaa72cb1e941e5c9b39398d158f62bea 192.168.70.105:6386
slots: (0 slots) slave
replicates 188a4323f92bcd33c52ac418ce820dcf192c7749
M: 188a4323f92bcd33c52ac418ce820dcf192c7749 192.168.70.104:6383
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: f02366fcfa1a74f7b642c69787605c825506efe9 192.168.70.103:6382
slots: (0 slots) slave
replicates e211f8a39ac8a24f5b68e9bd0aae590c101073a2
M: e211f8a39ac8a24f5b68e9bd0aae590c101073a2 192.168.70.105:6385
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Analysis

Taking the creation output piece by piece:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica hadoop104:6384 to hadoop103:6381
Adding replica hadoop105:6386 to hadoop104:6383
Adding replica hadoop103:6382 to hadoop105:6385
M: 322b0e3ca2879165cadff469c409e454ce7fe50d hadoop103:6381
slots:[0-5460] (5461 slots) master
S: f02366fcfa1a74f7b642c69787605c825506efe9 hadoop103:6382
replicates e211f8a39ac8a24f5b68e9bd0aae590c101073a2
M: 188a4323f92bcd33c52ac418ce820dcf192c7749 hadoop104:6383
slots:[5461-10922] (5462 slots) master
S: b05f518d17fc839d455752e296a1ae31547def93 hadoop104:6384
replicates 322b0e3ca2879165cadff469c409e454ce7fe50d
M: e211f8a39ac8a24f5b68e9bd0aae590c101073a2 hadoop105:6385
slots:[10923-16383] (5461 slots) master
S: 1944d02efaa72cb1e941e5c9b39398d158f62bea hadoop105:6386
replicates 188a4323f92bcd33c52ac418ce820dcf192c7749
Can I set the above configuration? (type 'yes' to accept): yes

This part shows the allocation plan; the 16384 hash slots are split across the three masters:

Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
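The split is as even as possible: 16384 = 3 × 5461 + 1, so one master ends up with an extra slot. One way to reproduce the boundaries seen above (a sketch of the rounding behavior, not redis-cli's actual code):

```python
def split_slots(num_slots: int = 16384, masters: int = 3):
    """Split num_slots into contiguous, nearly equal ranges."""
    per = num_slots / masters          # 5461.33... for 3 masters
    ranges, cursor = [], 0.0
    for _ in range(masters):
        first = round(cursor)          # round the running float cursor
        last = round(cursor + per - 1)
        ranges.append((first, last))
        cursor += per
    return ranges

print(split_slots())  # [(0, 5460), (5461, 10922), (10923, 16383)]
```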

Then come the replica assignments:

Adding replica hadoop104:6384 to hadoop103:6381
Adding replica hadoop105:6386 to hadoop104:6383
Adding replica hadoop103:6382 to hadoop105:6385

meaning port 6381 on hadoop103 is a master whose slave is port 6384 on hadoop104, and so on.

M: 322b0e3ca2879165cadff469c409e454ce7fe50d hadoop103:6381
slots:[0-5460] (5461 slots) master
S: f02366fcfa1a74f7b642c69787605c825506efe9 hadoop103:6382
replicates e211f8a39ac8a24f5b68e9bd0aae590c101073a2
M: 188a4323f92bcd33c52ac418ce820dcf192c7749 hadoop104:6383
slots:[5461-10922] (5462 slots) master
S: b05f518d17fc839d455752e296a1ae31547def93 hadoop104:6384
replicates 322b0e3ca2879165cadff469c409e454ce7fe50d
M: e211f8a39ac8a24f5b68e9bd0aae590c101073a2 hadoop105:6385
slots:[10923-16383] (5461 slots) master
S: 1944d02efaa72cb1e941e5c9b39398d158f62bea hadoop105:6386
replicates 188a4323f92bcd33c52ac418ce820dcf192c7749

These lines also describe the master/slave pairing: M means master, S means slave.

You must type yes here to accept the proposed layout.

From this point on, the cluster runs with this configuration.

Check the cluster state

Checking from hadoop103 is enough:

redis-cli -a 111111 -p 6381

Log in to the 6381 instance; note that with several instances per machine, you must now specify the port.

Once in the client, run info replication:

hadoop103@hadoop103:/usr/local/redis$ redis-cli -a 111111 -p 6381
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6381> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.70.104,port=6384,state=online,offset=672,lag=1
master_failover_state:no-failover
master_replid:ba6e029edb67301b96744ef2c8767107fe6f5312
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:672
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:672
127.0.0.1:6381>

From the output:

role:master means this node is a master.

connected_slaves:1 means one slave is connected.

slave0:ip=192.168.70.104,port=6384,state=online,offset=672,lag=1 gives the slave's address and state, matching the assignment made during cluster creation:

Adding replica hadoop104:6384 to hadoop103:6381
Adding replica hadoop105:6386 to hadoop104:6383
Adding replica hadoop103:6382 to hadoop105:6385

This is exactly the expected result.


You can also run cluster nodes to inspect the topology:
127.0.0.1:6381> cluster nodes
b05f518d17fc839d455752e296a1ae31547def93 192.168.70.104:6384@16384 slave 322b0e3ca2879165cadff469c409e454ce7fe50d 0 1679755326000 1 connected
1944d02efaa72cb1e941e5c9b39398d158f62bea 192.168.70.105:6386@16386 slave 188a4323f92bcd33c52ac418ce820dcf192c7749 0 1679755326951 3 connected
188a4323f92bcd33c52ac418ce820dcf192c7749 192.168.70.104:6383@16383 master - 0 1679755326442 3 connected 5461-10922
f02366fcfa1a74f7b642c69787605c825506efe9 192.168.70.103:6382@16382 slave e211f8a39ac8a24f5b68e9bd0aae590c101073a2 0 1679755326000 5 connected
e211f8a39ac8a24f5b68e9bd0aae590c101073a2 192.168.70.105:6385@16385 master - 0 1679755326000 5 connected 10923-16383
322b0e3ca2879165cadff469c409e454ce7fe50d 192.168.70.103:6381@16381 myself,master - 0 1679755325000 1 connected 0-5460
127.0.0.1:6381>
322b0e3ca2879165cadff469c409e454ce7fe50d 192.168.70.103:6381@16381 myself,master - 0 1679755325000 1 connected 0-5460

This line is marked myself (the node the client is connected to); its role is master and it owns slots 0-5460.

b05f518d17fc839d455752e296a1ae31547def93 192.168.70.104:6384@16384 slave 322b0e3ca2879165cadff469c409e454ce7fe50d 0 1679755326000 1 connected

This line says: I am a slave with ID b05f518d17fc839d455752e296a1ae31547def93, and the master I replicate is 322b0e3ca2879165cadff469c409e454ce7fe50d.
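Each cluster nodes line has a fixed field layout: node ID, address, flags, master ID ('-' for masters), ping/pong timestamps, config epoch, link state, and then any slot ranges. A small parser sketch (the dict keys are my own naming):

```python
def parse_node_line(line: str) -> dict:
    """Parse one line of CLUSTER NODES output into a dict."""
    fields = line.split()
    return {
        "id": fields[0],
        "addr": fields[1],           # ip:port@cluster-bus-port
        "flags": fields[2].split(","),
        "master_id": fields[3],      # '-' if this node is a master
        "link_state": fields[7],     # 'connected' / 'disconnected'
        "slots": fields[8:],         # e.g. ['0-5460'] for a master
    }

line = ("322b0e3ca2879165cadff469c409e454ce7fe50d "
        "192.168.70.103:6381@16381 myself,master - 0 1679755325000 1 "
        "connected 0-5460")
node = parse_node_line(line)
print(node["flags"], node["slots"])
```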


You can also run cluster info:
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:1689
cluster_stats_messages_pong_sent:1836
cluster_stats_messages_sent:3525
cluster_stats_messages_ping_received:1831
cluster_stats_messages_pong_received:1689
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:3525
total_cluster_links_buffer_limit_exceeded:0
127.0.0.1:6381>

This shows the cluster's overall state: all 16384 slots assigned, the number of known nodes, the number of masters (cluster_size), and so on.

Use the Cluster

A few caveats when using the cluster.

The problem:

Connect the usual way:

redis-cli -a 111111 -p 6381

Then:
127.0.0.1:6381> set k1 v1
(error) MOVED 12706 192.168.70.105:6385
127.0.0.1:6381> set k2 v2
OK
127.0.0.1:6381>

set k1 failed: Redis computed that k1 belongs to slot 12706, but this instance (6381) only owns slots 0-5460, so the client is told to go to the node that owns slot 12706, namely hadoop105, to set k1.
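The slot number in the MOVED error is computed as CRC16(key) mod 16384, where Redis Cluster uses the CRC-16/XMODEM variant; if the key contains a hash tag {...}, only the text inside the first braces is hashed. A sketch:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to its Redis Cluster hash slot."""
    k = key.encode()
    start = k.find(b"{")
    if start != -1:
        end = k.find(b"}", start + 1)
        if end > start + 1:          # non-empty hash tag: hash only its content
            k = k[start + 1:end]
    return crc16(k) % 16384

print(key_slot("k1"))  # 12706, matching the MOVED error above
```

Keys sharing a hash tag, e.g. {user}a and {user}b, land in the same slot, which is how multi-key operations are made cluster-safe.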

To avoid this, add the -c option (cluster mode) when starting the client:

redis-cli -a 111111 -p 6381 -c

Then try again:
192.168.70.105:6385> flushall
OK
192.168.70.105:6385> keys *
(empty array)
192.168.70.105:6385> set k1 v1
OK
192.168.70.105:6385> set k2 v2
-> Redirected to slot [449] located at 192.168.70.103:6381
OK
192.168.70.103:6381> keys *
1) "k2"
192.168.70.103:6381> get k1
-> Redirected to slot [12706] located at 192.168.70.105:6385
"v1"
192.168.70.105:6385> get k2
-> Redirected to slot [449] located at 192.168.70.103:6381
"v2"
192.168.70.103:6381>

As you can see, the client now follows each redirection to the right node automatically; after every redirect, the IP in the prompt switches between 105 and 103.

So keys can be read and written from any node in the cluster.