1. How Redis Cluster differs from common cluster architectures
Comparing Redis Cluster against a typical Elasticsearch-style cluster architecture highlights Redis Cluster's main advantage: because Redis is an in-memory database, Redis Cluster adopts a decentralized architecture with no central coordinating node. Compared with the traditional design that routes every request through a central node, this yields higher concurrency and throughput, and cluster performance cannot degrade because a central node is overloaded.
Redis Cluster node allocation: suppose we have three master nodes A, B, and C. They can be three ports on one machine or three separate servers. Distributing the 16384 hash slots across them, each node covers the following slot range:

node A covers 0-5460
node B covers 5461-10922
node C covers 10923-16383
When a value is stored, Redis Cluster applies its hash-slot algorithm: HASH_SLOT = CRC16(key) mod 16384. For example, if CRC16('key') mod 16384 = 6782, the key is stored on node B (which owns slots 5461-10922). Likewise, when a client connects to any of the nodes (A, B, or C) and asks for 'key', the same computation is performed and the request is redirected internally to node B to fetch the data.
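The routing rule can be sketched in Python. This is a minimal sketch of the CRC16-XMODEM variant Redis Cluster uses (poly 0x1021, init 0x0000); hash-tag handling of '{...}' substrings is omitted for brevity:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): poly 0x1021, init 0x0000, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Map a key to one of the 16384 hash slots, as Redis Cluster does."""
    return crc16_xmodem(key) % 16384

# the cluster spec's reference check value: CRC16("123456789") == 0x31C3
print(hex(crc16_xmodem(b"123456789")))  # → 0x31c3
```

Whichever node a client contacts runs this same computation to decide which master owns the key; a node that does not own the slot answers with a redirect to the one that does.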
When a new node D is added, Redis Cluster takes a portion of slots from the front of each existing node's range and moves them to D (I will verify this in the hands-on section that follows). The result looks roughly like this:

node A covers 1365-5460
node B covers 6827-10922
node C covers 12288-16383
node D covers 0-1364, 5461-6826, 10923-12287
Removing a node works the same way in reverse: its slots are first moved to the remaining nodes, and once the migration completes the node can be deleted.
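Using the ranges above, we can check how many slots actually change owner when D joins. A small sketch (the node letters and slot ranges come from the text; only slot ownership is modeled, not key migration):

```python
# ownership of the 16384 slots before D joins
before = {}
for s in range(16384):
    if s <= 5460:
        before[s] = "A"
    elif s <= 10922:
        before[s] = "B"
    else:
        before[s] = "C"

# after the reshard, D takes a leading chunk of each node's range
after = dict(before)
moved_to_d = (list(range(0, 1365))        # taken from A
              + list(range(5461, 6827))   # taken from B
              + list(range(10923, 12288)))  # taken from C
for s in moved_to_d:
    after[s] = "D"

moved = sum(1 for s in before if before[s] != after[s])
print(moved)  # → 4096, exactly 16384/4
```

Only keys living in those 4096 slots have to migrate; keys in the other three quarters of the slot space are untouched, which is what makes online resharding cheap.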
2. Redis installation
Suppose three Linux machines have been prepared (e.g. VirtualBox VMs with bridged networking, so that each machine has its own LAN IP). Configure the host-name mappings in the /etc/hosts file on every machine:

192.168.31.88 hadoop-master
192.168.31.234 hadoop-slave01
192.168.31.186 hadoop-slave02
192.168.31.19 hadoop-macbook
Download the latest Redis from https://redis.io/, upload the tarball to the VMs, and compile it (assuming polysh is already installed):

# add a polysh login alias to ~/.bash_profile
alias hadoop-login="polysh 'hadoop@hadoop-master' 'hadoop@hadoop-slave<01-02>'"
source ~/.bash_profile
# log in to all three machines at once
hadoop-login
# fetch the tarball
scp wenyicao@192.168.31.19:/Users/wenyi/Downloads/redis-3.2.8.tar.gz ./
mv redis-3.2.8.tar.gz ~/workspace
cd ~/workspace
tar -xvf redis-3.2.8.tar.gz
cd redis-3.2.8
# build the redis binaries and install them all into a dedicated directory
make
make install PREFIX=/home/hadoop/workspace/redis-cluster
# machines are limited, so each machine runs two redis processes on different ports (3 masters, 3 slaves)
echo "export PATH=/home/hadoop/workspace/redis-cluster/bin:$PATH" >> ~/.bash_profile
source ~/.bash_profile
# set up instance 1
mkdir master
cp /home/hadoop/workspace/redis-3.2.8/redis.conf /home/hadoop/workspace/redis-cluster/master/redis-master.conf
# edit redis-master.conf (use this machine's own IP in bind):
port 6379
bind 192.168.31.234
daemonize yes
pidfile /var/run/redis_6379.pid
cluster-enabled yes
cluster-config-file nodes_6379.conf
cluster-node-timeout 15000
appendonly yes
# set up instance 2
mkdir slave
cp /home/hadoop/workspace/redis-3.2.8/redis.conf /home/hadoop/workspace/redis-cluster/slave/redis-slave.conf
# edit redis-slave.conf (use this machine's own IP in bind):
port 6380
bind 192.168.31.234
daemonize yes
pidfile /var/run/redis_6380.pid
cluster-enabled yes
cluster-config-file nodes_6380.conf
cluster-node-timeout 15000
appendonly yes
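The two instance files differ only in values derived from the port. A small generator can render both from one template and avoid copy-paste mistakes (a hypothetical helper, not part of the original setup; TEMPLATE mirrors the settings listed above):

```python
# template mirroring the cluster settings from the tutorial;
# {port} and {bind} are the only per-instance values
TEMPLATE = """port {port}
bind {bind}
daemonize yes
pidfile /var/run/redis_{port}.pid
cluster-enabled yes
cluster-config-file nodes_{port}.conf
cluster-node-timeout 15000
appendonly yes
"""

def render(bind: str, port: int) -> str:
    """Render a cluster-enabled redis.conf fragment for one instance."""
    return TEMPLATE.format(bind=bind, port=port)

# e.g. the slave instance's config on hadoop-slave01
print(render("192.168.31.234", 6380))
```

Writing the rendered text to master/redis-master.conf and slave/redis-slave.conf keeps the two files guaranteed-consistent except for the port-derived lines.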
Start the Redis processes:

vim start-redis.sh
# contents of start-redis.sh:
cd /home/hadoop/workspace/redis-cluster/master
redis-server redis-master.conf
cd /home/hadoop/workspace/redis-cluster/slave
redis-server redis-slave.conf

chmod 777 start-redis.sh
sh start-redis.sh
~/workspace/redis-cluster$ ps -ef | grep 'redis'
hadoop  2018  1  0 07:42 ?  00:00:59 ./redis-server 192.168.31.88:6379 [cluster]
hadoop  2022  1  0 07:42 ?  00:01:02 ./redis-server 192.168.31.88:6380 [cluster]
cp /home/hadoop/workspace/redis-3.2.8/src/redis-trib.rb /home/hadoop/workspace/redis-cluster/bin
# create the cluster with one replica per master
redis-trib.rb create --replicas 1 192.168.31.88:6379 192.168.31.88:6380 192.168.31.234:6379 192.168.31.234:6380 192.168.31.186:6379 192.168.31.186:6380
redis-trib.rb check 192.168.31.88:6379
>>> Performing Cluster Check (using node 192.168.31.88:6379)
M: 1f46b01cba0e6f42d51ff5d2206f37d27411ae16 192.168.31.88:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 40de1b50fcafc24cda55d4ae923efed3b4e4e649 192.168.31.186:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 99b6e22ecc195cfaa261f3a56226ddf24e9e5ac7 192.168.31.88:6380
   slots: (0 slots) slave
   replicates f2f54db7aecc20a0311ea7776cde758e52ba9086
M: f2f54db7aecc20a0311ea7776cde758e52ba9086 192.168.31.234:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: e52ecfefc6898a4db9c159bdd04bf0c1e3629037 192.168.31.186:6380
   slots: (0 slots) slave
   replicates 40de1b50fcafc24cda55d4ae923efed3b4e4e649
S: 37f29c44cca874dc965ddb505219f267ad6dc270 192.168.31.234:6380
   slots: (0 slots) slave
   replicates 1f46b01cba0e6f42d51ff5d2206f37d27411ae16
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Note that the Ruby environment must be installed before redis-trib.rb can run:

sudo apt-get install ruby
sudo apt-get install rubygems
gem install redis-3.2.8.gem
Other notes: on first startup each process creates its database file, append-only log, and cluster node file (dump.rdb, appendonly.aof, nodes-6379.conf) in the master or slave directory. After a node joins the cluster for the first time, the inter-node information is written to nodes-6379.conf:

40de1b50fcafc24cda55d4ae923efed3b4e4e649 192.168.31.186:6379 master - 0 1494036866836 5 connected 10923-16383
99b6e22ecc195cfaa261f3a56226ddf24e9e5ac7 192.168.31.88:6380 slave f2f54db7aecc20a0311ea7776cde758e52ba9086 0 1494036866399 3 connected
f2f54db7aecc20a0311ea7776cde758e52ba9086 192.168.31.234:6379 master - 0 1494036866995 3 connected 5461-10922
1f46b01cba0e6f42d51ff5d2206f37d27411ae16 192.168.31.88:6379 myself,master - 0 0 1 connected 0-5460
e52ecfefc6898a4db9c159bdd04bf0c1e3629037 192.168.31.186:6380 slave 40de1b50fcafc24cda55d4ae923efed3b4e4e649 0 1494036866836 6 connected
37f29c44cca874dc965ddb505219f267ad6dc270 192.168.31.234:6380 slave 1f46b01cba0e6f42d51ff5d2206f37d27411ae16 0 1494036866995 4 connected
vars currentEpoch 6 lastVoteEpoch 0
Therefore, on later restarts of the cluster there is no need to run redis-trib.rb create --replicas 1 192.168.31.88:6379 192.168.31.88:6380 192.168.31.234:6379 192.168.31.234:6380 192.168.31.186:6379 192.168.31.186:6380 again; simply running sh start-redis.sh to start the Redis processes is enough. You can confirm the cluster has re-formed by running redis-trib.rb check 192.168.31.88:6379.
3. A Spring test program
http://blog.csdn.net/cweeyii/article/details/71369145