This article walks through a troubleshooting case involving Ceph's rbdmap init script. I found it quite practical, so I am sharing it here for reference.
The rbdmap script running on CentOS 6.5:
[root@mon0 ceph]# cat /etc/init.d/rbdmap
#!/bin/bash
#
# rbdmap Ceph RBD Mapping
#
# chkconfig: 2345 70 70
# description: Ceph RBD Mapping
### BEGIN INIT INFO
# Provides: rbdmap
# Required-Start: $network $remote_fs
# Required-Stop: $network $remote_fs
# Should-Start: ceph
# Should-Stop: ceph
# X-Start-Before: $x-display-manager
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Ceph RBD Mapping
# Description: Ceph RBD Mapping
### END INIT INFO
DESC="RBD Mapping:"
RBDMAPFILE="/etc/ceph/rbdmap"
. /lib/lsb/init-functions
do_map() {
	if [ ! -f "$RBDMAPFILE" ]; then
		#log_warning_msg "$DESC : No $RBDMAPFILE found."
		exit 0
	fi
	# Read /etc/rbdtab to create non-existent mapping
	RET=0
	while read DEV PARAMS; do
		case "$DEV" in
			""|\#*)
				continue
				;;
			*/*)
				;;
			*)
				DEV=rbd/$DEV
				;;
		esac
		#log_action_begin_msg "${DESC} '${DEV}'"
		newrbd=""
		MAP_RV=""
		RET_OP=0
		CMDPARAMS=""  # reset per device so options do not leak between entries
		OIFS=$IFS
		IFS=','
		for PARAM in ${PARAMS[@]}; do
			CMDPARAMS="$CMDPARAMS --$(echo $PARAM | tr '=' ' ')"
		done
		IFS=$OIFS
		if [ ! -b /dev/rbd/$DEV ]; then
			MAP_RV=$(rbd map $DEV $CMDPARAMS 2>&1)
			if [ $? -eq 0 ]; then
				newrbd="yes"
			else
				RET=$((${RET}+$?))
				RET_OP=1
			fi
		fi
		#log_action_end_msg ${RET_OP} "${MAP_RV}"
		if [ "$newrbd" ]; then
			## Mount new rbd
			MNT_RV=""
			mount --fake /dev/rbd/$DEV >>/dev/null 2>&1 \
				&& MNT_RV=$(mount -v /dev/rbd/$DEV 2>&1)
			[ -n "${MNT_RV}" ] && log_action_msg "mount: ${MNT_RV}"
			## post-mapping
			if [ -x "/etc/ceph/rbd.d/${DEV}" ]; then
				#log_action_msg "RBD Running post-map hook '/etc/ceph/rbd.d/${DEV}'"
				/etc/ceph/rbd.d/${DEV} map "/dev/rbd/${DEV}"
			fi
		fi
	done < $RBDMAPFILE
	exit ${RET}
}
do_unmap() {
	RET=0
	## Unmount and unmap all rbd devices
	if ls /dev/rbd[0-9]* >/dev/null 2>&1; then
		for DEV in /dev/rbd[0-9]*; do
			## pre-unmapping
			for L in $(find /dev/rbd -type l); do
				LL="${L##/dev/rbd/}"
				if [ "$(readlink -f $L)" = "${DEV}" ] \
					&& [ -x "/etc/ceph/rbd.d/${LL}" ]; then
					log_action_msg "RBD pre-unmap: '${DEV}' hook '/etc/ceph/rbd.d/${LL}'"
					/etc/ceph/rbd.d/${LL} unmap "$L"
					break
				fi
			done
			#log_action_begin_msg "RBD un-mapping: '${DEV}'"
			UMNT_RV=""
			UMAP_RV=""
			RET_OP=0
			MNT=$(findmnt --mtab --source ${DEV} --noheadings | awk '{print $1}')
			if [ -n "${MNT}" ]; then
				# log_action_cont_msg "un-mounting '${MNT}'"
				UMNT_RV=$(umount "${MNT}" 2>&1)
			fi
			if mountpoint -q "${MNT}"; then
				## Un-mounting failed.
				RET_OP=1
				RET=$((${RET}+1))
			else
				## Un-mapping.
				UMAP_RV=$(rbd unmap $DEV 2>&1)
				if [ $? -ne 0 ]; then
					RET=$((${RET}+$?))
					RET_OP=1
				fi
			fi
			#log_action_end_msg ${RET_OP} "${UMAP_RV}"
			[ -n "${UMNT_RV}" ] && log_action_msg "${UMNT_RV}"
		done
	fi
	exit ${RET}
}
case "$1" in
	start)
		do_map
		;;
	stop)
		do_unmap
		;;
	restart|force-reload)
		$0 stop
		$0 start
		;;
	reload)
		do_map
		;;
	status)
		rbd showmapped
		;;
	*)
		log_success_msg "Usage: rbdmap {start|stop|restart|force-reload|reload|status}"
		exit 1
		;;
esac
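The parameter-conversion loop in do_map() is worth isolating: each comma-separated key=value pair in /etc/ceph/rbdmap becomes a --key value flag for rbd map. A standalone sketch (the sample options here are hypothetical, not from the setup described below):

```shell
#!/bin/sh
# Convert a rbdmap-style options string into rbd map CLI flags,
# mirroring the loop in do_map(). Options are illustrative only.
PARAMS="id=admin,keyring=/etc/ceph/keyring"

CMDPARAMS=""  # must start empty for each device
OIFS=$IFS
IFS=','
for PARAM in $PARAMS; do
	# "id=admin" -> "--id admin"
	CMDPARAMS="$CMDPARAMS --$(echo "$PARAM" | tr '=' ' ')"
done
IFS=$OIFS

echo "rbd map backup1/backup.img$CMDPARAMS"
```

Note that the stock loop never resets CMDPARAMS between iterations of the outer while loop, so options from one rbdmap entry can leak into the next device's rbd map invocation.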
After swapping out a few log_* helpers that do not exist on CentOS, the script still misbehaves. Specifically:
1. With only a single image mapped via rbd map (yielding /dev/rbd0), the rbdmap script can map and unmap /dev/rbd0 normally.
2. Once /dev/rbd0 is formatted and mounted on a directory, shutdown hangs at "Unmounting filesystems" and the machine has to be powered off. After the forced reboot, do_map() runs and everything comes up normally again.
Further digging showed that once /dev/rbd0 is mounted on a directory, do_unmap() is never executed at shutdown; even an explicit umount added inside the function never runs.
As a stopgap, the umount was added to the stop function of a service that stops earlier than rbdmap, after which reboots work normally.
First, look at the start/stop ordering of ceph and rbdmap:
head rbdmap
#!/bin/bash
#
# rbdmap Ceph RBD Mapping
#
# chkconfig: 2345 70 70
# description: Ceph RBD Mapping
head ceph
#!/bin/sh
# Start/stop ceph daemons
# chkconfig: - 60 80
### BEGIN INIT INFO
# Provides: ceph
# Default-Start:
# Default-Stop:
# Required-Start: $remote_fs $named $network $time
# Required-Stop: $remote_fs $named $network $time
So ceph starts before rbdmap (start priority 60 vs. 70), and rbdmap stops before ceph (stop priority 70 vs. 80). Since NFS is used to export the rbdmap-mapped block devices here, check nfs's start/stop ordering as well:
head /etc/init.d/nfs
#!/bin/sh
#
# nfs This shell script takes care of starting and stopping
# the NFS services.
#
# chkconfig: - 30 60
# description: NFS is a popular protocol for file sharing across networks.
# This service provides NFS server functionality, which is \
# configured via the /etc/exports file.
# probe: true
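Putting the three chkconfig headers side by side: at shutdown, the kill order follows the stop priority, with the lower K-number stopping first. A quick sketch of the resulting order (priorities taken from the headers above):

```shell
#!/bin/sh
# Shutdown kill order by chkconfig stop priority:
#   nfs 60, rbdmap 70, ceph 80 -- lower K-number stops first.
ORDER=$(printf '%s\n' "60 nfs" "70 rbdmap" "80 ceph" | sort -n | awk '{print $2}')
echo "$ORDER"
```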
Of the three services, nfs is the first to start and the first to stop, so the umount commands go into nfs's stop function:
umount /mnt/nfs
umount /mnt/nfs2
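In /etc/init.d/nfs the change amounts to unmounting the RBD-backed exports at the top of stop(), before nfsd itself shuts down. A sketch of the shape of the modification (umount is stubbed out here so the ordering can be demonstrated safely; the real function body is elided):

```shell
#!/bin/sh
# Sketch of the /etc/init.d/nfs stop() modification.
# umount is stubbed so this can run outside an init context.
UNMOUNTED=""
umount() { UNMOUNTED="$UNMOUNTED $1"; }

stop() {
	# The rbd devices are still mapped at this point: rbdmap (K70)
	# and ceph (K80) only stop after nfs (K60).
	umount /mnt/nfs
	umount /mnt/nfs2
	# ... original nfs stop logic would follow here ...
}

stop
echo "unmounted:$UNMOUNTED"
```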
And the corresponding mount commands are added to rbdmap:
mount /dev/rbd0 -o rw,noexec,nodev,noatime,nobarrier,discard /mnt/nfs
mount /dev/rbd1 -o rw,noexec,nodev,noatime,nobarrier,discard /mnt/nfs2
/etc/ceph/rbdmap is configured as follows:
backup1/backup.img
backup2/backup.img
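Given that file, do_map() maps one image per line, each line being pool/image followed by optional parameters. A minimal stand-in for the reader loop (rbd map itself is only echoed here, and the file contents are fed in via a pipe):

```shell
#!/bin/sh
# Simulate do_map() reading the two-entry /etc/ceph/rbdmap above.
gen_map_cmds() {
	while read -r DEV PARAMS; do
		case "$DEV" in
			""|\#*) continue ;;       # skip blanks and comments
			*/*)    ;;                # already pool/image
			*)      DEV=rbd/$DEV ;;   # bare name -> default pool "rbd"
		esac
		echo "rbd map $DEV"           # do_map would run this, then mount
	done
}

OUT=$(printf '%s\n' "backup1/backup.img" "backup2/backup.img" | gen_map_cmds)
echo "$OUT"
```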
As I recall, this all worked fine when tested on ceph 0.80; the problem described above appeared in 0.87.1.
Original post: https://my.oschina.net/renguijiayi/blog/382386