This article walks through several ways to install the Ceph distributed storage system. The steps are described in detail and should be a useful reference.
I. Installing from source
Note: building from source is a good way to get to know each component of the system, but the process is tedious, mainly because of the large number of dependencies. I tried it on both CentOS and Ubuntu and it works on both.
1 Download Ceph from http://ceph.com/download/
wget http://ceph.com/download/ceph-0.72.tar.gz
2 Install the basic build tools: apt-get install automake autoconf libtool make
3 Extract the source and generate the build scripts
#tar zxvf ceph-0.72.tar.gz
#cd ceph-0.72
#./autogen.sh
4 Install the remaining build dependencies
#apt-get install autotools-dev autoconf automake cdbs g++ gcc git libatomic-ops-dev libboost-dev \
libcrypto++-dev libcrypto++ libedit-dev libexpat1-dev libfcgi-dev libfuse-dev \
libgoogle-perftools-dev libgtkmm-2.4-dev libtool pkg-config uuid-dev libkeyutils-dev btrfs-tools
5 Errors you may run into while configuring and building
5.1 fuse:
apt-get install libfuse-dev
5.2 tcmalloc:
wget https://gperftools.googlecode.com/files/gperftools-2.1.zip
Build and install google-perftools from this archive.
5.3 libedit:
apt-get install libedit-dev
5.4 no libatomic-ops found:
apt-get install libatomic-ops-dev
5.5 snappy:
https://snappy.googlecode.com/files/snappy-1.1.1.tar.gz
5.6 libleveldb not found:
https://leveldb.googlecode.com/files/leveldb-1.14.0.tar.gz
make
cp libleveldb.* /usr/lib
cp -r include/leveldb /usr/local/include
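If the build still cannot find leveldb after copying the files, refreshing the dynamic linker cache is the usual extra step (not mentioned in the original article, but standard after copying shared libraries into /usr/lib):
ldconfig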
5.7 libaio:
apt-get install libaio-dev
5.8 boost:
apt-get install libboost-dev
apt-get install libboost-thread-dev
apt-get install libboost-program-options-dev
5.9 g++:
apt-get install g++
6 Build and install
#./configure --prefix=/opt/ceph/
#make
#make install
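Because the install prefix is /opt/ceph rather than /usr, the resulting binaries and libraries sit outside the default search paths. A minimal sketch of making them visible in the current shell, assuming the --prefix above:
export PATH=/opt/ceph/bin:$PATH
export LD_LIBRARY_PATH=/opt/ceph/lib:$LD_LIBRARY_PATH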
II. Using the ceph that ships with Ubuntu 12.04 (the version will likely be ceph version 0.41)
Resources:
Two machines running Ubuntu 12.04: one server and one client.
When installing the server, set aside two extra partitions to back osd0 and osd1; if you did not, two loop devices created after installation work just as well (a sketch follows below).
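A minimal sketch of backing the two OSDs with loop devices, assuming 4 GB image files under /srv (the file names, sizes and loop device numbers are my own choices, not from the article):
dd if=/dev/zero of=/srv/osd0.img bs=1M count=4096
dd if=/dev/zero of=/srv/osd1.img bs=1M count=4096
losetup /dev/loop0 /srv/osd0.img
losetup /dev/loop1 /srv/osd1.img
mkfs.xfs -f /dev/loop0
mkfs.xfs -f /dev/loop1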
1. Install Ceph on the server (MON, MDS, OSD)
apt-cache search ceph
apt-get install ceph
apt-get install ceph-common
2. Add the release key to APT, update sources.list, and install ceph
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
apt-get update && sudo apt-get install ceph
3. Check the version
# ceph -v    // prints the ceph version and key information
If nothing is shown, run:
# sudo apt-get update && apt-get upgrade
4. Configuration file
# vim /etc/ceph/ceph.conf
[global]
# For version 0.55 and beyond, you must explicitly enable
# or disable authentication with "auth" entries in [global].
auth cluster required = none
auth service required = none
auth client required = none
[osd]
osd journal size = 1000
#The following assumes ext4 filesystem.
filestore xattr use omap = true
# For Bobtail (v 0.56) and subsequent versions, you may
# add settings for mkcephfs so that it will create and mount
# the file system on a particular OSD for you. Remove the comment `#`
# character for the following settings and replace the values
# in braces with appropriate values, or leave the following settings
# commented out to accept the default values. You must specify the
# --mkfs option with mkcephfs in order for the deployment script to
# utilize the following settings, and you must define the 'devs'
# option for each osd instance; see below.
osd mkfs type = xfs
osd mkfs options xfs = -f # default for xfs is "-f"
osd mount options xfs = rw,noatime # default mount option is "rw,noatime"
# For example, for ext4, the mount option might look like this:
#osd mkfs options ext4 = user_xattr,rw,noatime
# Execute $ hostname to retrieve the name of your host,
# and replace {hostname} with the name of your host.
# For the monitor, replace {ip-address} with the IP
# address of your host.
[mon.a]
host = ceph2
mon addr = 192.168.1.1:6789
[osd.0]
host = ceph2
# For Bobtail (v 0.56) and subsequent versions, you may
# add settings for mkcephfs so that it will create and mount
# the file system on a particular OSD for you. Remove the comment `#`
# character for the following setting for each OSD and specify
# a path to the device if you use mkcephfs with the --mkfs option.
devs = /dev/sdb1
[mds.a]
host = ceph2
5. Run the initialization
sudo mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
Note: every re-initialization needs the old data directories emptied and recreated first:
rm -rf /var/lib/ceph/osd/ceph-0/*
rm -rf /var/lib/ceph/osd/ceph-1/*
rm -rf /var/lib/ceph/mon/ceph-a/*
rm -rf /var/lib/ceph/mds/ceph-a/*
mkdir -p /var/lib/ceph/osd/ceph-0
mkdir -p /var/lib/ceph/osd/ceph-1
mkdir -p /var/lib/ceph/mon/ceph-a
mkdir -p /var/lib/ceph/mds/ceph-a
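If you are using the dedicated partitions mentioned earlier rather than loop devices, they also have to be formatted and mounted onto the OSD data directories before mkcephfs runs. A sketch for osd.0 only, with the device name taken from the devs entry in the config above (adjust it to your layout; the command is destructive):
mkfs.xfs -f /dev/sdb1
mount -o rw,noatime /dev/sdb1 /var/lib/ceph/osd/ceph-0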
6. Start the services
service ceph -a start
7. Run a health check
ceph health
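If everything came up, this should eventually report HEALTH_OK; a persistent HEALTH_WARN or HEALTH_ERR usually points back at the OSD or monitor setup.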
8. With ext4 the OSD partition failed to mount (mount error 5); reformatting it as XFS fixed the problem:
mkfs.xfs -f /dev/sda7
9. On the client:
sudo mkdir /mnt/mycephfs
sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
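With the configuration above (monitor at 192.168.1.1, authentication disabled) the mount would look roughly like this; the client needs the ceph kernel module available:
sudo mount -t ceph 192.168.1.1:6789:/ /mnt/mycephfs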
III. Installing with ceph-deploy
1. Download
https://github.com/ceph/ceph-deploy/archive/master.zip
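Unpack the archive and work from the resulting directory (the directory name assumes GitHub's default naming for a master-branch zip):
unzip master.zip
cd ceph-deploy-master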
2. Install python-virtualenv and bootstrap ceph-deploy:
apt-get install python-virtualenv
./bootstrap
3. Install the Ceph packages on the target node:
ceph-deploy install ubuntu1
4. Create a new cluster configuration with the node as the initial monitor:
ceph-deploy new ubuntu1
5. Create the monitor:
ceph-deploy mon create ubuntu1
6. Gather the keys:
ceph-deploy gatherkeys
If it fails complaining that there is no keyring, run
ceph-deploy forgetkeys
and try again. This step produces:
{cluster-name}.client.admin.keyring
{cluster-name}.bootstrap-osd.keyring
{cluster-name}.bootstrap-mds.keyring
7. Create the OSD, pointing ceph-deploy at the disk or partition:
ceph-deploy osd create ubuntu1:/dev/sdb1
Possible errors (see the commands sketched below):
1. The disk is already mounted; umount it first.
2. The disk is not formatted correctly; partition it with fdisk and format it with mkfs.xfs -f /dev/sdb1.
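A rough clean-up sequence before retrying the OSD creation; the device names come from the step above and the commands are destructive, so double-check them:
umount /dev/sdb1
fdisk /dev/sdb        # recreate the partition interactively if needed
mkfs.xfs -f /dev/sdb1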
8. Check the cluster status:
ceph -s
Possible error: the report complains there are no OSDs,
health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
in which case run ceph osd create.
9. Sample ceph -s output:
cluster faf5e4ae-65ff-4c95-ad86-f1b7cbff8c9a
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
monmap e1: 1 mons at {ubuntu1=12.0.0.115:6789/0}, election epoch 1, quorum 0 ubuntu1
osdmap e10: 3 osds: 1 up, 1 in
pgmap v17: 192 pgs, 3 pools, 0 bytes data, 0 objects
1058 MB used, 7122 MB / 8181 MB avail
192 active+degraded
10. Mounting on the client
Note: the filesystem has to be mounted with a user name and key.
10.1 Look up the key:
cat /etc/ceph/ceph.client.admin.keyring
ceph-authtool --print-key ceph.client.admin.keyring
AQDNE4xSyN1WIRAApD1H/glMB5VSLwmmnt7UDw==
10.2 Mount (see the sketch below):
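A minimal sketch of the mount using the monitor address and admin key shown above; name= and secret= are the standard cephfs kernel-client mount options:
sudo mkdir -p /mnt/mycephfs
sudo mount -t ceph 12.0.0.115:6789:/ /mnt/mycephfs -o name=admin,secret=AQDNE4xSyN1WIRAApD1H/glMB5VSLwmmnt7UDw==
Using secretfile=/etc/ceph/admin.secret instead of secret= keeps the key out of the shell history.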
Other notes:
1. With more than one machine, set up passwordless SSH between them with ssh-keygen (a sketch follows below).
2. It is best to use a dedicated disk partition for the object store; there are several different ways to format it.
3. You will always run into one error or another; each has to be analyzed and fixed on its own (ceph-deploy forgetkeys is useful for discarding stale keys before retrying).
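A sketch of the passwordless-SSH setup from note 1, run on the admin node for every other host (the user and host name are placeholders):
ssh-keygen -t rsa           # accept the defaults, empty passphrase
ssh-copy-id user@ubuntu2    # repeat for each node in the cluster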