This article demonstrates online resizing of Ceph RBD images through a set of worked examples. The walkthrough is short and easy to follow, and covers the three situations you are likely to hit: a bare image, an image that is already formatted and mounted, and an image attached to a VM.
Before resizing
[root@mon0 ceph]# rbd create myrbd/rbd1 -s 1024 --image-format=2
[root@mon0 ceph]# rbd ls myrbd
rbd1
[root@mon0 ceph]# rbd info myrbd/rbd1
rbd image 'rbd1':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.12ce6b8b4567
        format: 2
        features: layering
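Note that on this release the -s/--size flag takes megabytes, so the command above creates a 1 GiB image. On newer Ceph releases the same create can also pin the image features explicitly rather than relying on defaults. A minimal sketch, assuming the same myrbd pool (rbd2 is an illustrative name, not part of the original walkthrough):

# create a 1 GiB format-2 image with only the layering feature enabled
rbd create myrbd/rbd2 --size 1024 --image-format 2 --image-feature layering
# confirm the result
rbd info myrbd/rbd2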
Resizing
[root@mon0 ceph]# rbd resize myrbd/rbd1 -s 2048
Resizing image: 100% complete...done.
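Before touching any filesystem it is worth confirming that the image really did grow. A quick check, reusing the rbd info command from above:

rbd info myrbd/rbd1 | grep size    # should now report a 2048 MB image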
As long as rbd1 has not been formatted and mounted, a plain resize like the one above is all it takes. If rbd1 is already formatted and mounted, a few extra steps are needed:
[root@mon0 ceph]# rbd map myrbd/rbd1
[root@mon0 ceph]# rbd showmapped
id pool  image    snap device
0  test  test.img -    /dev/rbd0
1  myrbd rbd1     -    /dev/rbd1
[root@mon0 ceph]# mkfs.xfs /dev/rbd1
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=64512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@mon0 ceph]# mount /dev/rbd1 /mnt
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       2.0G   33M  2.0G   2% /mnt
[root@mon0 ceph]# rbd resize myrbd/rbd1 -s 4096
Resizing image: 100% complete...done.
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       2.0G   33M  2.0G   2% /mnt
[root@mon0 ceph]# xfs_growfs /mnt
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=64512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 524288 to 1048576
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       4.0G   33M  4.0G   1% /mnt
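Note that the df -h immediately after the resize still shows 2.0G: the block device has grown, but the filesystem has not until xfs_growfs runs. That step is XFS-specific. If the image had been formatted with ext4 instead, resize2fs is the counterpart and can likewise grow a mounted filesystem online. A minimal sketch under that assumption (same device and mount point as above, but carrying ext4):

# assuming /dev/rbd1 carries ext4 and is mounted at /mnt
rbd resize myrbd/rbd1 -s 4096
resize2fs /dev/rbd1      # grows the mounted ext4 filesystem to fill the device
df -h /mnt               # should now show the new size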
There is one more case: rbd1 is already attached as a disk to a VM:
virsh domblklist myvm
rbd resize myrbd/rbd1    # for the guest to see the change, the resize must go through virsh blockresize
virsh blockresize --domain myvm --path vdb --size 100G
rbd info myrbd/rbd1
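virsh blockresize only enlarges the disk as the guest sees it; the filesystem inside the VM still has to be expanded from within, just as in the mounted case above. A sketch of the in-guest follow-up, assuming the disk appears as /dev/vdb with an XFS filesystem mounted at /data (both names are illustrative):

# inside the guest, after virsh blockresize has run on the host
lsblk /dev/vdb        # the new device size should be visible here
xfs_growfs /data      # grow XFS in place; for ext4 use: resize2fs /dev/vdb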
That covers online resizing of Ceph RBD images: an unused image can simply be resized, a formatted and mounted one additionally needs xfs_growfs (or its filesystem's equivalent), and a disk attached to a VM goes through virsh blockresize. Thanks for reading, and hopefully this walkthrough clears things up.