I. Pool Management (these operations can be performed on the admin node and on all three storage nodes)
1. Pool Introduction

We have finished deploying the Ceph cluster above, but how do we actually store data in Ceph? First we need to define a Pool in Ceph. A Pool is Ceph's abstraction for storing Object data; you can think of it as a logical partition carved out of the Ceph storage. A Pool is made up of multiple PGs, the PGs are mapped onto different OSDs by the CRUSH algorithm, and a Pool also carries a replica size, which defaults to 3.

A Ceph client requests the cluster state from a monitor and writes data into a Pool; based on the number of PGs, the CRUSH algorithm maps the data onto different OSD nodes, which is how the data gets stored.

So a Pool can be understood as the logical unit that holds Object data. The current cluster has no pool yet, so we need to define one. We will create a Pool named mypool with its PG count set to 64; when setting the PGs you must also set the PGP, and the two values are normally kept identical. (As a rough sizing rule, total PGs ≈ number of OSDs × 100 / replica count, rounded to a power of two, so 64 or 128 is reasonable for a small test cluster.)

PG (Placement Group): a virtual concept, used to hold Objects.
PGP (Placement Group for Placement purpose): effectively an OSD placement permutation used to place the PGs.
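To make the Object → PG → OSD mapping above concrete, here is a small sketch you can run once mypool has been created below; the object name test-obj and the uploaded file are arbitrary examples:

[root@admin ceph]# rados -p mypool put test-obj /etc/hosts    # store an arbitrary file as object "test-obj"
[root@admin ceph]# ceph osd map mypool test-obj               # show the PG and the OSD set CRUSH maps it to
[root@admin ceph]# rados -p mypool rm test-obj                # remove the test object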
2. Command-Line Operations
Create
[root@admin ~]# cd /etc/ceph
[root@admin ceph]# ceph osd pool create mypool 64 64    # pg_num = 64, pgp_num = 64
pool mypool already exists    (the pool already existed from an earlier run; on a first run Ceph reports the pool as created)
Query

[root@admin ceph]# ceph osd pool ls    # list the Pool resource pools
mypool
[root@admin ceph]# rados lspools
mypool
[root@admin ceph]# ceph osd lspools
1 mypool

[root@admin ceph]# ceph osd pool get mypool size    # check the pool's replica count
size: 2    (the outputs here already reflect the Update step below; a freshly created mypool would report size: 3 and pg_num: 64)

[root@admin ceph]# ceph osd pool get mypool pg_num    # check the PG and PGP counts
pg_num: 128
[root@admin ceph]# ceph osd pool get mypool pgp_num
pgp_num: 128
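Two more queries are handy at this point: ceph osd pool get with the all keyword dumps every per-pool parameter at once, and ceph df summarizes per-pool capacity and usage:

[root@admin ceph]# ceph osd pool get mypool all    # dump all parameters of the pool in one go
[root@admin ceph]# ceph df                         # cluster-wide and per-pool usage summary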
Update

[root@admin ceph]# ceph osd pool set mypool pg_num 128    # change pg_num and pgp_num to 128
pg_num: 128
[root@admin ceph]# ceph osd pool set mypool pgp_num 128
pgp_num: 128
[root@admin ceph]# ceph osd pool get mypool pg_num
pg_num: 128
[root@admin ceph]# ceph osd pool get mypool pgp_num
pgp_num: 128

[root@admin ceph]# ceph osd pool set mypool size 2    # change the Pool's replica count to 2
set pool 1 size to 2
[root@admin ceph]# ceph osd pool get mypool size
size: 2

To change the default replica count used by newly created pools, edit ceph.conf and then push it to the other nodes (as shown below):

vim ceph.conf    # change the default replica count to 2
......
osd_pool_default_size = 2
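As a sketch of a runtime alternative that avoids editing the file, the option can also be injected into the running monitors via the standard injectargs mechanism; note that injected values do not survive a daemon restart:

[root@admin ceph]# ceph tell mon.* injectargs --osd_pool_default_size=2    # runtime change, not persistent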
[root@admin ceph]# ceph-deploy --overwrite-conf config push node01 node02 node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy --overwrite-conf config push node01 node02 node03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : push
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4ed78e7c68>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['node01', 'node02', 'node03']
[ceph_deploy.cli][INFO ] func : <function config at 0x7f4ed7d261b8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.config][DEBUG ] Pushing config to node01
[node01][DEBUG ] connected to host: node01
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to node02
[node02][DEBUG ] connected to host: node02
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to node03
[node03][DEBUG ] connected to host: node03
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
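To confirm the new default took effect, a quick sketch (testpool is a hypothetical throwaway name, and the monitors must have re-read the pushed option, e.g. after a restart or via injectargs):

[root@admin ceph]# ceph osd pool create testpool 32 32    # hypothetical throwaway pool
[root@admin ceph]# ceph osd pool get testpool size        # should now report size: 2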
Delete

① Deleting a storage pool carries the risk of data loss, so Ceph forbids the operation by default; an administrator must first enable pool deletion in the ceph.conf configuration file.
vim ceph.conf    # allow deleting Pool resource pools
......
[mon]
mon allow pool delete = true

② Push the ceph.conf configuration file to all mon nodes
[root@admin ceph]# ceph-deploy --overwrite-conf config push node01 node02 node03
......    (the output is identical to the config push shown above)

③ Restart the ceph-mon service on all mon nodes
[root@admin ceph]# systemctl restart ceph-mon.target    # run this on every mon node
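Optionally, verify on a mon node that the option is now active; ceph daemon talks to the local admin socket, so this must run on the mon host itself (the mon id node01 is assumed to match the hostname):

[root@node01 ~]# ceph daemon mon.node01 config show | grep mon_allow_pool_delete    # expect "true"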
④ Run the pool deletion command

[root@admin ceph]# ceph osd pool rm pool01 pool01 --yes-i-really-really-mean-it    # the pool name must be typed twice, plus the confirmation flag
pool pool01 does not exist    (pool01 was never created in this walkthrough; the command is shown for demonstration)
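The same pattern removes the mypool created earlier; a sketch, and only worth running if you really intend to destroy its data. Afterwards the list commands from the Query step confirm it is gone:

[root@admin ceph]# ceph osd pool rm mypool mypool --yes-i-really-really-mean-it
[root@admin ceph]# ceph osd pool ls    # mypool should no longer be listed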