====== ZFS Commands and Operations Notes ======
  * The ZFS operations below were performed on a PVE (PROXMOX Virtual Environment) host
===== ZFS Basic Syntax =====
  * https://
  * zpool
  * zfs
===== Creating a ZFS Pool on a New Disk =====
  * https://
  * Create a partition for ZFS with fdisk, Exp. /dev/sdb
    * g : create a new GPT disklabel on the disk
    * n : create a new partition
    * t : set the partition type to 48 - Solaris /usr & Apple ZFS
    * w : write the changes to disk
  * Create the pool with the ZFS tools, Exp. /dev/sdb2 -> ssd-zpool <cli>
zpool create -f -o ashift=12 ssd-zpool /dev/sdb2
zfs set compression=lz4 atime=off ssd-zpool
zpool list
</cli>
<code>
root@TP-PVE-249:
NAME        SIZE  ALLOC
rpool
ssd-zpool
</code>
  * Then add it through the PVE web UI: Datacenter -> Storage -> Add -> ZFS
    * Enter an ID, Exp. ssd-zfs
    * Select ZFS Pool : ssd-zpool
  * The ZFS disk is now available as PVE storage
===== Limiting How Much RAM ZFS Uses for Cache =====
  * Reference - https://
  * By default ZFS uses 50% of the host's RAM as cache. To change this, set zfs_arc_max. For ZFS performance, zfs_arc_max should not be less than 2 GiB base + 1 GiB per TiB of ZFS storage; in other words, a 1 TiB ZFS pool needs at least 2+1 = 3 GiB of RAM.
  * Exp. limit ZFS to at most 3 GB of RAM for cache
    * The value in bytes can be worked out on the shell: <cli>
root@aac:~# echo "$[3 * 1024*1024*1024]"
3221225472
</cli>
    * Edit / <cli>
vi /
</cli>
    * and add the setting: <code>
options zfs zfs_arc_max=3221225472
</code>
  * If root is not on ZFS, the new value can also be applied immediately: <cli>
echo "$[3 * 1024*1024*1024]"
</cli>
  * If root is on ZFS, the following must be run and the host rebooted: <cli>
update-initramfs -u -k all
reboot
</cli>
===== Re-adding a ZFS Data Disk to the Host =====
Because the system disk failed and the OS was reinstalled, the existing ZFS data disk has to be imported back into the new system.
  * The data disk / <code>
root@nuc:/
Disk /
Disk model: PLEXTOR PX-1TM9PeGN
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/
I/O size (minimum/
Disklabel type: gpt
Disk identifier: 31E1D4CF-870D-E741-B28E-2305FDF86533

Device
/
/
</code>
  * Use zdb -l / to read the ZFS label on the partition <code>
root@nuc:/
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: '
    state: 0
    txg: 5514533
    pool_guid: 1902468729180364296
    errata: 0
    hostid: 585158084
    hostname: '
    top_guid: 1144094455533164821
    guid: 1144094455533164821
    vdev_children:
    vdev_tree:
        type: '
        id: 0
        guid: 1144094455533164821
        path: '/
        devid: '
        phys_path: '
        whole_disk: 1
        metaslab_array:
        metaslab_shift:
        ashift: 12
        asize: 1024195035136
        is_log: 0
        DTL: 25995
        create_txg: 4
    features_for_read:
        com.delphix:
        com.delphix:
    labels = 0 1 2 3
</code>
  * Run zpool import -d / <code>
root@nuc:/
   pool: local-zfs
     id: 1902468729180364296
  state: ONLINE

        the '
   see: http://

        local-zfs
          nvme0n1
</code>
  * As the message above indicates, the -f flag is required; the following command brings local-zfs back into the system <cli>
zpool import -f local-zfs
</cli>
  * References -
    * https://
    * https://
===== Renaming a zpool =====
  * The zpool is currently named ssd-zfs and we want to rename it to ssd-zpool <cli>
zpool export ssd-zfs
zpool import ssd-zfs ssd-zpool
</cli>
  * Reference - https://
===== Removing (Destroying) a zpool =====
  * The zpool named ssd2-zfs has Fragmentation > 20%, so we want to remove it and rebuild it <cli>
zpool destroy ssd2-zfs
</cli>
===== Handling a New Disk That Carries an Existing zpool Name =====
  * The simplest approach is to wipe the disk with fdisk on another machine before installing it; otherwise boot can fail with a message such as <code>
Message: cannot import '
import by numeric ID instead
Error: 1
</code>
  - Find the ids of the pools named rpool <cli>
/sbin/zpool import
</cli>
  - Identify the id of the rpool you actually want and import it by id, Exp. 13396254673059535051: <cli>
/sbin/zpool import -N 13396254673059535051
</cli>
  - If no other message appears, the import succeeded and the boot can continue.

<note warning>
  * After the system has booted successfully, the duplicate pool on the old disk should be cleaned up (see the sketch after this note).
  * Otherwise the same procedure has to be repeated on the next boot.
</note>
===== Replacing a Disk When a zpool Is Degraded =====
  * Exp. pbs-zpool <code>
  pool: pbs-zpool

status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Oct 29 15:43:26 2020
        355G scanned at 49.8M/s, 96.9G issued at 13.6M/s, 1.55T total
        97.1G resilvered, 6.09% done, 1 days 07:14:16 to go
config:

        NAME         STATE     READ WRITE CKSUM
        pbs-zpool
          sdb1       DEGRADED

errors: No known data errors
</code>
  * Assume the newly installed disk is sdf , which first needs [[tech/
  * Attach the newly created sdf1 to pbs-zpool (as a mirror of the failing sdb1) <cli>
zpool attach pbs-zpool sdb1 sdf1
</cli>
  * Check the progress with <cli>zpool status pbs-zpool</cli> <code>
  pool: pbs-zpool

status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Oct 29 15:43:26 2020
        355G scanned at 46.4M/s, 101G issued at 13.2M/s, 1.55T total
        102G resilvered, 6.34% done, 1 days 08:09:31 to go
config:

        NAME           STATE     READ WRITE CKSUM
        pbs-zpool
          mirror-0
            sdb1       DEGRADED
            sdf1       ONLINE

errors: No known data errors
</code>
  * Once the resilvering of sdf1 into pbs-zpool is complete, the faulty sdb1 can be removed <cli>
zpool clear pbs-zpool
zpool detach pbs-zpool sdb1
zpool status pbs-zpool
</cli>
<code>
  pool: pbs-zpool

  scan: resilvered 1.35T in 1 days 04:24:31 with 0 errors on Mon Nov 2 10:25:43 2020
config:

        NAME        STATE     READ WRITE CKSUM
        pbs-zpool
          sdf1      ONLINE

errors: No known data errors
</code>
===== Expanding a zpool After the Underlying Disk Has Grown =====
  * Reference - https://
  * Use cases -
    * The disk was replaced, Exp. 4TB swapped for 8TB: the new 8TB disk is attached to the zpool as a mirror of the original 4TB disk and the 4TB disk is then detached; although the pool's disk is now 8TB, the zpool still only shows the original 4TB.
    * The VDisk underneath a virtual machine was enlarged, but the zpool inside does not grow by itself.
  * Procedure, Exp. pbs-zpool @ /dev/sdd1 <cli>
zpool set autoexpand=on pbs-zpool
parted /dev/sdd
  resizepart
  1

  quit
zpool online -e pbs-zpool sdd1
df -h
</cli>
<code>
root@TP-PVE-252:/#
GNU Parted 3.2
Using /dev/sdd
Welcome to GNU Parted! Type '
(parted) resizepart
Partition number? 1
End? [8002GB]?
Error: Partition(s) 1 on /dev/sdd have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will
remain in use. You should reboot now before making further changes.
Ignore/
Ignore/
(parted) quit
Information:

root@TP-PVE-252:/#
root@TP-PVE-252:/#
Filesystem
udev
tmpfs       796M
:
:
tmpfs       796M
pbs-zpool
root@TP-PVE-252:/#
</code>
===== Adding an SSD as a ZFS Cache Disk =====
  * Reference - https://
  * Assume six 1TB SAS disks form a ZFS raidz2 pool, plus one 400GB SSD <code>
root@pve-1:
  pool: rpool

  scan: scrub repaired 0B in 0 days 00:00:10 with 0 errors on Fri Dec 3 17:24:52 2066
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool
          raidz2-0
            scsi-35000c50099b187df-part3
            scsi-35000c50095c609f7-part3
            scsi-35000c50099b18c6b-part3
            scsi-35000c50099b185e3-part3
            scsi-35000c50099b18453-part3
            scsi-35000c50099b18ebb-part3
</code>
  * Syntax to add nvme0n1 as a cache device for rpool <cli>
zpool add rpool cache nvme0n1</cli>
<code>
root@pve1:
  pool: rpool

  scan: none requested
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool
          raidz2-0
            scsi-35000c50099b187df-part3
            scsi-35000c50095c609f7-part3
            scsi-35000c50099b18c6b-part3
            scsi-35000c50099b185e3-part3
            scsi-35000c50099b18453-part3
            scsi-35000c50099b18ebb-part3
        cache
          nvme0n1
</code>
  * Check zpool usage with <code sh>zpool iostat -v</code>
<code>
root@pve1:
                                     capacity
pool                              alloc
--------------------------------
rpool                              180G  5.28T      0    243  2.62K  5.17M
  raidz2
    scsi-35000c50099b187df-part3
    scsi-35000c50095c609f7-part3
    scsi-35000c50099b18c6b-part3
    scsi-35000c50099b185e3-part3
    scsi-35000c50099b18453-part3
    scsi-35000c50099b18ebb-part3
cache
  nvme0n1
--------------------------------
</code>
===== Removing a ZFS Cache Disk =====
  * Syntax to remove the cache device sdc from the pool zfs2TB <cli>
zpool remove zfs2TB sdc
</cli>
  * Before <cli>
# zpool status zfs2TB
  pool: zfs2TB

  scan: scrub repaired 0B in 0 days 00:55:00 with 0 errors on Sun Dec 13 01:19:07 2020
config:

        NAME
        zfs2TB
          ata-WDC_WD2002FAEX-007BA0_WD-WMAY03424496
        cache
          sdc       ONLINE
</cli>
  * After <cli>
# zpool status zfs2TB
  pool: zfs2TB

  scan: scrub repaired 0B in 0 days 00:55:00 with 0 errors on Sun Dec 13 01:19:07 2020
config:

        NAME
        zfs2TB
          ata-WDC_WD2002FAEX-007BA0_WD-WMAY03424496
</cli>
===== Adding a Metadata Special Device to a zpool to Improve Read Performance =====
  * References
    - https://
    - https://
  * Metadata is the data ZFS uses to store file system information; placing it on a dedicated fast special device improves read performance for metadata-heavy workloads.
  * Syntax : zpool add <pool> special mirror <device 1> <device 2> (a special device is pool-critical - if it is lost the whole pool is lost - which is why it is added as a mirror)
  * Exp. add a mirrored special device (nvme0n1 + nvme1n1, per the status below) to pbs-zpool <cli>
zpool add pbs-zpool special mirror /
</cli>
<code>
root@h470:
  pool: pbs-zpool

  scan: scrub repaired 0B in 02:55:43 with 0 errors on Sun Nov 12 03:19:52 2023
config:

        NAME        STATE     READ WRITE CKSUM
        pbs-zpool
          sda1      ONLINE
        special
          nvme0n1
          nvme1n1
</code>
  * To understand how the IO is distributed, it can be observed with the syntax below (see the sketch that follows).