Help: TS-464 - re-attaching old storage pools after replacing the system disk

plonking

Hello

The patient: a TS-464 (4x SATA + 2x M.2).
The system and all applications are installed on the M.2 drives.
The SATA disks hold 2 pools:

1. RAID-0, 2x 3 TB
2. RAID-0, 2x 10 TB

(Yes, RAID-0. They are for specific tasks: making a 'quick' backup of hundreds of GB, quick triage, the good material pushed to another NAS, the rest binned.)

Everything worked fine, but recently the M.2 system disk suddenly died. It started reporting SMART errors and shortly afterwards failed completely, to the point where the NAS could no longer see it at all.

I still managed to save the NAS configuration, but none of the files; the disk was completely dead.

I replaced it with a new M.2 disk, DISCONNECTED all the SATA disks, and REINITIALIZED the NAS.

Everything went fine, and I created a pool on that single disk.

I shut the NAS down, reconnected all 4 disks (yes, in the CORRECT ORDER), powered it back on and... nothing happened; no automatic rebuild/re-attachment of the pools was triggered.

The disks themselves are detected correctly:

[screenshot]


But under storage pools there is nothing:

[screenshot]



When I try to add them using the "Attach and recover storage pool" command, I get this response:


[screenshot]


These pools are 'most likely' plain, thick-provisioned ones occupying the full (not dynamically allocated) capacity, with no snapshots.

They were pools 2 and 3 (pool 1 is the system one).

How do I attach these pools to the freshly reinitialized NAS?
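
I do have SSH access, so I can run read-only checks; a minimal sketch of what I was planning to look at first (assuming the mdadm that ships with QTS; the partition names below are just examples):

Code:
# Read-only: dump the md superblock of one data partition
mdadm --examine /dev/sda3
# Read-only: list every array described by the superblocks found
mdadm --examine --scan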
 
Hello

Well, things are 'better' now; one pool has appeared.

Code:
[~] # /etc/init.d/init_lvm.sh
Changing old config name...
Reinitialing...
Detect disk(259, 1)...
dev_count ++ = 0
Detect disk(8, 48)...
dev_count ++ = 1
Detect disk(8, 16)...
dev_count ++ = 2
Detect disk(252, 0)...
ignore non-root enclosure disk(252, 0).
Detect disk(8, 32)...
dev_count ++ = 3
Detect disk(259, 0)...
dev_count ++ = 4
Detect disk(8, 0)...
dev_count ++ = 5
Detect disk(259, 1)...
Detect disk(8, 48)...
Detect disk(8, 16)...
Detect disk(252, 0)...
ignore non-root enclosure disk(252, 0).
Detect disk(8, 32)...
Detect disk(259, 0)...
Detect disk(8, 0)...
sys_startup_p2:got called count = -1
Done


The pool appeared, but it is "Unmounted":

[screenshot]


Po "Sprawdź wszystkie" status się nie zmienił.
Pojemność się raczej zgadza ...

[screenshot]


These are the actions I can perform on it (but I haven't clicked anything yet):

[screenshot]


In the Disks view, the entries that appeared in the new pool changed their type to "Data"; the second pool still does not show up...


[screenshot]


Here is also the output of "cat /proc/mdstat":

Code:
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md3 : active raid0 sda3[0] sdb3[1]
      19512966144 blocks super 1.0 512k chunks

md1 : active raid1 nvme1n1p3[0]
      918467584 blocks super 1.0 [1/1] [U]

md2 : active raid1 nvme0n1p3[0]
      228094976 blocks super 1.0 [1/1] [U]

md322 : active raid1 sdb5[3](S) sda5[2](S) sdd5[1] sdc5[0]
      6702656 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sdb2[3](S) sda2[2](S) sdd2[1] sdc2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md321 : active raid1 nvme0n1p5[2] nvme1n1p5[0]
      6702656 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdc4[131] sdd4[132] nvme1n1p4[0] sda4[129] sdb4[130] nvme0n1p4[128]
      458880 blocks super 1.0 [128/6] [UUUUUU__________________________________________________________________________________________________________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdc1[131] sdd1[132] nvme1n1p1[0] sda1[129] sdb1[130] nvme0n1p1[128]
      530048 blocks super 1.0 [128/6] [UUUUUU__________________________________________________________________________________________________________________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
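
So mdstat already shows an md3 assembled from sda3 and sdb3 (the roughly 20 TB RAID-0, i.e. the 2x10TB pool). A read-only sketch to confirm its members and state:

Code:
# Read-only: show the members and state of the already-assembled array
mdadm --detail /dev/md3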

and the output of "md_checker":

Code:
[~] # md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...


RAID metadata found!
UUID:           50aa363d:3e5545ea:16b8e1b0:84930ec0
Level:          raid1
Devices:        1
Name:           md1
Chunk Size:     -
md Version:     1.0
Creation Time:  Dec 13 21:01:11 2025
Status:         ONLINE (md1) [U]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       1   /dev/nvme1n1p3   0   Active   Dec 14 11:19:42 2025        8   A
===============================================================================================


RAID metadata found!
UUID:           5af766d4:19eb6494:27e44120:0c3530a0
Level:          raid0
Devices:        2
Name:           md3
Chunk Size:     512K
md Version:     1.0
Creation Time:  Nov 16 00:31:15 2025
Status:         OFFLINE
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       3        /dev/sdc3   0   Active   Nov 16 00:31:15 2025        0   AA
 NAS_HOST       4        /dev/sdd3   1   Active   Nov 16 00:31:15 2025        0   AA
===============================================================================================


RAID metadata found!
UUID:           48731014:28c60a52:1e8b92b4:0205ee8a
Level:          raid0
Devices:        2
Name:           md3
Chunk Size:     512K
md Version:     1.0
Creation Time:  Aug 12 13:15:58 2023
Status:         ONLINE (md3) raid0
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       5        /dev/sda3   0   Active   Aug 12 13:15:58 2023        0   AA
 NAS_HOST       6        /dev/sdb3   1   Active   Aug 12 13:15:58 2023        0   AA
===============================================================================================


RAID metadata found!
UUID:           30988257:fae8f2a8:6b4cdccd:50c3850f
Level:          raid1
Devices:        1
Name:           md2
Chunk Size:     -
md Version:     1.0
Creation Time:  Dec 13 21:22:37 2025
Status:         ONLINE (md2) [U]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST    P1-1   /dev/nvme0n1p3   0   Active   Dec 14 11:19:42 2025        2   A
===============================================================================================
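
If I read this right, two arrays both carry the internal name md3: the 2023 one on sda3+sdb3 is ONLINE as /dev/md3, while the sdc3+sdd3 one is OFFLINE and cannot come up under the same name. A minimal sketch of assembling the offline one by hand under a free md node (assuming its members really are /dev/sdc3 and /dev/sdd3 as listed above; I have not run this yet):

Code:
# Assemble the OFFLINE array under an unused md device;
# --run starts it even though auto-assembly skipped it
mdadm --assemble --run /dev/md4 /dev/sdc3 /dev/sdd3
cat /proc/mdstat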

What can I do next?

Regards
 
Hello

No result...

I noticed that when the pool check is triggered after running that command, it stalls at around 25%...
[screenshot]


The system logs show only this much:

[screenshot]


but in another log, "/var/log/storage_lib.log", I found the following... there are quite a few of these 'failed!' entries:


Code:
2025-12-14 13:06:10 [30256 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:06:10 [30256 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:06:10 [30256 manaRequest.cgi] md_get_status: /dev/md3 : status=0, progress=100.000000.
2025-12-14 13:06:10 [30256 manaRequest.cgi] Blk_Dev_Generate_Mount_Point: mount point for "/dev/mapper/cachedev3" is "/share/CACHEDEV3_DATA", is_internal is 1.
2025-12-14 13:06:10 [30256 manaRequest.cgi] md_get_status: /dev/md2 : status=0, progress=100.000000.
2025-12-14 13:06:10 [30256 manaRequest.cgi] Blk_Dev_Generate_Mount_Point: mount point for "/dev/mapper/vg256-lv256" is "/share/VG256-LV256_DATA", is_internal is 1.
2025-12-14 13:06:11 [30256 manaRequest.cgi] Blk_Dev_Generate_Mount_Point: mount point for "/dev/sdc3" is "/share/5000CCA248DBCE1D_DATA", is_internal is 1.
2025-12-14 13:06:11 [30256 manaRequest.cgi] Blk_Dev_Generate_Mount_Point: mount point for "/dev/sdd3" is "/share/5000CCA22CCA8A0F_DATA", is_internal is 1.
2025-12-14 13:06:11 [30256 manaRequest.cgi] md_get_status: /dev/md1 : status=0, progress=100.000000.
2025-12-14 13:06:11 [30256 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:06:11 [30256 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:06:11 [30256 manaRequest.cgi] md_get_status: /dev/md3 : status=0, progress=100.000000.
2025-12-14 13:06:11 [30256 manaRequest.cgi] md_get_status: /dev/md1 : status=0, progress=100.000000.
2025-12-14 13:06:11 [30256 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:06:11 [30256 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:06:11 [30256 manaRequest.cgi] md_get_status: /dev/md3 : status=0, progress=100.000000.
2025-12-14 13:06:11 [30256 manaRequest.cgi] Is_Default_Volume is called for Volume(1)...
2025-12-14 13:06:11 [30256 manaRequest.cgi] Blk_Dev_Generate_Mount_Point: mount point for "/dev/mapper/cachedev1" is "/share/CACHEDEV1_DATA", is_internal is 1.
2025-12-14 13:06:11 [30256 manaRequest.cgi] Is_Default_Volume:Volume(1):is default volume!!
2025-12-14 13:06:11 [30256 manaRequest.cgi] Is_Default_Volume is called for Volume(3)...
2025-12-14 13:06:11 [30256 manaRequest.cgi] Blk_Dev_Generate_Mount_Point: mount point for "/dev/mapper/vg256-lv256" is "/share/VG256-LV256_DATA", is_internal is 1.
2025-12-14 13:06:11 [30256 manaRequest.cgi] Is_Default_Volume:Volume(3):is NOT default volume!!

This log gets overwritten quickly, but when I grepped it for 'fail', this is what I get... and it keeps repeating...


Code:
[/var/log] # tail -f storage_lib.log | grep fail
2025-12-14 13:14:26 [15015 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:26 [15015 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:14:26 [15015 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:14:30 [ 2864 qsnapman-recyc ] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:39 [15873 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:14:39 [15873 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:14:42 [15956 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:42 [15956 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:42 [15956 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:42 [15956 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:42 [15956 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:42 [15956 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:42 [15956 disk_manage.cgi] get_command_result_pipe: Perform cmd "/sbin/dmsetup status CG0ssddev 2>>/dev/null | /bin/grep Switching | /bin/awk '{print $2}' 2>>/dev/null" failed, reason code=0.
2025-12-14 13:14:52 [16840 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:14:52 [16840 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:14:53 [16840 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:14:53 [16840 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:14:53 [16840 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:14:53 [16840 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:14:57 [17348 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:57 [17348 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:57 [17348 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:57 [17348 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:57 [17348 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:57 [17348 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:14:58 [17348 disk_manage.cgi] get_command_result_pipe: Perform cmd "/sbin/dmsetup status CG0ssddev 2>>/dev/null | /bin/grep Switching | /bin/awk '{print $2}' 2>>/dev/null" failed, reason code=0.
2025-12-14 13:15:00 [ 2864 qsnapman-recyc ] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:02 [17463 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:03 [17463 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:03 [17463 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:03 [17463 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:04 [17463 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:04 [17463 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:04 [17463 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:07 [18586 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:07 [18586 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:07 [18586 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/usr/local/sbin/blkid_64 -s UUID -o value '/dev/mapper/cachedev3' 2>>/dev/null" failed, reason code=2.
2025-12-14 13:15:07 [18586 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:07 [18586 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:07 [18586 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:10 [18745 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:10 [18745 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:23 [19944 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:23 [19944 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:23 [19944 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:23 [19944 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:24 [19944 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:24 [19944 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:30 [ 2864 qsnapman-recyc ] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:41 [21165 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:41 [21165 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:44 [21459 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:44 [21459 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:44 [21459 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:44 [21459 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:44 [21459 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:44 [21459 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:45 [21459 disk_manage.cgi] get_command_result_pipe: Perform cmd "/sbin/dmsetup status CG0ssddev 2>>/dev/null | /bin/grep Switching | /bin/awk '{print $2}' 2>>/dev/null" failed, reason code=0.
2025-12-14 13:15:47 [21865 chartReq.cgi   ] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:47 [21865 chartReq.cgi   ] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:53 [22307 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:53 [22307 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:54 [22307 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:54 [22307 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:54 [22307 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:54 [22307 manaRequest.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:15:55 [22751 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:56 [22751 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:56 [22751 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:56 [22751 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg1-tp1_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:56 [22751 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:56 [22751 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info /dev/mapper/vg2-tp2_tierdata_1_fcorig &>/dev/null 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:56 [22751 disk_manage.cgi] get_command_result_pipe: Perform cmd "/sbin/dmsetup status CG0ssddev 2>>/dev/null | /bin/grep Switching | /bin/awk '{print $2}' 2>>/dev/null" failed, reason code=0.
2025-12-14 13:15:57 [22847 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:57 [22847 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:57 [22847 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:57 [22847 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:58 [22847 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:15:58 [22847 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:15:58 [22847 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:16:00 [23254 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:16:00 [23254 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!
2025-12-14 13:16:00 [23254 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/usr/local/sbin/blkid_64 -s UUID -o value '/dev/mapper/cachedev3' 2>>/dev/null" failed, reason code=2.
2025-12-14 13:16:00 [23254 disk_manage.cgi] Get_Command_Result_Exec: Perform cmd "/sbin/dmsetup info --noheadings --columns -o minor vg2-tp2-tpool 2>>/dev/null" failed, reason code=1.
2025-12-14 13:16:00 [23254 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/degraded" failed!
2025-12-14 13:16:00 [23254 disk_manage.cgi] get_md_string: Execute "/sys/block/md3/md/sync_completed" failed!

The second pool is still not visible in the GUI, although md_checker at least can see it...
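
The repeated dmsetup failures above suggest the device-mapper targets for the second pool (vg2-tp2-tpool) were never created. Assuming the LVM2 userspace tools are usable from the QTS shell, a read-only sketch to check whether LVM recognizes the second volume group at all:

Code:
# Read-only: list physical volumes, volume groups and logical volumes
pvs
vgs
lvs -a
# List the device-mapper targets that actually exist
dmsetup ls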

Regards