Help: Rebuilding a RAID5 array on a TS-332X - corrected post

Raf
New user · joined 6 February 2017 · Warszawa
Good evening (I hope)

NAS Model: TS-332X
Firmware: 5.0.0 Build 20211020

Can anyone tell me what is actually going on here, and where my array went?

After one of the disks in a 3× 8 TB RAID5 failed, I replaced it (offline) with a new, larger 12 TB drive.
Since yesterday the array has theoretically been rebuilding, but subjectively the HDD activity seems too quiet and too sparse, and I see no results.
A few minutes after powering on, the QNAP stops responding to ping, and Qfinder no longer sees it.
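In this situation the first step is to collect read-only diagnostics over SSH before touching the array. A minimal sketch (a generic status sweep, not a QNAP-specific procedure; the fallback branch is only there so the loop degrades gracefully on a host where a file is missing):

```shell
# Read-only status collection; nothing here writes to the disks.
report=""
for src in /proc/mdstat /proc/partitions; do
    if [ -r "$src" ]; then
        report="$report== $src ==
$(cat "$src")
"
    else
        report="$report== $src: not readable on this host ==
"
    fi
done
printf '%s' "$report"
```

The same idea extends to `dmesg | tail` for recent disk errors; everything above is safe to run on a degraded array.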

Code:
HDD Information:
HDD1 - Model=Micron 1100 SATA 256GB , FwRev= M0DL003, SerialNo= 1711163A7678
Model: Micron 1100 SATA 256GB (scsi)
Disk /dev/sda: 256GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB primary
3 1086MB 247GB 246GB primary
4 247GB 247GB 543MB ext3 primary
5 247GB 256GB 8554MB linux-swap(v1) primary

HDD2 - Model=Micron 1100 SATA 256GB , FwRev= M0DL020, SerialNo= 174819E52571
Model: Micron 1100 SATA 256GB (scsi)
Disk /dev/sdb: 256GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB primary
3 1086MB 247GB 246GB primary
4 247GB 247GB 543MB ext3 primary
5 247GB 256GB 8554MB linux-swap(v1) primary

HDD3 - Model=ST8000NM000A-2KE101 , FwRev=SN02 , SerialNo= WKD2GQZ3
Model: Seagate ST8000NM000A-2KE (scsi)
Disk /dev/sdc: 8002GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 7992GB 7991GB primary
4 7992GB 7993GB 543MB ext3 primary
5 7993GB 8002GB 8554MB linux-swap(v1) primary

Open device fail

HDD4 - Model=WDC WD121KRYZ-01W0RB0 , FwRev=01.01H01, SerialNo=5QGXE13F
Model: WDC WD121KRYZ-01W0RB (scsi)
Disk /dev/sdd: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 12.0TB 12.0TB primary
4 12.0TB 12.0TB 543MB ext3 primary
5 12.0TB 12.0TB 8554MB linux-swap(v1) primary

HDD5 - Model=ST8000NM0055-1RM112 , FwRev=SN04 , SerialNo= ZA1B1WVC
Model: Seagate ST8000NM0055-1RM (scsi)
Disk /dev/sde: 8002GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 7992GB 7991GB primary
4 7992GB 7993GB 543MB ext3 primary
5 7993GB 8002GB 8554MB linux-swap(v1) primary

That "Open device fail" scared me, so parted...

Code:
[~] # parted
GNU Parted 3.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) select /dev/sdc
select /dev/sdc
Using /dev/sdc
(parted) print
print
Model: Seagate ST8000NM000A-2KE (scsi)
Disk /dev/sdc: 8002GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 7992GB 7991GB primary
4 7992GB 7993GB 543MB ext3 primary
5 7993GB 8002GB 8554MB linux-swap(v1) primary

(parted) select /dev/sdd
select /dev/sdd
Using /dev/sdd
(parted) print
print
Model: WDC WD121KRYZ-01W0RB (scsi)
Disk /dev/sdd: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 12.0TB 12.0TB primary
4 12.0TB 12.0TB 543MB ext3 primary
5 12.0TB 12.0TB 8554MB linux-swap(v1) primary

(parted) select /dev/sde
select /dev/sde
Using /dev/sde
(parted) print
print
Model: Seagate ST8000NM0055-1RM (scsi)
Disk /dev/sde: 8002GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 20.5kB 543MB 543MB ext3 primary
2 543MB 1086MB 543MB linux-swap(v1) primary
3 1086MB 7992GB 7991GB primary
4 7992GB 7993GB 543MB ext3 primary
5 7993GB 8002GB 8554MB linux-swap(v1) primary

(parted) q

/dev/md1:
Version : 1.0
Creation Time : Sat Oct 24 19:27:29 2020
Raid Level : raid1
Array Size : 204084224 (194.63 GiB 208.98 GB)
Used Dev Size : 204084224 (194.63 GiB 208.98 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Fri Nov 12 22:08:13 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : 1
UUID : e096339e:1f271279:9a9f71ba:3a8a0864
Events : 203670

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
2 8 19 1 active sync /dev/sdb3

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md322 : active raid1 sde5[3](S) sdd5[2] sdc5[0]
7235136 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md256 : active raid1 sde2[3](S) sdd2[2] sdc2[0]
530112 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md1 : active raid1 sda3[0] sdb3[2]
204084224 blocks super 1.0 [2/2] [UU]

md321 : active raid1 sdb5[2] sda5[0]
8283712 blocks super 1.0 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sde4[67] sdd4[66] sdc4[65] sdb4[64] sda4[0]
458880 blocks super 1.0 [64/5] [UUUUU___________________________________________________________]
bitmap: 1/1 pages [64KB], 65536KB chunk

md9 : active raid1 sde1[65] sdd1[66] sdc1[64] sdb1[67] sda1[0]
530048 blocks super 1.0 [64/5] [UUUUU___________________________________________________________]
bitmap: 1/1 pages [64KB], 65536KB chunk

unused devices: <none>

Disk Space:

Filesystem Size Used Available Use% Mounted on
none 153.0M 116.8M 36.2M 76% /
devtmpfs 3.9G 128.0K 3.9G 0% /dev
tmpfs 64.0M 4.4M 59.6M 7% /tmp
tmpfs 3.9G 3.1M 3.9G 0% /dev/shm
tmpfs 16.0M 0 16.0M 0% /share
tmpfs 16.0M 0 16.0M 0% /mnt/snapshot/export
/dev/md9 493.5M 366.2M 127.3M 74% /mnt/HDA_ROOT
cgroup_root 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/md13 417.0M 360.0M 57.0M 86% /mnt/ext
tmpfs 32.0M 27.3M 4.8M 85% /samba_third_party
/dev/ram2 193.7M 1.5M 192.2M 1% /mnt/update
tmpfs 64.0M 35.8M 28.2M 56% /samba
tmpfs 16.0M 0 16.0M 0% /share/NFSv=4


Mount Status:

none on /new_root type tmpfs (rw,mode=0755,size=696320k)
/proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw)
tmpfs on /tmp type tmpfs (rw,size=64M)
tmpfs on /dev/shm type tmpfs (rw)
tmpfs on /share type tmpfs (rw,size=16M)
tmpfs on /mnt/snapshot/export type tmpfs (ro)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
cgroup_root on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/cgroup/memory type cgroup (rw,memory)
cpu on /sys/fs/cgroup/cpu type cgroup (rw,cpu)
/dev/md13 on /mnt/ext type ext4 (rw,data=ordered,barrier=1,nodelalloc)
tmpfs on /samba_third_party type tmpfs (rw,size=32M)
/dev/ram2 on /mnt/update type ext2 (rw)
tmpfs on /samba type tmpfs (rw,size=64M)
nfsd on /proc/fs/nfsd type nfsd (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
tmpfs on /share/NFSv=4 type tmpfs (rw,size=16M)


Windows Shares:

Web:/share/CACHEDEV1_DATA/Web
Public:/share/CACHEDEV1_DATA/Public
homes:/share/CACHEDEV1_DATA/homes
foto:/share/CACHEDEV3_DATA/foto
music:/share/CACHEDEV2_DATA/music
concerts:/share/CACHEDEV2_DATA/concerts
family:/share/CACHEDEV3_DATA/family
other:/share/CACHEDEV2_DATA/other
backup:/share/CACHEDEV1_DATA/backup
software:/share/CACHEDEV1_DATA/software
kids:/share/CACHEDEV4_DATA/kids
Multimedia:/share/CACHEDEV1_DATA/Multimedia
Download:/share/CACHEDEV1_DATA/Download
tv:/share/CACHEDEV2_DATA/tv
PlexData:/share/CACHEDEV1_DATA/PlexData
filmy:/share/CACHEDEV2_DATA/filmy
encryption_test:/share/CACHEDEV2_DATA/encryption_test

md_checker says:

Code:
[~] # md_checker

Welcome to MD superblock checker (v2.0) - have a nice day~

Scanning system...


RAID metadata found!
UUID: e096339e:1f271279:9a9f71ba:3a8a0864
Level: raid1
Devices: 2
Name: md1
Chunk Size: -
md Version: 1.0
Creation Time: Oct 24 19:27:29 2020
Status: ONLINE (md1) [UU]
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 1 /dev/sda3 0 Active Nov 12 22:08:13 2021 203670 AA
NAS_HOST 2 /dev/sdb3 1 Active Nov 12 22:08:13 2021 203670 AA
===============================================================================================


RAID metadata found!
UUID: 720dd8a7:f031df5d:0ce3c888:35c7407b
Level: raid5
Devices: 3
Name: md2
Chunk Size: 512K
md Version: 1.0
Creation Time: Oct 24 19:28:43 2020
Status: OFFLINE
===============================================================================================
Enclosure | Port | Block Dev Name | # | Status | Last Update Time | Events | Array State
===============================================================================================
NAS_HOST 4 /dev/sdc3 0 Active Nov 12 08:28:13 2021 2454 AAA
NAS_HOST 5 /dev/sde3 1 Active Nov 12 08:28:13 2021 2454 AAA
NAS_HOST 6 /dev/sdd3 2 Rebuild Nov 12 08:28:13 2021 2454 AAA
===============================================================================================
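md_checker reports identical Events counters (2454) and the same AAA array state on all three members, which is what makes a manual assembly reasonable: mdadm only accepts members whose event history agrees. A hedged way to double-check this straight from the superblocks (the device names /dev/sd[cde]3 are the ones from this thread; the fallback branch keeps the loop harmless on any other machine):

```shell
# Compare md superblock event counters across the RAID5 members.
summary=""
for d in /dev/sdc3 /dev/sdd3 /dev/sde3; do
    if [ -b "$d" ]; then
        # --examine only reads metadata; it does not modify the member.
        summary="$summary$d: $(mdadm --examine "$d" | grep -E '^ *Events')
"
    else
        summary="$summary$d: not present on this host
"
    fi
done
printf '%s' "$summary"
```

If one member lagged behind on Events, forcing it back in would risk serving stale data, which is why checking first matters.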

Unfortunately, though:
Code:
[~] # mdadm --assemble --scan
mdadm: No arrays found in config file
 
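`--assemble --scan` only assembles arrays listed in mdadm.conf, and QTS does not maintain ARRAY lines there, hence the "No arrays found in config file" message. One can instead ask the on-disk superblocks what they advertise (read-only; guarded so the snippet is harmless on a machine without mdadm or without any arrays):

```shell
# List ARRAY lines derived from on-disk superblocks, not the config file.
if command -v mdadm >/dev/null 2>&1; then
    scan_out=$(mdadm --examine --scan 2>/dev/null)
else
    scan_out="mdadm not installed on this host"
fi
printf '%s\n' "${scan_out:-(no md superblocks visible)}"
```

On the NAS this should print an ARRAY line with the raid5 UUID shown by md_checker, which is exactly the information the manual assemble below supplies by hand.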
OK, this helped:
Code:
[~] # mdadm --assemble  /dev/md2 /dev/sdc3 /dev/sde3 /dev/sdd3
mdadm: /dev/md2 has been started with 2 drives (out of 3) and 1 rebuilding.

Now it looks like this:
Code:
RAID metadata found!
UUID:           720dd8a7:f031df5d:0ce3c888:35c7407b
Level:          raid5
Devices:        3
Name:           md2
Chunk Size:     512K
md Version:     1.0
Creation Time:  Oct 24 19:28:43 2020
Status:         ONLINE (md2) [UU_]
===============================================================================================
 Enclosure | Port | Block Dev Name | # | Status |   Last Update Time   | Events | Array State
===============================================================================================
 NAS_HOST       4        /dev/sdc3   0   Active   Nov 12 23:09:03 2021     2456   AAA
 NAS_HOST       5        /dev/sde3   1   Active   Nov 12 23:09:03 2021     2456   AAA
 NAS_HOST       6        /dev/sdd3   2  Rebuild   Nov 12 23:09:03 2021     2456   AAA
===============================================================================================

And it looks like it should be done in about 11 hours:
Code:
[~] # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md2 : active raid5 sdc3[0] sdd3[3] sde3[1]
      15608142848 blocks super 1.0 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [===>.................]  recovery = 18.7% (1465850240/7804071424) finish=653.2min speed=161702K/sec
 
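The finish=653.2min estimate in /proc/mdstat is simply remaining work divided by the current speed, so it can be sanity-checked by hand with the numbers from the recovery line above (mdstat counts in 1K blocks):

```shell
# ETA check: (total - done) 1K blocks / speed in K/sec -> minutes.
done_blocks=1465850240     # from "(1465850240/7804071424)"
total_blocks=7804071424
speed_kps=161702           # from "speed=161702K/sec"
remaining=$((total_blocks - done_blocks))
eta_min=$((remaining / speed_kps / 60))
echo "${eta_min} min"      # matches mdstat's finish=653.2min, i.e. ~11 hours
```

The speed itself can be tuned via /proc/sys/dev/raid/speed_limit_min and speed_limit_max, though on a busy NAS the defaults are usually a sensible trade-off.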
