Help: Damaged disk

Status
Not open for further replies.

MateuszK
New user (Noobie)
26 December 2012
Kielce/Warszawa
QNAP TS-x51
Ethernet 10 Mbps
Hello,
Let me start by saying: my fault, mea maxima culpa.
I had 3x3 TB and bought a 4th disk. I migrated the RAID badly: instead of expanding to RAID 6, I only added capacity to the RAID 5. Without thinking much (after the array had rebuilt), I opened the page Migracja poziomów RAID i powiększanie pojemności RAID online :: Storage, Virtualization :: NAS :: QNAP and, after pulling a disk out, did not wait for the beep...
Since then the disk has been blinking red, the system flags it for replacement and reports:
"Disk access history - invalid" ("Historia dostępu do dysku - nieprawidłowy")

SMART via QNAP OS - OK

Can the disk be saved?
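
A cross-check of SMART from the shell can complement the QTS view; a minimal sketch, assuming smartctl is available over SSH on the NAS and with /dev/sdX standing in for the flagged disk:
Code:
smartctl -H /dev/sdX        # overall health self-assessment
smartctl -A /dev/sdX        # attribute table: reallocated / pending sectors, CRC errors
smartctl -l error /dev/sdX  # drive's internal ATA error log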

Below are the logs from the QNAP diagnostic tool:
Code:
Preparing to start, please wait...
---blkdevMonitor---
Countdown:10.9.8.7.6.5.4.3.2.1.
Turn on block_dump
Clean old dmesg
Start...

/dev/sda:
issuing standby command
setting standby immediately.

/dev/sdb:
issuing standby command
setting standby immediately.

/dev/sdc:
issuing standby command
setting standby immediately.

/dev/sdd:
issuing standby command
setting standby immediately.

/dev/sde:
issuing standby command
setting standby immediately.
drive state is:  active/idle
[ 1274.273917] dd(10069): dirtied inode 13286 (kmsg) on md9
[ 1274.273963] dd(10069): dirtied inode 13286 (kmsg) on md9
[ 1274.273987] dd(10069): dirtied inode 13286 (kmsg) on md9
[ 1278.880492] devRequest.cgi(10554): dirtied inode 6996 (uLinux.conf.bak) on md9
[ 1278.880634] devRequest.cgi(10554): dirtied inode 7001 (?) on md9
---counter=0---
Countdown:10.9.8.7.6.5.4.3.2.1.
/dev/sda:
issuing standby command
setting standby immediately.

/dev/sdb:
issuing standby command
setting standby immediately.

/dev/sdc:
issuing standby command
setting standby immediately.

/dev/sdd:
issuing standby command
setting standby immediately.

/dev/sde:
issuing standby command
setting standby immediately.
drive state is:  active/idle
---counter=1---
Countdown:10.9.8.7.
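
Reading the log above, the blkdevMonitor script puts each drive into standby ("issuing standby command") and turns on block_dump to show which processes dirty the block devices. The per-drive checks can be repeated by hand with hdparm; a minimal sketch, using /dev/sde as in the log (substitute the suspect drive):
Code:
hdparm -C /dev/sde    # report current power state (active/idle, standby, sleeping)
hdparm -y /dev/sde    # put the drive into standby immediately
dmesg | tail -n 20    # check for ATA/IO errors logged while the drive is touched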


Code:
#============================== [ HDD1 ]
/dev/sda3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
    Array UUID : 8f6a3a21:3eb4cd9b:ce072eda:a790cbfa
          Name : 1
  Creation Time : Tue Nov 24 10:35:40 2015
    Raid Level : raid5
  Raid Devices : 4

Avail Dev Size : 5840607176 (2785.02 GiB 2990.39 GB)
    Array Size : 8760910656 (8355.06 GiB 8971.17 GB)
  Used Dev Size : 5840607104 (2785.02 GiB 2990.39 GB)
  Super Offset : 5840607440 sectors
  Unused Space : before=0 sectors, after=336 sectors
          State : clean
    Device UUID : e1684d76:d081d889:a93d730a:f3f0d568

    Update Time : Thu Feb 18 22:15:06 2016
      Checksum : 842efdf1 - correct
        Events : 25585

        Layout : left-symmetric
    Chunk Size : 64K

  Device Role : Active device 0
  Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)

#============================== [ HDD2 ]
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
    Array UUID : 8f6a3a21:3eb4cd9b:ce072eda:a790cbfa
          Name : 1
  Creation Time : Tue Nov 24 10:35:40 2015
    Raid Level : raid5
  Raid Devices : 4

Avail Dev Size : 5840623240 (2785.03 GiB 2990.40 GB)
    Array Size : 8760910656 (8355.06 GiB 8971.17 GB)
  Used Dev Size : 5840607104 (2785.02 GiB 2990.39 GB)
  Super Offset : 5840623504 sectors
  Unused Space : before=0 sectors, after=16400 sectors
          State : clean
    Device UUID : 37b323b4:b4ad86a5:f80ab11d:d92b48a0

    Update Time : Thu Feb 18 22:15:06 2016
      Checksum : 2863f9e3 - correct
        Events : 25585

        Layout : left-symmetric
    Chunk Size : 64K

  Device Role : Active device 2
  Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)

#============================== [ HDD3 ]
/dev/sdc3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
    Array UUID : 8f6a3a21:3eb4cd9b:ce072eda:a790cbfa
          Name : 1
  Creation Time : Tue Nov 24 10:35:40 2015
    Raid Level : raid5
  Raid Devices : 4

Avail Dev Size : 5840623240 (2785.03 GiB 2990.40 GB)
    Array Size : 8760910656 (8355.06 GiB 8971.17 GB)
  Used Dev Size : 5840607104 (2785.02 GiB 2990.39 GB)
  Super Offset : 5840623504 sectors
  Unused Space : before=0 sectors, after=16400 sectors
          State : clean
    Device UUID : 31b6b49b:ed0d6ece:5c933b52:915853c3

    Update Time : Thu Feb 18 22:15:06 2016
      Checksum : 90721231 - correct
        Events : 25585

        Layout : left-symmetric
    Chunk Size : 64K

  Device Role : Active device 1
  Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)

#============================== [ HDD4 ]
/dev/sdd3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
    Array UUID : 8f6a3a21:3eb4cd9b:ce072eda:a790cbfa
          Name : 1
  Creation Time : Tue Nov 24 10:35:40 2015
    Raid Level : raid5
  Raid Devices : 4

Avail Dev Size : 5840623240 (2785.03 GiB 2990.40 GB)
    Array Size : 8760910656 (8355.06 GiB 8971.17 GB)
  Used Dev Size : 5840607104 (2785.02 GiB 2990.39 GB)
  Super Offset : 5840623504 sectors
  Unused Space : before=0 sectors, after=16392 sectors
          State : clean
    Device UUID : 27af40bf:fdaeabac:51cd0599:eb976718

    Update Time : Thu Feb 18 21:07:43 2016
  Bad Block Log : 512 entries available at offset -8 sectors
      Checksum : 2e271286 - correct
        Events : 24770

        Layout : left-symmetric
    Chunk Size : 64K

  Device Role : Active device 3
  Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

#============================== [ HDD5 ]
mdadm: No md superblock detected on /dev/sde3.
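
The dumps above are md superblock reads of the sdX3 data partitions: sda3, sdb3 and sdc3 agree on Events 25585 with Array State "AAA." (member 3 missing), sdd3 stopped at Events 24770 while still recording "AAAA", and sde3 carries no md superblock at all. They can be re-read at any time from the shell; a minimal sketch, assuming SSH access:
Code:
# Dump the key superblock fields of every data partition of md1
for d in /dev/sd[a-e]3; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Array UUID|Events|Array State|Device Role'
done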

Code:
Model : TS-451
Firmware : 4.2.1 (20160201)
NAS :

==========[ BAY 1, WDCWD30EFRX-68EUZN02861587, ]

==========[ BAY 2, WDCWD30EFRX-68EUZN02861588, ]

==========[ BAY 3, WDCWD30EFRX-68EUZN02861588, ]

==========[ BAY 4, SeagateST3000VN000-1HJ1662861588, ]

==========[ BAY 5, ô~÷˜Máöii8çĺözăăöii˜˘ĺöPL»˙[}÷Y=ćöpăăöÔ·                            - WTF?
÷Ä·
÷ kÜö492, ]
Open device fail

Code:
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid5 sda3[0] sdb3[2] sdc3[1]
      8760910656 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
   
md256 : active raid1 sdd2[3](S) sdc2[2](S) sdb2[1] sda2[0]
      530112 blocks super 1.0 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md13 : active raid1 sdb4[25] sdc4[24] sda4[0] sdd4[26]
      458880 blocks super 1.0 [24/4] [UUUU____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md9 : active raid1 sdb1[25] sdc1[24] sda1[0] sdd1[26]
      530048 blocks super 1.0 [24/4] [UUUU____________________]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
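
In the /proc/mdstat output above, md1 is the data array: "[4/3] [UUU_]" means the RAID 5 is running degraded on 3 of its 4 members, while md9, md13 and md256 are the small system mirrors. A more detailed per-member view can be pulled with mdadm, assuming shell access to the NAS:
Code:
cat /proc/mdstat            # [UUU_] = one RAID 5 member missing from md1
mdadm --detail /dev/md1     # per-member state, degraded flag and rebuild progress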
 
Suggestion:
(I assume that without this disk the RAID is degraded) - it is missing 1 HDD.
If so... format the disk under Windows and put it back into the NAS.
The RAID will rebuild (you can follow the rebuild as sketched below).
Then you can think about how to migrate from RAID 5 to RAID 6, if that is possible:
What can’t I migration RAID 5 to RAID 6? :: System & Disk Volume Management ::NAS :: QNAP
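
If the wiped disk is re-inserted and QTS starts the recovery on its own, the progress can also be followed from the shell; a minimal sketch, assuming busybox watch is available (otherwise just re-run the cat by hand):
Code:
# A "recovery = ...%" line should appear under md1 once the rebuild starts
watch -n 10 cat /proc/mdstat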

BTW: garbage like that usually shows up when the system cannot recognize the disk.
If it does not work under Windows either... return the HDD under warranty :)
Can it be saved?
What solution was applied in the end?
No response from the author, so I am treating the topic as resolved and closing it.
 