Help: TS-412U RAID 5 unmounted, how to remount it without data loss

WD40

Hello,
I have a problem with a QNAP TS-412U at one of our company's locations. I was told that it is impossible to connect over SMB (its primary job). After logging in through the browser, under volume management in the control panel I see: "RAID 5 Disk Volume: 2 3 4 unmounted". I spent almost all of yesterday googling, with little success. The RAID is (was?) built from three 1 TB WD drives. Below is the output from mdadm.
/dev/md0:
Version : 01.00.03
Creation Time : Sat Nov 7 23:43:07 2015
Raid Level : raid5
Array Size : 1950387072 (1860.03 GiB 1997.20 GB)
Used Dev Size : 975193536 (930.02 GiB 998.60 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Mon Nov 16 15:44:57 2015
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

Name : 0
UUID : 9969659b:a658154e:bedd5826:52106fae
Events : 3

Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
1 8 35 1 active sync /dev/sdc3
2 8 51 2 active sync /dev/sdd3
[~] # fdisk -l /dev/sda # - disk 1
+ fdisk -l /dev/sda
[~] # fdisk -l /dev/sdb # - disk 2
+ fdisk -l /dev/sdb

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 66 530125 83 Linux
/dev/sdb2 67 132 530142 83 Linux
/dev/sdb3 133 121538 975193693 83 Linux
/dev/sdb4 121539 121600 498012 83 Linux
[~] # fdisk -l /dev/sdc # - disk 3
+ fdisk -l /dev/sdc

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 66 530125 83 Linux
/dev/sdc2 67 132 530142 83 Linux
/dev/sdc3 133 121538 975193693 83 Linux
/dev/sdc4 121539 121600 498012 83 Linux
[~] # fdisk -l /dev/sdd # - disk 4
+ fdisk -l /dev/sdd

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 66 530125 83 Linux
/dev/sdd2 67 132 530142 83 Linux
/dev/sdd3 133 121538 975193693 83 Linux
/dev/sdd4 121539 121600 498012 83 Linux
[~] # fdisk -l /dev/sde # - disk 5
+ fdisk -l /dev/sde
does any of you have an idea how to fix this? I have very important data on it and need to recover it somehow.
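For reference, the kernel's own view of the array can be captured before anything is changed; a minimal sketch using standard md tooling (nothing QNAP-specific assumed):
Bash:
# one-line summary of every md array the kernel knows about
cat /proc/mdstat

# full detail for the data array: state, member devices, event counter
mdadm --detail /dev/md0

Matching Events counters across all members (here: 3 on each) bode well for a clean reassembly later.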
 
ITCrowd.gif


seriously - sometimes it really is that simple!
 
do this:
Bash:
mount
then
Bash:
mount -a

and check the result
Bash:
mount

unfortunately, nothing...
[~] # mount
/proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=32M)
none on /proc/bus/usb type usbfs (rw)
/dev/sda4 on /mnt/ext type ext3 (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
tmpfs on /mnt/rf/nd type tmpfs (rw,size=1m)
[~] # mount -a
[~] # mount
/proc on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
sysfs on /sys type sysfs (rw)
tmpfs on /tmp type tmpfs (rw,size=32M)
none on /proc/bus/usb type usbfs (rw)
/dev/sda4 on /mnt/ext type ext3 (rw)
/dev/md9 on /mnt/HDA_ROOT type ext3 (rw,data=ordered)
tmpfs on /mnt/rf/nd type tmpfs (rw,size=1m)
 
Bash:
mkdir /qtest/;  mount /dev/md0 /qtest &&  ls -la /qtest/

if the folders you had there show up, it means the QNAP simply isn't auto-mounting the RAID.

that doesn't bode well:
[~] # mkdir /qtest/
[~] # mount /dev/md0 /qtest && ls -la /qtest/
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
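As the error text itself hints, the kernel normally logs the concrete reason for a failed mount; a quick check (plain Linux commands, no extra assumptions):
Bash:
# the ext3/ext4 driver logs here why it rejected the superblock
dmesg | tail -n 20
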
 
superblock gone? ehhh

paste the results to the forum:
Bash:
mdadm --stop /dev/md0           # stop the stale array first
mdadm --examine /dev/sd[bcd]3   # dump each member's RAID superblock
mdadm --assemble --scan         # let mdadm reassemble from those superblocks

@Silas Mariusz - is there a workaround for this?
[~] # mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[~] # mdadm --examine /dev/sd[bcd]3
/dev/sdb3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 9969659b:a658154e:bedd5826:52106fae
Name : 0
Creation Time : Sat Nov 7 23:43:07 2015
Raid Level : raid5
Raid Devices : 3

Used Dev Size : 1950387112 (930.02 GiB 998.60 GB)
Array Size : 3900774144 (1860.03 GiB 1997.20 GB)
Used Size : 1950387072 (930.02 GiB 998.60 GB)
Super Offset : 1950387368 sectors
State : clean
Device UUID : 8591c286:affb9eb4:5a3e6e0c:99995624

Update Time : Tue Nov 17 10:29:42 2015
Checksum : dc207bcb - correct
Events : 3

Layout : left-symmetric
Chunk Size : 64K

Array Slot : 0 (0, 1, 2, failed, failed, ... [the remaining 381 slots all report failed; list trimmed])
Array State : Uuu 381 failed
/dev/sdc3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 9969659b:a658154e:bedd5826:52106fae
Name : 0
Creation Time : Sat Nov 7 23:43:07 2015
Raid Level : raid5
Raid Devices : 3

Used Dev Size : 1950387112 (930.02 GiB 998.60 GB)
Array Size : 3900774144 (1860.03 GiB 1997.20 GB)
Used Size : 1950387072 (930.02 GiB 998.60 GB)
Super Offset : 1950387368 sectors
State : clean
Device UUID : 31990b29:7d806384:4e238a80:26efc3ed

Update Time : Tue Nov 17 10:29:42 2015
Checksum : 8bb742c8 - correct
Events : 3

Layout : left-symmetric
Chunk Size : 64K

Array Slot : 1 (0, 1, 2, failed, failed, ... [the remaining 381 slots all report failed; list trimmed])
Array State : uUu 381 failed
/dev/sdd3:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x0
Array UUID : 9969659b:a658154e:bedd5826:52106fae
Name : 0
Creation Time : Sat Nov 7 23:43:07 2015
Raid Level : raid5
Raid Devices : 3

Used Dev Size : 1950387112 (930.02 GiB 998.60 GB)
Array Size : 3900774144 (1860.03 GiB 1997.20 GB)
Used Size : 1950387072 (930.02 GiB 998.60 GB)
Super Offset : 1950387368 sectors
State : clean
Device UUID : bb7f3884:17795e7a:f488937f:7902da01

Update Time : Tue Nov 17 10:29:42 2015
Checksum : effe9ae5 - correct
Events : 3

Layout : left-symmetric
Chunk Size : 64K

Array Slot : 2 (0, 1, 2, failed, failed, ... [the remaining 381 slots all report failed; list trimmed])
Array State : uuU 381 failed
[~] # mdadm --assemble --scan
mdadm: /dev/md0 has been started with 3 drives.
[~] #
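Since md0 came back with all three drives, a cautious next step would be to check the filesystem read-only before mounting anything; a sketch, assuming the data volume is ext3/ext4 as on stock QNAP firmware:
Bash:
# -n answers "no" to every repair prompt, so nothing on the array is modified
e2fsck -n /dev/md0

# if that looks sane, retry the mount read-only
mount -o ro /dev/md0 /qtest && ls -la /qtest/
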
 
Paste the logs you have so far, covering the moment the failure occurred.
well, now I think I know what happened: they had a problem with one of the drives, so it was swapped for a new one; the only catch is that the person who did it (some student) apparently had no idea what to do or how. Below are the logs of the most recent operations:
Type Date Time Users Source IP Computer name Content
Information 2015/11/17 10:03:23 System localhost System started.
Information 2015/11/17 10:01:23 System localhost System was shut down on Tue Nov 17 10:01:23 CET 2015.
Information 2015/11/17 09:59:43 admin 192.168.2.141 --- [Power Management] System restarting.
Information 2015/11/16 16:02:48 System localhost System started.
Information 2015/11/16 15:59:44 System localhost System was shut down on Mon Nov 16 15:59:44 CET 2015.
Information 2015/11/16 15:58:02 admin 192.168.2.141 --- [Power Management] System restarting.
Information 2015/11/16 15:57:27 System localhost [Firmware Update] System updated successfully from 4.2.0(20150925) to 4.2.0(20151023).
Information 2015/11/16 15:50:30 System localhost [Firmware Update] Started updating the firmware.
Information 2015/11/16 15:49:08 admin 192.168.2.141 --- [Firmware Update] Start updating firmware 4.2.0 Build 20151023.
Information 2015/11/16 15:48:09 admin 192.168.2.141 --- [Firmware Update] Start unzipping TS-412_20151023-4.2.0.zip
Information 2015/11/16 15:46:22 admin 192.168.2.141 --- [Firmware Update] Start downloading firmware 4.2.0 Build 20151023.
Information 2015/11/16 15:20:12 System localhost System started.
Information 2015/11/09 07:41:41 System localhost System was shut down on Mon Nov 9 07:41:40 CET 2015.
Information 2015/11/09 07:39:56 admin 192.168.2.141 --- [Power Management] System will be shutdown now.
Information 2015/11/08 23:23:46 System localhost LAN 2 link is Up.
Information 2015/11/08 23:23:27 System localhost Drive 4 plugged in.
Information 2015/11/08 23:23:10 System localhost Drive 3 plugged in.
Information 2015/11/08 23:22:51 System localhost Drive 2 plugged in.
Information 2015/11/08 22:04:11 System localhost System started.
Information 2015/11/08 20:49:23 System localhost System was shut down on Sun Nov 8 20:49:23 CET 2015.
Information 2015/11/08 20:47:37 admin 192.168.2.141 --- [Power Management] System restarting.
Error 2015/11/07 23:45:45 System localhost [RAID5 Disk Volume: Drive 2 3 4] Raid Size Expansion failed.
Information 2015/11/07 23:43:57 System localhost [Home Folders] The home folder for user admin has been created.
Information 2015/11/07 23:43:32 System localhost [RAID5 Disk Volume: Drive 2 3 4] Restore system default shares.
Information 2015/11/07 23:41:31 System localhost [RAID5 Disk Volume: Drive 2 3 4] Do Raid Size Expansion.
Error 2015/11/07 23:38:33 System localhost [RAID5 Disk Volume: Drive 2 3 4 1] Expanding Raid Device failed.
Information 2015/11/07 23:35:55 System localhost [RAID5 Disk Volume: Drive 2 3 4] Start to expand Raid Device: Add Drive 1.
Error 2015/11/07 09:37:58 System localhost [RAID5 Disk Volume: Drive 2 3 4 1] Expanding Raid Device failed.
Information 2015/11/07 09:35:20 System localhost [RAID5 Disk Volume: Drive 2 3 4] Start to expand Raid Device: Add Drive 1.
Information 2015/11/07 08:09:33 System localhost [Single Disk Volume: Drive 1] Examination completed.
Information 2015/11/07 08:06:04 System localhost [Single Disk Volume: Drive 1] Start examination.
Information 2015/11/07 07:52:12 System localhost [Single Disk Volume: Drive 1] Formatting completed.
Information 2015/11/07 07:51:50 System localhost [Single Disk Volume: Drive 1] Formatting begun.
Information 2015/11/07 00:44:01 System localhost [RAID5 Disk Volume: Drive 2 3 4] Rebuilding completed.
Information 2015/11/06 23:27:36 System localhost LAN 2 link is Up.
Information 2015/11/06 23:27:35 System localhost [RAID5 Disk Volume: Drive 2 3 4] Drive 4 added into the volume.
Information 2015/11/06 23:27:14 System localhost Drive 4 plugged in.
Information 2015/11/06 23:27:14 System localhost [RAID5 Disk Volume: Drive 2 3 4] Drive 3 added into the volume.
Information 2015/11/06 23:26:56 System localhost Drive 3 plugged in.
Information 2015/11/06 23:26:55 System localhost [RAID5 Disk Volume: Drive 2 3 4] Drive 2 added into the volume.
Information 2015/11/06 23:26:35 System localhost Drive 2 plugged in.
Information 2015/11/06 19:08:40 System localhost [RAID5 Disk Volume: Drive 2 3 4] Drive 2 added into the volume.
Information 2015/11/06 19:06:07 System localhost System started.
Information 2015/11/06 16:50:27 System localhost System was shut down on Fri Nov 6 16:50:27 CET 2015.
Information 2015/11/06 16:48:32 admin 192.168.2.141 --- [Power Management] System restarting.
Information 2015/10/23 21:03:52 System localhost LAN 2 link is Up.
Information 2015/10/23 21:03:50 System localhost [RAID5 Disk Volume: Drive 2 3 4] Drive 4 added into the volume.
Information 2015/10/23 21:03:32 System localhost Drive 4 plugged in.
Information 2015/10/23 21:03:32 System localhost [RAID5 Disk Volume: Drive 2 3 4] Drive 3 added into the volume.
Information 2015/10/23 21:03:14 System localhost Drive 3 plugged in.
Information 2015/10/23 19:48:39 System 127.0.0.1 localhost [App Center] QcloudSSLCertificate enabled.
Information 2015/10/23 19:48:39 System 127.0.0.1 localhost [App Center] QcloudSSLCertificate 1.0.35 installation succeeded.
Information 2015/10/23 19:47:51 System localhost [Qsync] Database migration completed.
Information 2015/10/23 19:47:46 System localhost [Qsync] Database migration started.
Warning 2015/10/23 19:47:33 System localhost [RAID5 Disk Volume: Drive 2 3 4] RAID device in degraded mode.
the only question is whether that data can still be recovered, or not anymore
 
If drives 2, 3 and 4 are empty, then maybe your predecessor formatted them?
In that case we are unable to help you.

For the future, please keep in mind that when one of the drives fails, you simply replace it. The system then starts rebuilding the disk array.

That is the whole point of a RAID array: to keep the storage running without interruption. In this case the array was instead initialized anew, which abandons the very idea of using RAID.
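A rebuild after a disk swap can also be followed from the shell; a minimal sketch (standard mdadm tooling, nothing QNAP-specific assumed):
Bash:
# resync/recovery progress shows up as a percentage here
cat /proc/mdstat

# while rebuilding, mdadm reports a "Rebuild Status : N% complete" line
mdadm --detail /dev/md0
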
 
Thank you for the help. I managed to talk to the culprit behind this whole mess. The script of this bad movie goes as follows:
1. drive no. 2 failed and was replaced with a new one
2. the array rebuilt itself, everything OK, everyone happy
3. since things are good, they can always be better: a new drive was added to bay no. 1
4. the system came up and the data was accessible, but the array would not expand onto the new drive automatically
5. so they triggered an expansion in the RAID management, and a moment later the RAID was in the state shown above... unmounted
6. of course nobody even thought of backing up the data and settings...
7. crying (??!!@@!!!!!!??%%$#) and so on, and of course nobody is to blame.
the question is: can the data be recovered somehow from the console? I'm no specialist in this field, so please tell me whether it's possible, or whether I should just let it go, wipe the server, rebuild the array from scratch and let them cry. But I treat that as a last resort.
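Before wiping anything, the ext backup superblocks are worth a try from the console; a sketch, assuming an ext3/ext4 volume (the exact backup locations depend on the block size; mke2fs -n /dev/md0 only prints them without formatting, but mind the -n):
Bash:
# header only; if even this fails, the primary superblock is unreadable
dumpe2fs -h /dev/md0

# read-only check against the first backup superblock
# (32768 is typical for a 4 KiB block size)
e2fsck -n -b 32768 /dev/md0

If any of that succeeds, a read-only mount as sketched earlier is the safest way to copy the data off before rebuilding the NAS from scratch.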
 
