-
#1
Hello,
In the console that can be opened in the GUI, I get the following:
Code:
Oct 11 00:00:00 freenas syslog-ng[1010]: Configuration reload finished;
Oct 11 00:00:00 freenas ZFS: vdev state changed, pool_guid=15409947220624076884 vdev_guid=15613207025042698968
Oct 11 00:00:00 freenas ZFS: vdev state changed, pool_guid=15409947220624076884 vdev_guid=8902586750251689130
Oct 11 00:00:00 freenas ZFS: vdev state changed, pool_guid=15409947220624076884 vdev_guid=937506473442443186
Oct 11 00:00:00 freenas ZFS: vdev state changed, pool_guid=15409947220624076884 vdev_guid=1355218902690235654
Oct 11 00:03:04 freenas (ada3:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 f0 b9 f0 40 02 00 00 01 00 00
Oct 11 00:03:04 freenas (ada3:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 11 00:03:04 freenas (ada3:ahcich3:0:0:0): ATA status: 41 (DRDY ERR), error: 40 (UNC )
Oct 11 00:03:04 freenas (ada3:ahcich3:0:0:0): RES: 41 40 47 ba f0 40 02 00 00 00 00
Oct 11 00:03:04 freenas (ada3:ahcich3:0:0:0): Retrying command
Oct 11 00:03:06 freenas (ada3:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 f0 b9 f0 40 02 00 00 01 00 00
Oct 11 00:03:06 freenas (ada3:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 11 00:03:06 freenas (ada3:ahcich3:0:0:0): ATA status: 41 (DRDY ERR), error: 40 (UNC )
Oct 11 00:03:06 freenas (ada3:ahcich3:0:0:0): RES: 41 40 47 ba f0 40 02 00 00 00 00
Oct 11 00:03:06 freenas (ada3:ahcich3:0:0:0): Retrying command
Oct 11 00:03:08 freenas (ada3:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 f0 b9 f0 40 02 00 00 01 00 00
Oct 11 00:03:08 freenas (ada3:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 11 00:03:08 freenas (ada3:ahcich3:0:0:0): ATA status: 41 (DRDY ERR), error: 40 (UNC )
Oct 11 00:03:08 freenas (ada3:ahcich3:0:0:0): RES: 41 40 47 ba f0 40 02 00 00 00 00
Oct 11 00:03:08 freenas (ada3:ahcich3:0:0:0): Retrying command
Oct 11 00:03:11 freenas (ada3:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 f0 b9 f0 40 02 00 00 01 00 00
Oct 11 00:03:11 freenas (ada3:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 11 00:03:11 freenas (ada3:ahcich3:0:0:0): ATA status: 41 (DRDY ERR), error: 40 (UNC )
Oct 11 00:03:11 freenas (ada3:ahcich3:0:0:0): RES: 41 40 47 ba f0 40 02 00 00 00 00
Oct 11 00:03:11 freenas (ada3:ahcich3:0:0:0): Retrying command
Oct 11 00:03:13 freenas (ada3:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 f0 b9 f0 40 02 00 00 01 00 00
Oct 11 00:03:13 freenas (ada3:ahcich3:0:0:0): CAM status: ATA Status Error
Oct 11 00:03:13 freenas (ada3:ahcich3:0:0:0): ATA status: 41 (DRDY ERR), error: 40 (UNC )
Oct 11 00:03:13 freenas (ada3:ahcich3:0:0:0): RES: 41 40 47 ba f0 40 02 00 00 00 00
Oct 11 00:03:13 freenas (ada3:ahcich3:0:0:0): Error 5, Retries exhausted
Oct 11 00:03:14 freenas ZFS: vdev state changed, pool_guid=15409947220624076884 vdev_guid=1355218902690235654
What exactly is this Error 5? Is this another drive failing?
My configuration:
Version: FreeNAS-11.3-U5
HDDs: 3 x Western Digital WD RE4-GP 2 TB
1 x WD WD80EFAX Red 8TB
RAM: 16 GB
CPU: Intel Core i5-4690 @ 3.50 GHz
Mainboard: Asus H97I-Plus
Case: Fractal Node 304
Any help is appreciated
-
#2
Your server is trying to send a command (like read block xxxxxx) to drive ada3, and the drive's ATA interface keeps returning an error; after five attempts the system stops retrying.
Once this is counted as a failed drive, your pool state will probably show as DEGRADED.
This usually indicates either a connection problem with the drive or a failing drive.
Are you able to run smartctl -a /dev/ada3?
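Incidentally, the RES line in your log already tells you which sector is failing. FreeBSD prints the ATA result registers roughly as status, error, lba_low, lba_mid, lba_high, device, then the extended LBA bytes; assuming that layout (worth double-checking against the CAM code for your release), the 48-bit LBA can be reassembled from the hex bytes. A quick sketch:

```shell
# RES: 41 40 47 ba f0 40 02 00 00 00 00
#      status=41, error=40, lba_low=47, lba_mid=ba, lba_high=f0,
#      device=40, lba_low_exp=02 (next higher-order LBA byte)
# Reassemble the LBA from the low/mid/high bytes plus the extended byte:
lba=$(( (0x02 << 24) | (0xf0 << 16) | (0xba << 8) | 0x47 ))
echo "failing LBA: $lba"
```

That is the sector the retries keep tripping over; a long SMART self-test should report the same LBA if the sector is genuinely bad.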
-
#3
Thanks for your reply.
Yes, I can run said command (see below):
Code:
smartctl 7.0 2018-12-30 r4883 [FreeBSD 11.3-RELEASE-p14 amd64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital RE4-GP
Device Model:     WDC WD2003FYPS-27Y2B0
Serial Number:    WD-WCAVY6438064
LU WWN Device Id: 5 0014ee 2afffa985
Firmware Version: 04.05G11
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    5400 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 2.6, 3.0 Gb/s
Local Time is:    Mon Oct 12 19:02:50 2020 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x80) Offline data collection activity was never started.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed without error
                                        or no self-test has ever been run.
Total time to complete Offline data collection: (41580) seconds.
Offline data collection capabilities:    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine recommended polling time:      (   2) minutes.
Extended self-test routine recommended polling time:   ( 473) minutes.
Conveyance self-test routine recommended polling time: (   5) minutes.
SCT capabilities:              (0x303d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       5
  3 Spin_Up_Time            0x0027   253   225   021    Pre-fail  Always       -       8883
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       145
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   083   083   000    Old_age   Always       -       13048
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       145
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       22
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       122
194 Temperature_Celsius     0x0022   116   095   000    Old_age   Always       -       36
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      11864        -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
Do you see anything suspicious here?
-
#4
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always — 5
That's 5 read errors.
Also, your last long SMART test was over 1000 hours ago… maybe run another one (then wait for it to finish and check the output again):
smartctl -t long /dev/ada3
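One note on timing: the smartctl output above includes the drive's own estimate, "Extended self-test routine recommended polling time: ( 473) minutes", so plan on roughly eight hours for the long test on this 2 TB drive. A quick conversion:

```shell
# The drive reports: Extended self-test routine recommended polling time: ( 473) minutes.
polling_min=473
echo "extended self-test: roughly $(( (polling_min + 59) / 60 )) hours"
```

While the test runs, `smartctl -a /dev/ada3` shows the remaining percentage under "Self-test execution status"; the self-test log gets a new entry once it finishes or aborts.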
-
#5
Thank you for the advice, I will do so and report back here.
EDIT:
Seems this is the problem:
Code:
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%      13052        49330759
# 2  Extended offline    Completed: read failure       90%      13052        49330759
# 3  Extended offline    Completed without error       00%      11864        -
A read failure, twice at the same sector… just great.
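For completeness: if the drive never manages to reallocate that sector on its own, a common last-resort trick is to overwrite the failing LBA directly so the drive is forced to remap it on write. This destroys whatever is stored in that sector and is only sensible when the pool's redundancy already protects the data, so treat this as a hedged sketch (device name taken from this thread), not a recommendation:

```shell
# LBA reported by the failed self-tests:
lba=49330759
# With this drive's 512-byte sectors, the raw byte offset is:
echo "byte offset: $(( lba * 512 ))"
# Overwriting that one sector forces the drive to remap it.
# DESTRUCTIVE: wipes that sector's contents. Triple-check the device name first.
# dd if=/dev/zero of=/dev/ada3 bs=512 count=1 seek="$lba"
```

Afterwards, Reallocated_Sector_Ct or Current_Pending_Sector in the SMART attributes should change, and a scrub will repair the overwritten block from redundancy.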
Last edited: Oct 13, 2020
-
artclub
- Posts: 6
- Joined: 20 Dec 2013, 17:05
Post
Can't install FreeNAS on an HP ProLiant DL580 G7
Can't install FreeNAS on an HP ProLiant DL580 G7. I need your help!
INQUIRY. CDB: 12 00 00 01 00 00
CAM status: CCB request completed with an error
Error 5, Retries exhausted
- Attachments
- Error6
- Error 5, Retries exhausted
-
SinglWolf
- Contact information:
- From: Bashkiria
- Posts: 2381
- Joined: 23 Jan 2012, 22:11
Post
21 Dec 2013, 08:15
artclub wrote: Can't install FreeNAS on an HP ProLiant DL580 G7
Try mounting the drive manually:
-
artclub
- Posts: 6
- Joined: 20 Dec 2013, 17:05
Post
23 Dec 2013, 12:32
Good afternoon!
SinglWolf, thank you very much, that helped a lot!
-
artclub
- Posts: 6
- Joined: 20 Dec 2013, 17:05
Post
23 Dec 2013, 13:48
Good afternoon! I can't get the network interface configured!
What can I do?
Thanks in advance!
- Attachments
- error
-
artclub
- Posts: 6
- Joined: 20 Dec 2013, 17:05
Post
Can't configure the network, it throws an error
23 Dec 2013, 15:51
I can't configure the network, it throws an error.
- Attachments
- error
-
artclub
- Posts: 6
- Joined: 20 Dec 2013, 17:05
Post
23 Dec 2013, 19:13
Good evening! Has anyone run into this problem and can help? FreeNAS installed fine, but it doesn't see the network interface. What should I do, where should I dig?
Replacing a disk in a RAID array is a potentially dangerous operation. If the replacement disk fails during the replacement, you will have problems, ranging from unpleasant to fatal. ZFS is a reliable system and in most cases survives such a problem. But not always. Let's carefully work out how to replace a disk, and which actions to avoid.
Note: I ran the series of experiments for this post on a virtual machine with nas4free 9.0.0.1 rev 249. But in practice, on real hardware, I have replaced disks in a ZFS raidz many times, and have even run into a replacement disk failing mid-replacement. So I can say with confidence: on real hardware it is exactly the same. Only scarier.
IMPORTANT. The most harmful thing you can do is type commands at random, especially with the -f flag.
See this example of how such a command led to a pool that had to be backed up, torn down and rebuilt from scratch. And that was lucky: there are even more destructive commands.
1. IMHO, disks should be replaced from the command line. It is more flexible, more capable and simpler than the web GUI. How to do it through the web GUI I don't know, sorry.
2. The replacement disk must be the same size as (or larger than) the one being replaced. Be sure to test it before use. A disk failing mid-replacement is an unpleasant thing. If the disk was previously used in a ZFS pool, for example for tests, clear its label:
zpool labelclear /dev/da0
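On testing the replacement disk before use: a long SMART self-test plus a look at smartctl's exit status is a reasonable minimum. smartctl encodes its findings in an exit-status bit mask (documented in smartctl(8), EXIT STATUS); a small sketch that decodes the two bits most relevant to a pre-use check:

```shell
# Bit meanings from smartctl(8), EXIT STATUS:
#   bit 3 (value 8)  - SMART status check reported "DISK FAILING"
#   bit 4 (value 16) - prefail attributes found at or below threshold
decode_smart_exit() {
  rc=$1
  [ $(( rc & 8 )) -ne 0 ] && echo "disk failing"
  [ $(( rc & 16 )) -ne 0 ] && echo "prefail attributes at/below threshold"
  [ $(( rc & 24 )) -eq 0 ] && echo "ok"
}

# Typical use (device name is an example):
#   smartctl -t long /dev/da0   # wait for the test, then:
#   smartctl -H -A /dev/da0; decode_smart_exit $?
decode_smart_exit 0   # prints "ok"
```

This only catches what the drive itself admits to; a full read pass over the disk is still worth the extra hours for a disk you are about to resilver onto.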
3. Decide whether the disk being replaced can stay in service during the replacement (details). If anything at all can still be read from it, and you have a free SATA port and temporary room for one more disk, connect the replacement disk alongside the one being replaced. That improves the odds of a good outcome if something goes wrong.
If the disk being replaced is dead, or you are out of ports, take it offline:
nas4free:~# zpool offline Pool ada3
nas4free:~# zpool status
pool: Pool
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using ‘zpool online’ or replace the device with
‘zpool replace’.
scan: resilvered 2.42G in 0h5m with 0 errors on Wed Mar 20 19:01:01 2013
config:
NAME STATE READ WRITE CKSUM
Pool DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
478512335560695467 OFFLINE 0 0 0 was /dev/ada3
ada2 ONLINE 0 0 0
ada1 ONLINE 0 0 0
errors: No known data errors
4. Now look carefully at what your array is built on. Many variants are possible, but in practice there are three: raw disks, .nop devices, or gpt partitions. (The devices look like ada0, ada0.nop and something like /dev/gpt/disk12WDG1708, respectively.)
The most reliable variant on FreeBSD is gpt partitions; see here for how to do that by hand. But if you created the array through the nas4free web GUI (as I did), it was created on .nop devices on top of raw disks.
Based on the experiments described below, I believe the .nop devices should be removed once the array has been created. If you don't, you increase the chance of losing the array in the worst case (the replacement disk failing mid-replacement). I will write a separate post about removing .nop devices. If you haven't removed them earlier, it is probably worth doing before replacing a disk.
5. The replacement itself.
nas4free:~# zpool replace Pool ada3 ada4
nas4free:~# zpool status
pool: Pool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Wed Mar 20 18:30:56 2013
299M scanned out of 7.29G at 17.6M/s, 0h6m to go
98.1M resilvered, 4.00% done
config:
NAME STATE READ WRITE CKSUM
Pool ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
replacing-0 ONLINE 0 0 0
ada3 ONLINE 0 0 0
ada4 ONLINE 0 0 0 (resilvering)
ada2 ONLINE 0 0 0
ada1 ONLINE 0 0 0
errors: No known data errors
If you are replacing a disk that was taken offline, the replace command may look like:
nas4free:~# zpool replace Pool 478512335560695467 ada4
That's essentially it. The replacement begins, and you can watch it with the status command (or through the nas4free web GUI).
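To keep an eye on the resilver without re-reading the whole status output, the progress figure can be pulled out of the status text. A self-contained sketch (the heredoc stands in for live output; in real use you would pipe zpool status Pool instead):

```shell
# Sample "scan:" section as printed by zpool status during a resilver:
status=$(cat <<'EOF'
  scan: resilver in progress since Wed Mar 20 18:30:56 2013
        299M scanned out of 7.29G at 17.6M/s, 0h6m to go
        98.1M resilvered, 4.00% done
EOF
)
# Extract the "N.NN% done" figure from the scan section.
echo "$status" | sed -n 's/.*, \([0-9.]*% done\).*/\1/p'   # prints "4.00% done"
```

Wrapped in a loop with a sleep, this makes a crude progress monitor for a long resilver.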
That is, if everything went OK.
Now let's look at the possible problems.
Suppose the replacement disk fails during the replacement:
nas4free:~# zpool status
pool: Pool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Wed Mar 20 18:30:56 2013
5.90G scanned out of 7.29G at 1/s, (scan is slow, no estimated time)
514M resilvered, 80.98% done
config:
NAME STATE READ WRITE CKSUM
Pool DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
replacing-0 DEGRADED 0 0 0
ada3 ONLINE 0 0 0
16242751459157794503 UNAVAIL 0 0 0 was /dev/ada4
ada2 ONLINE 0 0 0
ada1 ONLINE 0 0 0
errors: No known data errors
If this happens to you, try to detach the failed disk:
nas4free:~# zpool detach Pool 16242751459157794503
nas4free:~# zpool status
pool: Pool
state: ONLINE
scan: resilvered 514M in 0h7m with 0 errors on Wed Mar 20 18:38:48 2013
config:
NAME STATE READ WRITE CKSUM
Pool ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
ada3 ONLINE 0 0 0
ada2 ONLINE 0 0 0
ada1 ONLINE 0 0 0
errors: No known data errors
If that worked, luck is on your side. If not, and the disk won't detach, you have a rare and unpleasant problem; see ZFS: Cannot replace a replacing device.
Similarly, use detach if both disks, the one being replaced and the replacement, became unavailable during the replacement (just make sure the problem is the disks and not rotten cables):
nas4free:~# zpool status
pool: Pool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Wed Mar 20 18:42:22 2013
6.69G scanned out of 7.29G at 1/s, (scan is slow, no estimated time)
257M resilvered, 91.72% done
config:
NAME STATE READ WRITE CKSUM
Pool DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
replacing-0 UNAVAIL 0 0 0
2050528262512619809 UNAVAIL 0 0 0 was /dev/ada3
14504872036416078121 UNAVAIL 0 0 0 was /dev/ada4
ada2 ONLINE 0 0 0
ada1 ONLINE 0 0 0
errors: No known data errors
nas4free:~# zpool detach Pool 2050528262512619809
nas4free:~# zpool status
pool: Pool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using ‘zpool online’.
see: http://www.sun.com/msg/ZFS-8000-2Q
scan: resilvered 257M in 0h6m with 0 errors on Wed Mar 20 18:48:39 2013
config:
NAME STATE READ WRITE CKSUM
Pool DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
14504872036416078121 UNAVAIL 0 0 0 was /dev/ada4
ada2 ONLINE 0 0 0
ada1 ONLINE 0 0 0
errors: No known data errors
By pulling disks off the virtual machine, I managed to produce an even worse situation:
nas4free:~# zpool status
no pools available
nas4free:~# zpool import
pool: Pool
id: 8374523812252373009
state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://www.sun.com/msg/ZFS-8000-3C
config:
Pool UNAVAIL insufficient replicas
raidz1-0 UNAVAIL insufficient replicas
replacing-0 ONLINE
ada1.nop ONLINE
ada2.nop ONLINE
15699628039254375131 UNAVAIL cannot open
13721477516992971685 UNAVAIL cannot open
You can see that the pool is not visible and cannot be imported. Effectively, the data is gone. I only have a guess as to why this happened. In the previous case the pool was on raw disks (ada0) and the disks were attached to the SATA controller in order. In the second case the pool was built on .nop devices (ada0.nop) and the disks were attached to the controller ports out of order. Normally neither is a problem at all: ZFS finds everything and assembles it. But as you can see, in a hard case, with one factor on top of the other, a problem is possible.
IIRC, in the relevant forum thread there was a report from a user who hit a similar problem when moving a pool to another computer. He managed to solve it by moving disks between controller ports until he found the "right" order.
To reduce the room for problems, I advise removing the .nop devices; they are only needed when the pool is created. The general solution (for FreeBSD and Linux, but not recommended on Solaris) is arrays on gpt partitions, linked above. Unfortunately, if the array already exists, that is wishful thinking: converting an array on raw disks into an array on gpt partitions requires destroying it first.
Topic: Install errors, can't get it to boot: Error 5, retries exhausted etc. (Read 2722 times)
I have been trying to install OPNsense 19.1 VGA edition (tried other variants as well; nano works, but there's no way to install it to the HDD) on some older machines I have lying around, to try it out before I switch over from pfSense to OPNsense. I tried all day yesterday doing different things to get it to install, with no success: the same errors and issues every time. It is an old AMD A7GM-S Foxconn board, but I tested pfSense on it when I started with pfSense a year or 2 ago and it worked. I have tried 3 different USB sticks with the same result. It took me a while to get one written correctly with physdiskwrite, after trying Rufus with no luck (I was getting a GPT boot error).
Tried these set hint options from here:
https://forum.opnsense.org/index.php?topic=7247.0
https://forum.opnsense.org/index.php?topic=7142.15
See screenshot of where it boots to before it reboots.
Really want to get this working to test it. Any help is appreciated; I can see the forums here are much nicer to people than PF*****.
Check if running the latest Foxconn BIOS — long shot but helpful going forward regardless of the OS
Edit: Got static mapping option I was looking for. Not sure why it was not showing before.
Great got it working on another old Dell Inspiron where the nano image worked and did the install. Getting used to the web-gui. Is there a button or area to add a static mapping of a lease already given out on the gui vs having to type the whole static lease info in the static mapping screen? Basically, click add on dhcp lease and give it a static mapping?
Solved
« Last Edit: February 11, 2019, 07:09:56 pm by Snowmanut »
Unable to install, Error 5
Dundermifflin
Cadet
Currently trying to get freenas going on my old desktop.
I'm booting from a FreeNAS DVD that I burnt, but I'm unable to install; I keep getting the following error:
"(aprobe1:ahcich1:0:0:0): CAM status: Command timeout
(aprobe1:ahcich1:0:0:0): Error 5, Retry was blocked"
Can anyone assist?
dlavigne
Guest
Dundermifflin
Cadet
Mother Board: Acer m3201
CPU: AMD 9150e Quad 1800MHz
RAM: 2GB
HDD: 1 x 500GB SATA
Hard to get more info as I can’t boot into anything to retrieve additional details.
danb35
Hall of Famer
Dundermifflin
Cadet
I understand the RAM is low, but this is only for a low-demand home file server. I have run machines on FreeNAS with 2 GB before.
gpsguy
Active Member
It doesn't matter how you are using the server, 8GB is still the minimum RAM requirement for ZFS on FreeNAS. Whenever I fire up a VM for testing, even with a couple of 30GB virtual drives, I give the OS 8GB RAM.
One could get away with say 2GB of RAM, using UFS on older versions of FreeNAS, like the 8.x series. But, with 9.3, support for UFS has been deprecated.
With that hardware, you might want to look at NAS4free (www.nas4free.org). Its hardware requirements are lower than what FreeNAS 8/9 needs.
FreeNAS 11.x, 9.10, & 9.3 manuals (PDF & ePub) download
FreeNAS-8.3.0-RELEASE-x64 | HP N40L | AMD Turion II Neo (1.5GHz) CPU
8GB DDR3 ECC RAM | 8GB Patriot Rage XT
2 x Seagate ST320005N1A1AS 2TB (ZFS mirror) | 1 x Toshiba DT01ACA300 3TB (UFS)
Dundermifflin
Cadet
Robert Trevellyan
Pony Wrangler
Dell PowerEdge T20 | Pentium G3220 @ 3GHz | 32GB
RAIDZ1: 3x CT2000MX500SSD1
boot: Samsung 840 EVO 120GB
UPS: CP850PFCLCD
Ubuntu Server | Webmin
danb35
Hall of Famer
I'd think it sounds like a problem with the media (bad disc or bad burn), but the fact that it also happened with NAS4Free would indicate otherwise. Perhaps a faulty CD/DVD-ROM drive? It's possible that it's incompatible, but that seems pretty unlikely.
Google comes up with a number of hits on that error; this forum thread looks like it might be somewhat relevant. But even if you can get the installation to work, you’re running a significant risk of pool corruption with that hardware.
Ahjohng
Dabbler
I have encountered a similar problem. My system spec is ASRock E3C226D2I with 16GB ECC RAM and g3240, 8GB USB stick (system boot), and 6 Hitachi Deskstar 1TB hard disks (4 of these disks came out of an older Windows 7 RAID 10, and 2 housed Windows 7 boot disks).
While installing, FreeNAS 9.3 shows the following error(s). Would this error leave the installed system unable to detect any of the 6 disks?
(ada2:ahcich2:0:0:0): CAM status: ATA Status Error
(ada2:ahcich2:0:0:0): ATA status: 51 (DRDY SERV ERR), error: 40 (UNC )
(ada2:ahcich2:0:0:0): RES: 51 40 30 69 70 40 74 00 00 d0 00
(ada2:ahcich2:0:0:0): Retrying command
(ada2:ahcich2:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 00 69 70 40 74 00 00 01 00 00
(ada2:ahcich2:0:0:0): CAM status: ATA Status Error
(ada2:ahcich2:0:0:0): ATA status: 51 (DRDY SERV ERR), error: 40 (UNC )
(ada2:ahcich2:0:0:0): RES: 51 40 30 69 70 40 74 00 00 d0 00
(ada2:ahcich2:0:0:0): Error 5, Retries exhausted
Loading early kernel modules:
GEOM_RAID5: Module loaded, version 1.3.20140711.62 (rev f91e28e40bf7)
savecore: /dev/dumpdev: No such file or directory
/etc/rc: WARNING: unable to figure out a UUID from DMI data, PLEASE FIX if you are an integrator.
/etc/rc: WARNING: generating a random one.
Setting hostuuid: 979a651b-08ad-11e5-acb6-d05099384afc.
Setting hostid: 0x08d5400e.
No suitable dump device was found.
Entropy harvesting: interrupts ethernet point_to_point kickstart.
Starting file system checks:
Mounting local file systems:
igb0:link state changed to UP
I have been using GNU/Linux for several years and now I am giving FreeBSD a try.
Yesterday I managed to install FreeBSD 10.2 on an old computer using a 5 GB partition. I only installed the base system and a few programs and everything seems to work fine. The disk on which I installed FreeBSD contains another primary partition which is used by GNU/Linux.
Today I wanted to try installing FreeBSD on a 43 GB spare disk on another computer. The disk already has three (empty) primary partitions. Using fdisk under GNU/Linux I set the type of one partition to a5 (FreeBSD): I wanted to install FreeBSD in this partition / slice. I then booted the FreeBSD installation CD and expected I would only need to create labels inside the FreeBSD slice, but the fdisk started by the installer doesn't see any partitions at all! It reports the whole disk as unused and offers to create a new slice.
For me it is OK to use the whole disk, but why doesn't fdisk see the existing slices? Note that on the older computer I was able to see and use a 5 GB partition using the same installation CD.
Am I overlooking something?
EDIT
I found out there are problems when trying to access the two disks from FreeBSD. After booting the installation CD I opened a shell. I looked for my two disks. If I understood correctly, they are
/dev/ad0 # Blank 43 GB disk where I want to install FreeBSD
/dev/ad2 # 60 GB disk with working Debian 8 on it
When I try to access both disks from the shell with
# diskinfo -c ad0
# diskinfo -c ad2
I get error messages:
(ada1:ata1:0:0:0) READ_DMA. ACB: c8 00 00 00 00 40 00 00 00 01 00
(ada1:ata1:0:0:0) CAM status: ATA Status Error
(ada1:ata1:0:0:0) ATA status: 51 (DRDY SERV ERR), error: 84 (ICRC ABRT )
(ada1:ata1:0:0:0) RES: 51 84 00 00 00 00 00 00 00 00 00
(ada1:ata1:0:0:0) Error 5, Retries exhausted
diskinfo: read: Input/output error
Note that I get the same error on both disks while I can access both disks from Debian. The only thing I can think of is that I need to change some BIOS setting, but I have no idea what the problem could be.
EDIT 2
Booting with hw.ata.ata_dma=0 seems to solve the problem. I got the hint from here. Still, I am not sure what the problem is and why setting this variable would solve it.
IMO the problem should not be caused by a bad drive, because at least one of the disks has no errors (I checked it recently for bad blocks). I will check the other disk now.
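For reference, on an installed system that tunable can be made permanent in /boot/loader.conf, the standard place for FreeBSD loader tunables (at the installer you set it once at the loader prompt instead):

```
# /boot/loader.conf
# Disable ATA DMA; the disks fall back to PIO mode. Slow, but a workaround
# for ICRC/ABRT DMA errors like the ones above, not a fix for a bad drive.
hw.ata.ata_dma=0
```

Since ICRC errors often point at cabling or controller-compatibility issues rather than bad media, this is consistent with both of your disks showing the same error.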
@sandy said in problems with the hard drive in the pfsense:
@dotdash is what I'm going to do but I wanted to know if there was another way to not have to install again and for services, thanks brother
pfSense, which you installed from an ISO or USB drive onto a drive (hard disk, SSD, whatever), needs a non-broken disk.
You can't install Windows on it either. Nor Debian; Mac OS (Apple) won't work. There is nothing you can do with this drive.
This drive has become a good paperweight, or something like that. Or, if you have enough of them:
https://www.youtube.com/watch?v=BJhwhN3GNdY
Longer answer: it might be possible to use special hard-drive test software to mark bad sectors as definitely bad so they won't get used by the drive anymore. That was a very valid thing to do in the last century (the '80s and '90s).
These days, a drive costs as much as five Big Macs, so nobody cares anymore.
Also: when drives start to lose sectors, more sectors will die soon.
Also: installing pfSense on a new drive takes a couple of minutes or so. So why wait?
pfSense has SMART capabilities included. Now you know why ^^
My problem was that the HDDs of my ZFS raid were partly degraded and partly destroyed after a lightning strike.
I was able to detect the problem with zpool status:
zpool status myzfs
pool: myzfs
state: DEGRADED (DESTROYED)
The good news: ZFS seems to be really reliable, and in my case I was able to recover the raid fully. How I recovered it is described below in the answer.
While recovering the ZFS raid I learned a few things:
- 1 failed drive can bring a zpool down. A raid based on striped mirrors, however, stays available. Details are explained in "ZFS: You should use mirror vdevs, not RAIDZ".
- Recovering and resilvering a zpool based on raidz2 takes a really long time. You may be better off with a striped mirror; this has pros and cons widely discussed on the internet.
- A raid is NOT a backup! Off-site backups to the cloud or a second location are a big advantage and are possible today without much effort. Most NAS systems allow backups to the cloud or ZFS replication to another NAS.
Original debug information
This, however, is not necessarily important for detecting and solving the problem.
I have trouble with my FreeNAS 9.2.1. It crashed today. It's running a file server on ZFS (raidz2). I'm not sure what exactly causes the problems. The system boots but reacts pretty slowly. From the logs I couldn't spot anything clearly wrong, so I'm not sure where to start with error analysis and how to solve this.
The problem is that the system crashes and responds pretty slowly. The FreeNAS web interface crashes as well, since python dies.
FreeNAS is installed on a USB stick; an additional drive (2 TB) is attached for backup. The other 4 drives run as the ZFS raid.
The hard drives do show SMART errors. How can I fix them? Might they be the reason for the problems?
TOP
CPU: 0.1% user, 0.0% nice, 2.5% system, 0.1% interrupt, 97.3% idle
Mem: 131M Active, 11G Inact, 3689M Wired, 494M Cache, 3232M Buf, 16M Free
ARC: 3028K Total, 347K MFU, 1858K MRU, 16K Anon, 330K Header, 477K Other
Swap: 10G Total, 636K Used, 10G Free
DF
Filesystem Size Used Avail Capacity Mounted on
/dev/ufs/FreeNASs2a 971M 866M 27M 97% /
devfs 1.0k 1.0k 0B 100% /dev
/dev/md0 4.8M 3.5M 918k 79% /etc
/dev/md1 843k 2.6k 773k 0% /mnt
/dev/md2 156M 40M 103M 28% /var
/dev/ufs/FreeNASs4 20M 3.4M 15M 18% /data
fink-zfs01 6.0T 249k 6.0T 0% /mnt/fink-zfs01
fink-zfs01/.system 6.0T 249k 6.0T 0% /mnt/fink-zfs01/.system
fink-zfs01/.system/cores 6.0T 14M 6.0T 0% /mnt/fink-zfs01/.system/cores
fink-zfs01/.system/samba4 6.0T 862k 6.0T 0% /mnt/fink-zfs01/.system/samba4
fink-zfs01/.system/syslog 6.0T 2.7M 6.0T 0% /mnt/fink-zfs01/.system/syslog
fink-zfs01/shares 6.0T 261k 6.0T 0% /mnt/fink-zfs01/shares
fink-zfs01/shares/fink-privat 6.4T 344G 6.0T 5% /mnt/fink-zfs01/shares/fink-privat
fink-zfs01/shares/gf 6.0T 214k 6.0T 0% /mnt/fink-zfs01/shares/gf
fink-zfs01/shares/kundendaten 6.6T 563G 6.0T 9% /mnt/fink-zfs01/shares/kundendaten
fink-zfs01/shares/zubehoer 6.6T 539G 6.0T 8% /mnt/fink-zfs01/shares/zubehoer
fink-zfs01/temp 6.2T 106G 6.0T 2% /mnt/fink-zfs01/temp
/dev/ufs/Backup 1.9T 114G 1.7T 6% /mnt/Backup
/var/log/messages
Jan 21 21:48:32 s-FreeNAS root: /etc/rc: WARNING: failed to start syslogd
Jan 21 21:48:32 s-FreeNAS kernel: .
Jan 21 21:48:32 s-FreeNAS root: /etc/rc: WARNING: failed to start watchdogd
Jan 21 21:48:32 s-FreeNAS root: /etc/rc: WARNING: failed precmd routine for vmware_guestd
Jan 21 21:48:34 s-FreeNAS ntpd[2589]: ntpd 4.2.4p5-a (1)
Jan 21 21:48:34 s-FreeNAS kernel: .
Jan 21 21:48:36 s-FreeNAS generate_smb4_conf.py: [common.pipesubr:58] Popen()ing: zfs list -H -o mountpoint,name
Jan 21 21:48:36 s-FreeNAS generate_smb4_conf.py: [common.pipesubr:58] Popen()ing: zfs list -H -o mountpoint
Jan 21 21:48:38 s-FreeNAS last message repeated 4 times
Jan 21 21:48:38 s-FreeNAS generate_smb4_conf.py: [common.pipesubr:58] Popen()ing: /usr/local/bin/pdbedit -d 0 -i smbpasswd:/tmp/tmpEKKZ2A -e tdbsam:/var/etc/private/passdb.tdb -s /usr/local/etc/smb4.conf
Jan 21 21:48:43 s-FreeNAS ntpd[2590]: time reset -0.194758 s
Jan 21 21:48:45 s-FreeNAS smartd[2867]: Device: /dev/ada3, FAILED SMART self-check. BACK UP DATA NOW!
Jan 21 21:48:45 s-FreeNAS smartd[2867]: Device: /dev/ada3, 164 Currently unreadable (pending) sectors
Jan 21 21:48:45 s-FreeNAS smartd[2867]: Device: /dev/ada3, Failed SMART usage Attribute: 5 Reallocated_Sector_Ct.
Jan 21 21:48:45 s-FreeNAS smartd[2867]: Device: /dev/ada3, previous self-test completed with error (unknown test element)
Jan 21 21:48:51 s-FreeNAS mDNSResponder: mDNSResponder (Engineering Build) (Mar 1 2014 18:12:24) starting
Jan 21 21:48:51 s-FreeNAS mDNSResponder: 8: Listening for incoming Unix Domain Socket client requests
Jan 21 21:48:51 s-FreeNAS mDNSResponder: mDNS_AddDNSServer: Lock not held! mDNS_busy (0) mDNS_reentrancy (0)
Jan 21 21:48:51 s-FreeNAS mDNSResponder: mDNS_AddDNSServer: Lock not held! mDNS_busy (0) mDNS_reentrancy (0)
Jan 21 21:48:53 s-FreeNAS netatalk[3142]: Netatalk AFP server starting
Jan 21 21:48:53 s-FreeNAS cnid_metad[3179]: CNID Server listening on localhost:4700
Jan 21 21:48:53 s-FreeNAS kernel: done.
Jan 21 21:48:54 s-FreeNAS mDNSResponder: mDNS_Register_internal: ERROR!! Tried to register AuthRecord 0000000800C2FD60 s-FreeNAS.local. (Addr) that's already in the list
...
Jan 21 21:48:54 s-FreeNAS mDNSResponder: mDNS_Register_internal: ERROR!! Tried to register AuthRecord 0000000800C30180 109.1.1.10.in-addr.arpa. (PTR) that's already in the list
Jan 21 22:04:44 s-FreeNAS kernel: swap_pager: indefinite wait buffer: bufobj: 0, blkno: 1572950, size: 8192
...
Jan 21 22:05:25 s-FreeNAS kernel: GEOM_ELI: g_eli_read_done() failed ada0p1.eli[READ(offset=110592, length=4096)]
Jan 21 22:05:25 s-FreeNAS kernel: swap_pager: I/O error - pagein failed; blkno 1572894,size 4096, error 5
Jan 21 22:05:25 s-FreeNAS kernel: vm_fault: pager read error, pid 3020 (python2.7)
Jan 21 22:05:25 s-FreeNAS kernel: Failed to write core file for process python2.7 (error 14)
...
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): READ_FPDMA_QUEUED. ACB: 60 08 70 02 00 40 00 00 00 00 00 00
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): CAM status: ATA Status Error
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): ATA status: 41 (DRDY ERR), error: 40 (UNC )
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): RES: 41 40 70 02 00 40 00 00 00 00 00
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): Error 5, Retries exhausted
Jan 21 22:19:44 s-FreeNAS kernel: GEOM_ELI: g_eli_read_done() failed ada0p1.eli[READ(offset=253952, length=4096)]
Jan 21 22:19:44 s-FreeNAS kernel: swap_pager: I/O error - pagein failed; blkno 1572929,size 4096, error 5
Jan 21 22:19:44 s-FreeNAS kernel: vm_fault: pager read error, pid 2869 (smartd)
Jan 21 22:19:44 s-FreeNAS kernel: Failed to write core file for process smartd (error 14)
Jan 21 22:19:44 s-FreeNAS kernel: pid 2869 (smartd), uid 0: exited on signal 11
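To see at a glance which drives are throwing ATA errors, counting the "CAM status" lines per device gives a quick tally. A minimal sketch, fed here from sample lines matching the log above; on the box itself you would point it at /var/log/messages instead of the here-doc:

```shell
# Count "CAM status: ATA Status Error" lines per device (adaN).
# Sample input mimics the messages excerpt above; replace the
# here-doc with /var/log/messages on a live system.
grep 'CAM status: ATA Status Error' <<'EOF' | sed 's/.*(\(ada[0-9]*\).*/\1/' | sort | uniq -c
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): CAM status: ATA Status Error
Jan 21 22:19:44 s-FreeNAS kernel: (ada0:ahcich0:0:0:0): CAM status: ATA Status Error
Oct 11 00:03:04 freenas (ada3:ahcich3:0:0:0): CAM status: ATA Status Error
EOF
```

Note that both ada0 (the swap/boot-related errors above) and ada3 show up, so two disks are suspect, not one.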
smartctl --scan
/dev/ada0 -d atacam # /dev/ada0, ATA device
/dev/ada1 -d atacam # /dev/ada1, ATA device
/dev/ada2 -d atacam # /dev/ada2, ATA device
/dev/pass3 -d atacam # /dev/pass3, ATA device
/dev/ada3 -d atacam # /dev/ada3, ATA device
/dev/ada4 -d atacam # /dev/ada4, ATA device
/dev/ada5 -d atacam # /dev/ada5, ATA device
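Since smartd itself crashed (signal 11 above), the health checks have to be run by hand. A small sketch that turns saved `smartctl --scan` output into the per-device health commands to run; the input below is a sample copied from the scan above, and the `-H` flag is the standard smartctl overall-health check:

```shell
# Build "smartctl -H <device>" commands from smartctl --scan output.
# $1 of each scan line is the device path; sample input is from the
# scan above. Pipe into `sh` (or run each line) to get the verdicts.
awk '{ print "smartctl -H " $1 }' <<'EOF'
/dev/ada0 -d atacam # /dev/ada0, ATA device
/dev/ada3 -d atacam # /dev/ada3, ATA device
EOF
```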
smartctl -a /dev/ada3
smartctl 6.2 2013-07-26 r3841 [FreeBSD 9.2-RELEASE-p3 amd64] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Device Model: WDC WD4000F9YZ-09N20L0
Serial Number: WD-WMC1F1211607
LU WWN Device Id: 5 0014ee 0ae5c0b4c
Firmware Version: 01.01A01
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 7200 rpm
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Wed Jan 21 23:07:55 2015 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
See vendor-specific Attribute list for failed Attributes.
General SMART Values:
Offline data collection status: (0x85) Offline data collection activity
was aborted by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 73) The previous self-test completed having
a test element that failed and the test
element that failed is not known.
Total time to complete Offline
data collection: (41640) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 451) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x70bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 187 187 051 Pre-fail Always - 553
3 Spin_Up_Time 0x0027 142 138 021 Pre-fail Always - 11900
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 93
5 Reallocated_Sector_Ct 0x0033 139 139 140 Pre-fail Always FAILING_NOW 1791
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 090 090 000 Old_age Always - 7553
10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 93
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 59
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 35
194 Temperature_Celsius 0x0022 108 098 000 Old_age Always - 44
196 Reallocated_Event_Count 0x0032 001 001 000 Old_age Always - 353
197 Current_Pending_Sector 0x0032 200 199 000 Old_age Always - 162
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed: unknown failure 90% 7553 -
# 2 Short offline Completed: unknown failure 90% 7552 -
# 3 Short offline Completed: unknown failure 90% 7551 -
# 4 Short offline Completed: unknown failure 90% 7550 -
# 5 Short offline Completed: unknown failure 90% 7549 -
# 6 Short offline Completed: unknown failure 90% 7548 -
# 7 Short offline Completed: unknown failure 90% 7547 -
# 8 Short offline Completed: unknown failure 90% 7546 -
# 9 Short offline Completed: unknown failure 90% 7545 -
#10 Short offline Completed: unknown failure 90% 7544 -
#11 Short offline Completed: unknown failure 90% 7543 -
#12 Short offline Completed: unknown failure 90% 7542 -
#13 Short offline Completed without error 00% 7541 -
#14 Short offline Completed without error 00% 7540 -
#15 Short offline Completed: read failure 10% 7538 1148054536
#16 Short offline Completed: read failure 10% 7538 1148054536
#17 Short offline Completed: read failure 10% 7536 1148057328
#18 Short offline Completed: read failure 10% 7535 1148057328
#19 Short offline Completed without error 00% 7530 -
#20 Short offline Completed without error 00% 7529 -
#21 Short offline Completed: read failure 10% 7528 1148057328
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
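The attributes that matter in the dump above are ID 5 (Reallocated_Sector_Ct, already FAILING_NOW at raw 1791), 196 (Reallocated_Event_Count) and 197 (Current_Pending_Sector, 162 pending). A small awk filter pulls just those rows out of a saved `smartctl -A` dump; this is a sketch fed from sample rows copied from the table above, not an official smartmontools tool:

```shell
# Show only the reallocation / pending-sector attributes (IDs 5, 196, 197)
# from smartctl -A style output. $NF is the RAW_VALUE column; any non-zero
# raw value here is a bad sign. Sample rows are from the ada3 output above.
awk '$1 ~ /^(5|196|197)$/ { print $1, $2, "raw="$NF }' <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   139   139   140    Pre-fail  Always   FAILING_NOW 1791
196 Reallocated_Event_Count 0x0032   001   001   000    Old_age   Always       -       353
197 Current_Pending_Sector  0x0032   200   199   000    Old_age   Always       -       162
EOF
```

With VALUE (139) already below THRESH (140) on attribute 5, the drive has used up its spare sectors: this disk should be replaced, not retested.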