ZFS checksum error

This is the first time I've ever gotten this kind of error across the board; I've only experienced bad drives before.

zpool status
  pool: datapool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications...


Quick follow up on this one.

I recently got a lot more errors, also affecting files. I’ll probably do a more thorough standalone post, but I have a couple of quick questions:

Question 1:
How common is it that multiple drives throw «too many errors» at roughly the same time? Do drives from the same batch, with the same install date, usually fail at the same time?

As far as I can see, the SMART data looks okay.

Question 2:
All 17 files with errors are actually the same file, just in different snapshots.

Code:

errors: Permanent errors have been detected in the following files:

        datapool/Lager@daily-1427810489_2019.07.24.08.00.14:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.28.08.00.15:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.16.08.00.15:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.14.08.00.14:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.17.08.00.15:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.29.08.00.15:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.27.08.00.14:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.15.08.00.15:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.23.08.00.15:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.20.08.00.13:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.13.08.00.14:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.21.08.00.15:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.25.08.00.15:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.18.08.00.15:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.22.08.00.14:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.19.08.00.14:/dir/dir/dir/dir/dir/image_257.jpg
        datapool/Lager@daily-1427810489_2019.07.26.08.00.14:/dir/dir/dir/dir/dir/image_257.jpg

Is that normal?


What does this line mean:
scrub repaired 16.1M in 7h16m with 68 errors

Does it mean that ZFS scanned 16.1 million things, and found 68 of them with errors?

Or does it mean that ZFS scanned the whole pool, found 16.1 million things that had errors, and was able to repair most of them, but 68 of them were unrepairable?

When I say «thing», I also don’t know whether ZFS is reporting bytes, sectors (512 bytes or 4K?), or some sort of ZFS allocation unit (blocks, extents, who knows).

Given modern disks (these seem to be 2TB drives) and the known failure rates (the uncorrectable bit error rate is usually spec’ed as 10^-14), it is theoretically possible that you had 68 sectors with an uncorrectable read error. But that seems wrong, since the SMART data shows that the drives themselves have not found any read errors. It is completely impossible that you would have 16.1M media errors; any sensible SMART implementation (and the good people at WD are very sensible, I know some of them) would have raised serious alarms long ago. So it seems likely that the corruption to the on-disk data happened at a layer above the disk. Blaming the HBA seems implausible to me; HBAs don’t quietly corrupt data (they tend to lose the connection to the disk or fabricate IO errors instead). If you really got 16.1M sector errors, my suspicion would be that someone actively overwrote the disk (by going behind ZFS’s back, directly to the device). Perhaps someone got confused, didn’t use your

/dev/gpt/wd_redbpro_2tb_4

device entry for the disk, and by mistake created a new file system on

/dev/da7

? Maybe someone was doing a performance test on the raw disk with dd, and switched «if=» and «of=» around? I work on file systems for a living, and we have a joke in the group: The worst thing that can happen to a file system is that someone tries to reformat one of our disks with a Reiser file system (to understand how cruel this joke is, you need to know what Hans Reiser did to his wife).

You ask what checksum error rate you should be seeing. That’s a difficult question. Let me first answer a different question: Disk manufacturers quote an uncorrectable bit error rate, typically 10^-14 for consumer grade, and 10^-15 for enterprise grade drives. You can do a quick back-of-the-envelope calculation: Typical disks have a performance of roughly 100 MByte/s (that’s accurate to within a factor of two), but are typically under-utilized by a factor of 10. So they read 10 MByte per second, or 100 MBit/second (I’m rounding to make the math easier). Multiply that by 10^-14 errors per bit, and you should get one error every million seconds, or roughly 30 errors per year (warning: that estimate is likely inaccurate by a factor of 10 or 100 due to various effects).
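
For anyone who wants to redo that estimate with their own numbers, it fits in a one-liner; the 10 MByte/s effective read rate and the 10^-14 BER are the rough assumptions from the paragraph above, not measured values:

# expected *drive-reported* unrecoverable read errors per year, back of the envelope
awk 'BEGIN {
    bits_per_sec = 10 * 1e6 * 8            # ~10 MByte/s effective read rate, in bits
    err_per_sec  = bits_per_sec * 1e-14    # spec: 1e-14 unrecoverable errors per bit read
    print "errors per year ~", err_per_sec * 3600 * 24 * 365
}'
# prints roughly 25, i.e. on the order of a few dozen reported errors per year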

BUT: Those are cases where the drive detects an internal unrecoverable read error and reports a media error or read error up to the operating system. These are *not* cases where the drive silently corrupts data and returns it to ZFS, pretending it is good data. The disk drive has very extensive ECC (error correcting code) internally and should fundamentally never return wrong data. Vendors typically don’t even specify numbers for silent corruption, and the rate of such errors is supposed to be several orders of magnitude smaller than the uncorrectable BER. So checksum errors should never happen; when they do, the problem is likely not the drive itself. Only when you have systems so large that you have seen thousands of media errors should you even entertain the thought of silent corruption. (In this discussion I’m not specifically talking about off-track writes, but they are likely the only real-world mechanism for silent data corruption in drives themselves.)

Or in other words: On a system of your scale, any checksum error means that you need to check the IO stack between ZFS and the drive, as it is statistically unlikely to come from the drive itself.

After thinking about the subject for some more time, and after reading user121391’s answer multiple times, I’d like to answer my own questions and thereby try to clarify user121391’s statements even further. If something is wrong, please correct me.

First question: What does «device» mean?

This has been clarified by user121391; I could not add anything meaningful.

Second and third question: What are uncorrectable errors / why are only uncorrectable errors shown in the error counters?

The wording chosen by Sun / Oracle is very unclear and misleading. Normally, when a disk (or any hardware component further up in the hierarchy) encounters a data integrity error, one of two things can happen:

  • The error can be corrected (by mechanisms like ECC and so on), and the respective component passes the data on after it has been corrected (possibly incrementing some error counter which an administrator can read out with appropriate tools).

  • The error cannot be corrected. In this case, an I/O error usually occurs to inform the hardware / driver / applications that there was a problem.

Now, in rare cases, the I/O error does not occur even though there was a data integrity error which has not been corrected. This could be due to buggy software, failing hardware and so on. This is what I personally mean by «silent bit rot», and it is exactly the reason I switched to ZFS: such errors are detected by ZFS’s own «end-to-end» checksumming.

So, a ZFS checksum error is exactly a data (integrity) error at the hardware level which has not led to an I/O error (as it should have) and hence is undetected by any mechanism except ZFS’s own checksumming, and vice versa. In that sense, the number of errors in the CKSUM column of zpool status -v is both the number of ZFS checksum errors and the number of undetected hardware errors, as these two numbers are identical.

In other words, if the device had corrected the integrity error on its own, or (if the error had been uncorrectable) had set an I/O error, ZFS would not have increased its CKSUM error counter.

That section of Sun’s documentation has worried me a lot, since the term «uncorrectable errors» is never explained and as such is very misleading. If they had instead written «uncorrectable hardware errors which did not lead to I/O errors as they normally should», I would not have had any issues with that part of the documentation.

So, in summary, and to stress it again: «uncorrectable» in this context means «uncorrectable and undetected at the hardware level» (undetected in the sense that no I/O error occurred despite the data integrity error), not «uncorrectable at the ZFS level». Actually, as far as I know, ZFS does not try to correct bad data on a single disk with some error-correcting checksum mechanism; it recognizes faulty data with the help of checksums and then tries to correct it if there are intact copies of the data on other disks (mirror) or if the data can be reconstructed from other disks (RAIDZ).
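
If you want to see this behaviour with your own eyes rather than take the docs on faith, a throwaway pool built on sparse files makes for a harmless experiment. This is only a sketch; the pool name, file paths and sizes are made up, it needs root, and it should only be run on a test machine:

# build a tiny mirror out of two file-backed vdevs and put some data in it
truncate -s 1G /tmp/vdev-a /tmp/vdev-b
zpool create demo mirror /tmp/vdev-a /tmp/vdev-b
dd if=/dev/urandom of=/demo/testfile bs=1M count=200
zpool export demo

# scribble over part of one half behind ZFS's back (the labels live at the very start and end)
dd if=/dev/urandom of=/tmp/vdev-a bs=1M seek=64 count=128 conv=notrunc

zpool import -d /tmp demo
zpool scrub demo
# wait for the scrub to finish, then:
zpool status -v demo    # expect CKSUM errors on /tmp/vdev-a, but «No known data errors»
zpool destroy demo && rm /tmp/vdev-a /tmp/vdev-b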

Last question (regarding the persistent log)

Once again, Sun’s documentation is just wrong here (or at least so misleading that nobody will understand what really happens from reading it):

There are obviously at least two persistent logs.

The one the documentation talks about is the log which records in detail which files could not be read due to an application-level I/O error, i.e. an I/O error or ZFS checksum error which could not be corrected even by ZFS’s redundancy mechanisms. In other words, if an I/O error happens at the disk level but ZFS can heal it through its redundancy (RAIDZ, mirror), the error is not recorded in that persistent log.

IMHO, this makes sense. With the help of that log, an administrator sees at a glance which files should be restored from backup.

But there is a second persistent «log» the documentation does not mention: The «log» for the error counters. Of course, the error counters are preserved between reboots, whether the errors have been detected during a scrub or during normal operation. Otherwise, ZFS would not make any sense:

Imagine you have a script which runs zpool status -v once a day at 11 pm and mails the output to you, and you check those emails every morning to see if all is well. One day, at high noon, ZFS detects an error on one of its disks, increases the I/O or CKSUM error counters for the respective device, corrects the error (e.g. because a mirror disk has correct data) and passes on the data. In that case, there is no application I/O error; consequently, the error will not be written to the persistent error log the documentation talks about.

At that point, the I/O or CKSUM error counters are the only hint that there has been a problem with the respective disk. Then, two hours later, you have to reboot the server for some reason. Time pressure is high, production must continue, and of course you will not run zpool status -v manually in that situation before rebooting (you possibly can’t even log in). Now, if ZFS had not written the error counters to a separate «log», you would lose the information that there was an error with one of the disks. The script which checks ZFS’s status would run at 11 pm, and the next morning, when studying the respective email, you would be glad to see that there had been no problem …

For that reason, the error counters are stored somewhere persistently (we could discuss if we should call that a «log», but the key point is that they are stored persistently so that zpool status -v after a reboot shows the same results as it would have shown immediately before the reboot). Actually, AFAIK, zpool clear is the only method to reset the error counters.
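
The kind of daily status mail described above is a few lines of shell. A minimal sketch (the recipient address and the grep heuristic are placeholders, and newer setups may prefer the ZFS event daemon, zed, instead):

#!/bin/sh
# /etc/cron.daily/zpool-check -- mail the full zpool status every day,
# flagging the subject line if anything looks unhealthy (rough heuristic;
# note that nonzero error counters with an ONLINE state will not trip it)
STATUS=$(zpool status -v)
if printf '%s\n' "$STATUS" | grep -qE 'DEGRADED|FAULTED|UNAVAIL|Permanent errors'; then
    SUBJECT="zpool status on $(hostname): ATTENTION REQUIRED"
else
    SUBJECT="zpool status on $(hostname): pools look healthy"
fi
printf '%s\n' "$STATUS" | mail -s "$SUBJECT" admin@example.com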

I think Sun / Oracle is not doing itself a favor by writing such unclear documentation. I am an experienced user (in fact, a developer), and I am used to reading bad documentation. But Sun’s documentation is really catastrophic. What do they expect? Should I really trick one of my disks into producing an I/O error and then reboot my server to see if the error counters are preserved? Or should I read the source code to answer such basic and important questions?

If I had to make a decision for or against ZFS / Solaris, I would read the docs and then decide. In this case, I would clearly decide against it, since the docs give the impression that the error counters are not preserved across reboots, and that of course would be completely unacceptable.

Fortunately, I tried ZFS after reading some other articles about it and before reading Sun’s documentation. The product is as good as the documentation is bad (IMHO).

I’m fairly new to ZFS and I have a simple mirrored storage pool setup with 8 drives. After a few weeks of running, one drive seemed to generate a lot of errors, so I replaced it.

A few more weeks go by and now I’m seeing small errors crop up all around the pool (see the zpool status output below). Should I be worried about this? How can I determine if the error indicates the drive needs to be replaced?

# zpool status
  pool: storage
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub repaired 22.5K in 1h18m with 0 errors on Sun Jul 10 03:18:42 2016
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            enc-a   ONLINE       0     0     2
            enc-b   ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            enc-c   ONLINE       0     0     0
            enc-d   ONLINE       0     0     2
          mirror-2  ONLINE       0     0     0
            enc-e   ONLINE       0     0     2
            enc-f   ONLINE       0     0     1
          mirror-3  ONLINE       0     0     0
            enc-g   ONLINE       0     0     0
            enc-h   ONLINE       0     0     3

errors: No known data errors

ZFS helpfully tells me to «Determine if the device needs to be replaced…» but I’m not sure how to do that. I did read the referenced article which was helpful but not exactly conclusive.

I have looked at the SMART test results for the affected drives, and nothing jumped out at me (all tests completed without errors), but I can post the SMART data as well if it would be helpful.
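
For what it’s worth, the attributes worth eyeballing in that SMART data are the reallocation/pending counts and the interface CRC errors; the latter usually point at cabling or the controller rather than the platters. The device name below is just an example:

# run a long self-test, then pull the interesting attributes
smartctl -t long /dev/sda
smartctl -a /dev/sda | grep -iE 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC_Error|self-test'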

Update: While preparing to reboot into Memtest86+, I noticed a lot of errors on the console. I normally SSH in, so I didn’t see them before. I’m not sure which log I should have been checking, but the entire screen was filled with errors that look like this (not my exact error line, I just copied this from a different forum):

blk_update_request: I/O error, dev sda, sector 220473440

From some Googling, it seems like this error can be indicative of a bad drive, but it’s hard for me to believe that they are all failing at once like this. Thoughts on where to go from here?

Update 2: I came across this ZOL issue that seems like it might be related to my problem. Like the OP there I am using hdparm to spin-down my drives and I am seeing similar ZFS checksum errors and blk_update_request errors. My machine is still running Memtest, so I can’t check my kernel or ZFS version at the moment, but this at least looks like a possibility. I also saw this similar question which is kind of discouraging. Does anyone know of issues with ZFS and spinning down drives?
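
One cheap experiment to test the spin-down theory: temporarily stop the drives from spinning down at all and see whether the checksum errors stop with them. hdparm -S 0 disables the standby timer (device names are examples, and the setting does not persist across power cycles):

# disable the standby (spin-down) timer on every pool member, as a test
for d in /dev/sd[a-h]; do
    hdparm -S 0 "$d"
done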

Update 3: Could a mismatched firmware and driver version on the LSI controller cause errors like this? It looks like I’m running driver version 20.100.00.00 and firmware version 17.00.01.00. Would it be worthwhile to try to flash updated firmware on the card?

# modinfo mpt2sas
filename:       /lib/modules/3.10.0-327.22.2.el7.x86_64/kernel/drivers/scsi/mpt2sas/mpt2sas.ko
version:        20.100.00.00
license:        GPL
description:    LSI MPT Fusion SAS 2.0 Device Driver
author:         Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>
rhelversion:    7.2
srcversion:     FED1C003B865449804E59F5

# sas2flash -listall
LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18) 
Copyright (c) 2008-2014 LSI Corporation. All rights reserved 

    Adapter Selected is a LSI SAS: SAS2308_2(D1) 

Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
----------------------------------------------------------------------------

0  SAS2308_2(D1)   17.00.01.00    11.00.00.05    07.33.00.00     00:04:00:00
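
If you do go ahead with the flash, it is normally done with the same sas2flash utility; a rough sketch (the firmware and BIOS file names depend on the exact board and on whether you want IT or IR firmware, so treat them as placeholders):

sas2flash -listall                        # note the controller number
sas2flash -o -f 2308_IT_P20.bin -b mptsas2.rom
sas2flash -listall                        # FW Ver should now match the 20.x driver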

Update 4: Caught some more errors in the dmesg output. I’m not sure what triggered these, but I noticed them after unmounting all of the drives in the array in preparation for updating the LSI controller’s firmware. I’ll wait a bit to see if the firmware update solved the problem, but here are the errors in the meantime. I’m not really sure what they mean.

[87181.144130] sd 0:0:2:0: [sdc] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[87181.144142] sd 0:0:2:0: [sdc] CDB: Write(10) 2a 00 35 04 1c d1 00 00 01 00
[87181.144148] blk_update_request: I/O error, dev sdc, sector 889461969
[87181.144255] sd 0:0:3:0: [sdd] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[87181.144259] sd 0:0:3:0: [sdd] CDB: Write(10) 2a 00 35 04 1c d1 00 00 01 00
[87181.144263] blk_update_request: I/O error, dev sdd, sector 889461969
[87181.144371] sd 0:0:4:0: [sde] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[87181.144375] sd 0:0:4:0: [sde] CDB: Write(10) 2a 00 37 03 87 30 00 00 08 00
[87181.144379] blk_update_request: I/O error, dev sde, sector 922978096
[87181.144493] sd 0:0:5:0: [sdf] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[87181.144500] sd 0:0:5:0: [sdf] CDB: Write(10) 2a 00 37 03 87 30 00 00 08 00
[87181.144505] blk_update_request: I/O error, dev sdf, sector 922978096
[87191.960052] sd 0:0:6:0: [sdg] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[87191.960063] sd 0:0:6:0: [sdg] CDB: Write(10) 2a 00 36 04 18 5c 00 00 01 00
[87191.960068] blk_update_request: I/O error, dev sdg, sector 906238044
[87191.960158] sd 0:0:7:0: [sdh] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[87191.960162] sd 0:0:7:0: [sdh] CDB: Write(10) 2a 00 36 04 18 5c 00 00 01 00
[87191.960179] blk_update_request: I/O error, dev sdh, sector 906238044
[87195.864565] sd 0:0:0:0: [sda] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[87195.864578] sd 0:0:0:0: [sda] CDB: Write(10) 2a 00 37 03 7c 68 00 00 20 00
[87195.864584] blk_update_request: I/O error, dev sda, sector 922975336
[87198.770065] sd 0:0:1:0: [sdb] FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[87198.770078] sd 0:0:1:0: [sdb] CDB: Write(10) 2a 00 37 03 7c 88 00 00 20 00
[87198.770084] blk_update_request: I/O error, dev sdb, sector 922975368

Update 5: I updated the firmware for the LSI controller, but after clearing the ZFS errors and scrubbing, I’m seeing the same behavior (minor checksum errors on a few of the drives). The next step will be updating the firmware on the drives themselves.

Update 6: I replaced the PCI riser after reading in some forums that other people with the U-NAS NSC800 case have had issues with the provided riser. There was no effect on the checksum errors. I have been putting off the HDD firmware update because the process is such a pain, but I guess it’s time to suck it up and make a bootable DOS flash drive.

Update 7: I updated the firmware on three of the Seagate drives. The other drives either didn’t have a firmware update available, or I wasn’t able to get it (Western Digital told me there was no firmware update for my drive). No errors popped up after an initial scrub, but I’m going to give it at least a week or two before I say this solved the problem. It seems highly unlikely to me that the firmware in three drives could be affecting the entire pool like this.

Update 8: The checksum errors are back, just like before. I might look into a firmware update for the motherboard, but at this point I’m at a loss. It will be difficult/expensive to replace the remaining physical components (controller, backplane, cabling), and I’m just not 100% sure that it’s not a problem with my setup (ZFS + Linux + LUKS + Spinning down idle drives). Any other ideas are welcome.

Update 9: Still trying to track this one down. I came across this question which had some similarities to my situation. So, I went ahead and rebuilt the zpool using ashift=12 to see if that would resolve the issue (no luck). Then, I bit the bullet and bought a new controller. I just installed a Supermicro AOC-SAS2LP-MV8 HBA card. I’ll give it a week or two to see if this solves the problem.
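
For anyone following along: ashift is fixed at vdev creation time, so «rebuilt the zpool using ashift=12» means destroying and recreating the pool and restoring the data from backup. Roughly like this (device names taken from the status output above; double-check everything before running commands like these):

zpool create -o ashift=12 storage \
    mirror enc-a enc-b  mirror enc-c enc-d \
    mirror enc-e enc-f  mirror enc-g enc-h
zdb -C storage | grep ashift    # should print ashift: 12 for every vdev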

Update 10: Just to close this out. It’s been about 2 weeks since the new HBA card went in and, at the risk of jinxing it, I’ve had no checksum errors since. A huge thanks to everyone who helped me sort this one out.

When a pool is imported read-only, the checksum error only shows up at the top level, and when the same pool is imported read-write, the checksum error will show up on the actual drive… and the top level of the pool.

(If imported read-only, it will also «forget» about the checksum error when exported, so you can try it again to see if it still happens under different circumstances.)
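
For anyone trying to reproduce that behaviour, the read-only round-trip is just the following (using the «n» pool from the example; readonly=on is a temporary, import-time property):

zpool export n
zpool import -o readonly=on n
zpool status -v n      # errors show up at the pool level only
zpool export n         # and the read-only import forgets them again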

In the example above, I had the «n» pool imported read-write at the time, but there’s no checksum error on any of the disks. I don’t know how it’s possible to have a checksum error on the whole raidz1, but not on any individual devices.

I’ve observed that the Seagate ST8000AS0002 and ST8000DM002 drives have a rather high rate of silent data corruption. I switched to WD and Hitachi drives a few months ago, and haven’t detected any problems with any of those drives yet. The two example pools here are using drives which were possibly manufactured on the same assembly line on the same day, so, yeah, they could all have identical defects… though, wouldn’t there at least be a checksum error or something? I have another pool with six disks, three pairs of different hard drive models, configured as a raidz1 pool. One of the Seagate drives started corrupting data, and somehow the entire pool now has two checksum errors on the top-level vdev… but only one drive went bad. I wrote about it in issue #4983.

The problem with your theory about it only starting in July, presuming the data on-disk is actually «corrupt» and not an artifact of some recent code, is that your log on pool «o» says one or more blocks in the snapshot from June 28th are munged, which implies any such mangling code dates back at least that far.

Oh, corrupt files are always reported by the name of the oldest snapshot they appear in. It has nothing to do with when the corruption actually happened; it’s just ZFS’s bookkeeping. If you try reading the file from every snapshot, and from the current live filesystem, zpool status will tell you that you have dozens of corrupt files, which are all really just the same file appearing in a dozen snapshots. I have a pool of photos I’ve been moving around since 2011, and one of the recent copies of it developed an uncorrectable checksum error in a file from 2012, which was in a snapshot from 2013, on a set of hard drives manufactured in 2015, which had the error in 2016.
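
«Reading the file from every snapshot» can be done through the hidden .zfs directory; a sketch with placeholder dataset and file names:

# read every snapshot's copy of the suspect file, then look at the error list again
for snap in /pool/dataset/.zfs/snapshot/*; do
    cat "$snap/path/to/file.jpg" > /dev/null 2>&1 || echo "read error in $snap"
done
zpool status -v pool   # now lists one entry per snapshot holding a bad copy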

For the past 20 years or so, every time I’d get a new hard drive, I would keep the old one(s) as a sort-of backup. I’m cleaning out my garage and storage unit, and I have, like, a hundred hard drives! So, I’ve been imaging the drives to make sure I really, really do have all of the data off them, then wiping them and giving them away. Most of them are less than 250G, so I’ve just been dumping raw images, and I’ll actually spend the time to look through the data later. I can compress 30 old drives onto one new drive, and then I no longer have boxes and boxes full of hard drives in my garage.
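
The imaging itself is a one-liner per drive; conv=noerror,sync keeps dd going over unreadable sectors (padding them with zeros) instead of aborting, and the output path is obviously a placeholder. For genuinely flaky drives, ddrescue is the better tool, but plain dd is fine for healthy ones:

dd if=/dev/sdX of=/archive/old-drive-042.img bs=1M conv=noerror,sync status=progress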

Actually, dedup isn’t «actually» on. I think I was going to try it on some disk images, but I never actually wrote the data. The filesystem named «dedup1» was dedupped once upon a time, but when I zfs send|recv’d it a while ago it was un-dedupped; I just never changed the name of the filesystem. (The filesystems on these pools are not dedupped.)

localhost ~ # cat /sys/module/zfs/version
0.6.5-1

localhost ~ # uname -a
Linux localhost 4.4.6-gentoo-debug2 #1 SMP Sat Aug 13 07:21:18 Local time zone must be set--see zic  x86_64 Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz GenuineIntel GNU/Linux

Let’s see, on 2016-04-25 I was using 3.10.7-gentoo-r1 with ZFS 0.6.5.4-r1-gentoo from the regular Gentoo package manager.

localhost ~ # modinfo  /lib/modules/3.10.7-gentoo-r1/extra/zfs/zfs.ko 
version:        0.6.5.4-r1-gentoo
[...]
srcversion:     4251E810337436FD7B850DA

Then apparently the next day on 2016-04-26 I upgraded to 4.4.6-gentoo, and probably using the same version of the Gentoo package.

On 2016-07-01 I upgraded ZFS again to the current GIT version, at the time…

filename:       /lib/modules/4.4.6-gentoo/extra/zfs/zfs.ko
version:        0.6.5-329_g5c27b29
srcversion:     CC978DF57728461C914D24D

On 2016-08-13 I upgraded to the current GIT version again, and also built the kernel with more debugging turned on. (And ZFS and ZPL with debugging turned on.)

filename:       /lib/modules/4.4.6-gentoo-debug/extra/zfs/zfs.ko
version:        0.6.5-1
srcversion:     1B0E25441FFC82D8549AB1B

I rebuilt ZFS again on 2016-08-22, but mostly just to add that two-line patch, #4998.

Here’s stuff from my emerge log…

1461536603: Started emerge on: Apr 24, 2016 22:23:22
1461536603:  *** emerge --update zfs
[...]
1461551620: Started emerge on: Apr 25, 2016 02:33:40
1461551620:  *** emerge  zfs
1461551623:  >>> emerge (1 of 3) sys-kernel/spl-0.6.5.4-r1 to /
1461551625:  === (1 of 3) Cleaning (sys-kernel/spl-0.6.5.4-r1::/usr/portage/sys-kernel/spl/spl-0.6.5.4-r1.ebuild)
1461551625:  === (1 of 3) Compiling/Merging (sys-kernel/spl-0.6.5.4-r1::/usr/portage/sys-kernel/spl/spl-0.6.5.4-r1.ebuild)
1461551645:  === (1 of 3) Merging (sys-kernel/spl-0.6.5.4-r1::/usr/portage/sys-kernel/spl/spl-0.6.5.4-r1.ebuild)
1461551646:  >>> AUTOCLEAN: sys-kernel/spl:0
1461551646:  === Unmerging... (sys-kernel/spl-0.6.2-r1)
1461551646:  >>> unmerge success: sys-kernel/spl-0.6.2-r1
1461551650:  === (1 of 3) Post-Build Cleaning (sys-kernel/spl-0.6.5.4-r1::/usr/portage/sys-kernel/spl/spl-0.6.5.4-r1.ebuild)
1461551650:  ::: completed emerge (1 of 3) sys-kernel/spl-0.6.5.4-r1 to /
1461551650:  >>> emerge (2 of 3) sys-fs/zfs-kmod-0.6.5.4-r1 to /
1461551650:  === (2 of 3) Cleaning (sys-fs/zfs-kmod-0.6.5.4-r1::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-0.6.5.4-r1.ebuild)
1461551650:  === (2 of 3) Compiling/Merging (sys-fs/zfs-kmod-0.6.5.4-r1::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-0.6.5.4-r1.ebuild)
1461551719:  === (2 of 3) Merging (sys-fs/zfs-kmod-0.6.5.4-r1::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-0.6.5.4-r1.ebuild)
1461551720:  >>> AUTOCLEAN: sys-fs/zfs-kmod:0
1461551720:  === Unmerging... (sys-fs/zfs-kmod-0.6.2-r2)
1461551720:  >>> unmerge success: sys-fs/zfs-kmod-0.6.2-r2
1461551725:  === (2 of 3) Post-Build Cleaning (sys-fs/zfs-kmod-0.6.5.4-r1::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-0.6.5.4-r1.ebuild)
1461551725:  ::: completed emerge (2 of 3) sys-fs/zfs-kmod-0.6.5.4-r1 to /
1461551725:  >>> emerge (3 of 3) sys-fs/zfs-0.6.5.4-r2 to /
1461551725:  === (3 of 3) Cleaning (sys-fs/zfs-0.6.5.4-r2::/usr/portage/sys-fs/zfs/zfs-0.6.5.4-r2.ebuild)
1461551725:  === (3 of 3) Compiling/Merging (sys-fs/zfs-0.6.5.4-r2::/usr/portage/sys-fs/zfs/zfs-0.6.5.4-r2.ebuild)
1461551754:  === (3 of 3) Merging (sys-fs/zfs-0.6.5.4-r2::/usr/portage/sys-fs/zfs/zfs-0.6.5.4-r2.ebuild)
1461551755:  >>> AUTOCLEAN: sys-fs/zfs:0
1461551755:  === Unmerging... (sys-fs/zfs-0.6.2-r2)
1461551755:  >>> unmerge success: sys-fs/zfs-0.6.2-r2
1461551757:  === (3 of 3) Post-Build Cleaning (sys-fs/zfs-0.6.5.4-r2::/usr/portage/sys-fs/zfs/zfs-0.6.5.4-r2.ebuild)
1461551757:  ::: completed emerge (3 of 3) sys-fs/zfs-0.6.5.4-r2 to /
1461551757:  *** Finished. Cleaning up...
1461551757:  *** exiting successfully.
1461551757:  *** terminating.
1461554684: Started emerge on: Apr 25, 2016 03:24:43
1461554684:  *** emerge  zfs
1461554686:  >>> emerge (1 of 1) sys-fs/zfs-0.6.5.4-r2 to /
1461554686:  === (1 of 1) Cleaning (sys-fs/zfs-0.6.5.4-r2::/usr/portage/sys-fs/zfs/zfs-0.6.5.4-r2.ebuild)
1461554686:  === (1 of 1) Compiling/Merging (sys-fs/zfs-0.6.5.4-r2::/usr/portage/sys-fs/zfs/zfs-0.6.5.4-r2.ebuild)
1461554715:  === (1 of 1) Merging (sys-fs/zfs-0.6.5.4-r2::/usr/portage/sys-fs/zfs/zfs-0.6.5.4-r2.ebuild)
1461554715:  >>> AUTOCLEAN: sys-fs/zfs:0
1461554715:  === Unmerging... (sys-fs/zfs-0.6.5.4-r2)
1461554716:  >>> unmerge success: sys-fs/zfs-0.6.5.4-r2
1461554717:  === (1 of 1) Post-Build Cleaning (sys-fs/zfs-0.6.5.4-r2::/usr/portage/sys-fs/zfs/zfs-0.6.5.4-r2.ebuild)
1461554717:  ::: completed emerge (1 of 1) sys-fs/zfs-0.6.5.4-r2 to /
1461554717:  *** Finished. Cleaning up...
1461554717:  *** exiting successfully.
1461554717:  *** terminating.
[...]
1463613698: Started emerge on: May 18, 2016 23:21:38
1463613698:  *** emerge  =zfs-9999
1463613702:  >>> emerge (1 of 3) sys-kernel/spl-9999 to /
1463613702:  === (1 of 3) Cleaning (sys-kernel/spl-9999::/usr/portage/sys-kernel/spl/spl-9999.ebuild)
1463613702:  === (1 of 3) Compiling/Merging (sys-kernel/spl-9999::/usr/portage/sys-kernel/spl/spl-9999.ebuild)
1463613735:  === (1 of 3) Merging (sys-kernel/spl-9999::/usr/portage/sys-kernel/spl/spl-9999.ebuild)
1463613735:  >>> AUTOCLEAN: sys-kernel/spl:0
1463613735:  === Unmerging... (sys-kernel/spl-0.6.5.4-r1)
1463613736:  >>> unmerge success: sys-kernel/spl-0.6.5.4-r1
1463613737:  === (1 of 3) Post-Build Cleaning (sys-kernel/spl-9999::/usr/portage/sys-kernel/spl/spl-9999.ebuild)
1463613737:  ::: completed emerge (1 of 3) sys-kernel/spl-9999 to /
1463613737:  >>> emerge (2 of 3) sys-fs/zfs-kmod-9999 to /
1463613737:  === (2 of 3) Cleaning (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1463613737:  === (2 of 3) Compiling/Merging (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1463613843:  === (2 of 3) Merging (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1463613844:  >>> AUTOCLEAN: sys-fs/zfs-kmod:0
1463613844:  === Unmerging... (sys-fs/zfs-kmod-0.6.5.4-r1)
1463613844:  >>> unmerge success: sys-fs/zfs-kmod-0.6.5.4-r1
1463613845:  === (2 of 3) Post-Build Cleaning (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1463613845:  ::: completed emerge (2 of 3) sys-fs/zfs-kmod-9999 to /
1463613845:  >>> emerge (3 of 3) sys-fs/zfs-9999 to /
1463613845:  === (3 of 3) Cleaning (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1463613845:  === (3 of 3) Compiling/Merging (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1463613887:  === (3 of 3) Merging (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1463613887:  >>> AUTOCLEAN: sys-fs/zfs:0
1463613887:  === Unmerging... (sys-fs/zfs-0.6.5.4-r2)
1463613888:  >>> unmerge success: sys-fs/zfs-0.6.5.4-r2
1463613889:  === (3 of 3) Post-Build Cleaning (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1463613889:  ::: completed emerge (3 of 3) sys-fs/zfs-9999 to /
1463613889:  *** Finished. Cleaning up...
1463613889:  *** exiting successfully.
1463613889:  *** terminating.
[...]
1467362575: Started emerge on: Jul 01, 2016 08:42:55
1467362575:  *** emerge  =zfs-kmod-9999
1467362578:  >>> emerge (1 of 1) sys-fs/zfs-kmod-9999 to /
1467362578:  === (1 of 1) Cleaning (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1467362578:  === (1 of 1) Compiling/Merging (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1467362672:  === (1 of 1) Merging (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1467362673:  >>> AUTOCLEAN: sys-fs/zfs-kmod:0
1467362673:  === Unmerging... (sys-fs/zfs-kmod-9999)
1467362673:  >>> unmerge success: sys-fs/zfs-kmod-9999
1467362674:  === (1 of 1) Post-Build Cleaning (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1467362674:  ::: completed emerge (1 of 1) sys-fs/zfs-kmod-9999 to /
1467362674:  *** Finished. Cleaning up...
1467362675:  *** exiting successfully.
1467362675:  *** terminating.
1467362787: Started emerge on: Jul 01, 2016 08:46:26
1467362787:  *** emerge  =zfs-9999
1467362790:  >>> emerge (1 of 1) sys-fs/zfs-9999 to /
1467362790:  === (1 of 1) Cleaning (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1467362790:  === (1 of 1) Compiling/Merging (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1467362837:  === (1 of 1) Merging (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1467362838:  >>> AUTOCLEAN: sys-fs/zfs:0
1467362838:  === Unmerging... (sys-fs/zfs-9999)
1467362839:  >>> unmerge success: sys-fs/zfs-9999
1467362841:  === (1 of 1) Post-Build Cleaning (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1467362841:  ::: completed emerge (1 of 1) sys-fs/zfs-9999 to /
1467362841:  *** Finished. Cleaning up...
1467362841:  *** exiting successfully.
1467362841:  *** terminating.
[...]
1467491436: Started emerge on: Jul 02, 2016 20:30:35
1467491436:  *** emerge  =zfs-kmod-9999
1467491438:  >>> emerge (1 of 1) sys-fs/zfs-kmod-9999 to /
1467491438:  === (1 of 1) Cleaning (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1467491438:  === (1 of 1) Compiling/Merging (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1467491530:  === (1 of 1) Merging (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1467491530:  >>> AUTOCLEAN: sys-fs/zfs-kmod:0
1467491530:  === Unmerging... (sys-fs/zfs-kmod-9999)
1467491531:  >>> unmerge success: sys-fs/zfs-kmod-9999
1467491532:  === (1 of 1) Post-Build Cleaning (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1467491532:  ::: completed emerge (1 of 1) sys-fs/zfs-kmod-9999 to /
1467491532:  *** Finished. Cleaning up...
1467491533:  *** exiting successfully.
1467491533:  *** terminating.
1467491576: Started emerge on: Jul 02, 2016 20:32:55
1467491576:  *** emerge  =zfs-9999
1467491579:  >>> emerge (1 of 1) sys-fs/zfs-9999 to /
1467491579:  === (1 of 1) Cleaning (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1467491579:  === (1 of 1) Compiling/Merging (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1467491623:  === (1 of 1) Merging (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1467491624:  >>> AUTOCLEAN: sys-fs/zfs:0
1467491624:  === Unmerging... (sys-fs/zfs-9999)
1467491624:  >>> unmerge success: sys-fs/zfs-9999
1467491626:  === (1 of 1) Post-Build Cleaning (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1467491626:  ::: completed emerge (1 of 1) sys-fs/zfs-9999 to /
1467491626:  *** Finished. Cleaning up...
1467491626:  *** exiting successfully.
1467491626:  *** terminating.
[...]
1468013181: Started emerge on: Jul 08, 2016 21:26:21
1468013181:  *** emerge  =zfs-9999
1468013184:  >>> emerge (1 of 1) sys-fs/zfs-9999 to /
1468013184:  === (1 of 1) Cleaning (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1468013184:  === (1 of 1) Compiling/Merging (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1468013229:  === (1 of 1) Merging (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1468013230:  >>> AUTOCLEAN: sys-fs/zfs:0
1468013230:  === Unmerging... (sys-fs/zfs-9999)
1468013230:  >>> unmerge success: sys-fs/zfs-9999
1468013232:  === (1 of 1) Post-Build Cleaning (sys-fs/zfs-9999::/usr/portage/sys-fs/zfs/zfs-9999.ebuild)
1468013232:  ::: completed emerge (1 of 1) sys-fs/zfs-9999 to /
1468013232:  *** Finished. Cleaning up...
1468013232:  *** exiting successfully.
1468013232:  *** terminating.
1468013239: Started emerge on: Jul 08, 2016 21:27:19
1468013239:  *** emerge  =zfs-kmod-9999
1468013242:  >>> emerge (1 of 1) sys-fs/zfs-kmod-9999 to /
1468013242:  === (1 of 1) Cleaning (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1468013242:  === (1 of 1) Compiling/Merging (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1468013334:  === (1 of 1) Merging (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1468013335:  >>> AUTOCLEAN: sys-fs/zfs-kmod:0
1468013335:  === Unmerging... (sys-fs/zfs-kmod-9999)
1468013335:  >>> unmerge success: sys-fs/zfs-kmod-9999
1468013336:  === (1 of 1) Post-Build Cleaning (sys-fs/zfs-kmod-9999::/usr/portage/sys-fs/zfs-kmod/zfs-kmod-9999.ebuild)
1468013336:  ::: completed emerge (1 of 1) sys-fs/zfs-kmod-9999 to /
1468013336:  *** Finished. Cleaning up...
1468013336:  *** exiting successfully.
1468013336:  *** terminating.

After mid-July, I started just downloading the zfs-master.zip file off GitHub and building that directly (so I could make sure it had all the debugging turned on). It seems I have one dated 2016-07-09 and another dated 2016-08-13. I thought I had a snapshot between those, but I’m not finding any.

Oh yeah, syslog…

Apr 25 04:21:19 localhost kernel: ZFS: Loaded module v0.6.5.4-r1-gentoo, ZFS pool version 5000, ZFS filesystem version 5
Apr 25 04:22:51 localhost kernel: ZFS: Loaded module v0.6.5.4-r1-gentoo, ZFS pool version 5000, ZFS filesystem version 5
May  2 08:41:43 localhost kernel: ZFS: Loaded module v0.6.5.4-r1-gentoo, ZFS pool version 5000, ZFS filesystem version 5
May 18 23:27:53 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
May 19 00:59:53 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
May 20 07:42:06 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
May 21 04:16:08 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
May 25 09:09:24 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
Jun  7 07:31:14 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
Jun  9 10:57:29 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
Jun 24 09:23:49 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
Jun 25 19:01:44 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
Jun 29 20:36:09 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
Jun 30 05:20:03 localhost kernel: ZFS: Loaded module v0.6.5-281_gbc2d809, ZFS pool version 5000, ZFS filesystem version 5
Jul  2 18:08:58 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  2 18:26:21 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  2 19:50:50 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  2 20:00:00 localhost kernel: ZFS: Unloaded module v0.6.5-329_g5c27b29
Jul  2 20:02:08 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  2 20:16:28 localhost kernel: ZFS: Unloaded module v0.6.5-329_g5c27b29
Jul  2 20:21:26 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  2 20:27:00 localhost kernel: ZFS: Unloaded module v0.6.5-329_g5c27b29
Jul  2 20:33:17 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  2 20:57:18 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  7 05:47:39 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  8 04:50:41 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  8 09:45:00 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  8 18:09:38 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  8 18:15:28 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  8 21:31:28 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  8 21:46:12 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Jul  9 21:18:17 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Jul  9 22:06:36 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Jul 19 21:30:09 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Jul 27 10:06:45 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Jul 29 08:42:10 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug  1 09:42:22 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug  1 11:11:14 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug  1 22:35:59 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug  3 10:51:08 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 10 22:24:40 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 13 08:02:52 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 14 10:12:13 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 14 11:19:08 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 15 22:07:58 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 16 00:25:24 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 17 00:11:22 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 17 00:19:57 localhost kernel: ZFS: Unloaded module v0.6.5-1 (DEBUG mode)
Aug 17 00:22:08 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 17 00:31:31 localhost kernel: ZFS: Unloaded module v0.6.5-1 (DEBUG mode)
Aug 17 01:53:49 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 17 03:40:59 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 17 05:32:04 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 17 05:32:17 localhost kernel: ZFS: Unloaded module v0.6.5-1 (DEBUG mode)
Aug 17 05:34:23 localhost kernel: ZFS: Loaded module v0.6.5-329_g5c27b29, ZFS pool version 5000, ZFS filesystem version 5
Aug 17 05:48:42 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 21 04:17:02 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 22 21:21:12 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
Aug 22 21:23:55 localhost kernel: ZFS: Unloaded module v0.6.5-1 (DEBUG mode)
Aug 22 21:25:13 localhost kernel: ZFS: Loaded module v0.6.5-1 (DEBUG mode), ZFS pool version 5000, ZFS filesystem version 5
localhost ~ # cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl
[fastest] scalar sse2 ssse3 avx2 
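
On builds where that parameter is writable, you can pin the checksum implementation instead of letting the boot-time benchmark pick «fastest», which is occasionally useful when chasing a suspected SIMD-related problem; whether a given 0.6.5-era build accepts the write is version-dependent:

# force the plain C fletcher4 implementation for a while, then go back to auto-selection
echo scalar  > /sys/module/zcommon/parameters/zfs_fletcher_4_impl
cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl
echo fastest > /sys/module/zcommon/parameters/zfs_fletcher_4_impl
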
localhost ~ # cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 60
model name  : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
stepping    : 3
microcode   : 0x9
cpu MHz     : 3101.000
cache size  : 8192 KB
physical id : 0
siblings    : 4
core id     : 0
cpu cores   : 4
apicid      : 0
initial apicid  : 0
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm xsaveopt
bugs        :
bogomips    : 6200.42
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
cpu family  : 6
model       : 60
model name  : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
stepping    : 3
microcode   : 0x9
cpu MHz     : 3101.000
cache size  : 8192 KB
physical id : 0
siblings    : 4
core id     : 1
cpu cores   : 4
apicid      : 2
initial apicid  : 2
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm xsaveopt
bugs        :
bogomips    : 6200.42
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

processor   : 2
vendor_id   : GenuineIntel
cpu family  : 6
model       : 60
model name  : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
stepping    : 3
microcode   : 0x9
cpu MHz     : 3101.000
cache size  : 8192 KB
physical id : 0
siblings    : 4
core id     : 2
cpu cores   : 4
apicid      : 4
initial apicid  : 4
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm xsaveopt
bugs        :
bogomips    : 6200.42
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

processor   : 3
vendor_id   : GenuineIntel
cpu family  : 6
model       : 60
model name  : Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
stepping    : 3
microcode   : 0x9
cpu MHz     : 3101.000
cache size  : 8192 KB
physical id : 0
siblings    : 4
core id     : 3
cpu cores   : 4
apicid      : 6
initial apicid  : 6
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm xsaveopt
bugs        :
bogomips    : 6200.42
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:
