Bash echo write error invalid argument


I am new to bash and trying to write a script that disables kworker business as in aMaia’s answer here.

So far, I have this, which I run from root:

  1 #!/bin/bash
  2
  3 cd /sys/firmware/acpi/interrupts
  4 for i in gpe[[:digit:]]* # Don't mess with gpe_all
  5 do
  6     num=`awk '{print $1}' $i`
  7     if (( $num >= 1000 )); then  # potential CPU hogs?
  8         # Back it up and then disable it!!
  9         cp $i /root/${i}.backup
 10         echo "disable" > $i
 11     fi
 12 done

But running it results in:

./kkiller: line 10: echo: write error: Invalid argument

What is going on here? I thought $i was just the file name, which seems like the correct syntax for echo.

Suggestions for cleaning up/improving the script in general are also appreciated!

Update: With set -vx added to the top of the script, here is a problematic iteration:

+ for i in 'gpe[[:digit:]]*'
awk '{print $1}' $i
++ awk '{print $1}' gpe66
+ num=1024908
+ ((  1024908 >= 1000  ))
+ cp gpe66 /root/gpe66.backup
+ echo disable
./kkiller: line 10: echo: write error: Invalid argument
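For what it's worth, a lightly tidied version of the same loop is sketched below: expansions are quoted, $( ) replaces backticks, and a failed write is reported instead of passing silently. This is only a cleanup sketch of the script above; it does not by itself explain the write error.

    #!/bin/bash
    cd /sys/firmware/acpi/interrupts || exit 1
    for i in gpe[[:digit:]]*   # don't mess with gpe_all
    do
        num=$(awk '{print $1}' "$i")
        if (( num >= 1000 )); then   # potential CPU hogs?
            cp -- "$i" "/root/${i}.backup"
            echo disable > "$i" || echo "could not disable $i" >&2
        fi
    done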


Talk: Bcache

Example installation

Here is my Bcache installation. Maybe you can use some of the ideas. 🙂

Prepare the storage drive

Bcache-tools installation

Only fakeroot and binutils are needed, but it will fail with an error if you install only them. wget is left over from a failed ArchBang install 🙁

Making the partitions

For /boot I use the slower HDD, because I had trouble with my SSD loading the kernel while my HDD was still spinning up 😛

Starting with the faster SSD drive

Now the slower 2 TB hard drive

Making Bcache Device

The --wipe-bcache option is to remove an error 🙂 and the dd commands are for cleaning out old partition data.

If the next commands give the error invalid argument, then the device is already registered.
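The registration commands themselves are not shown in this copy; they were presumably of the usual form (device names here are only examples):

    # echo /dev/sdk1 > /sys/fs/bcache/register
    # echo /dev/sdv1 > /sys/fs/bcache/register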

Formatting the drives

Mount the partitions

Bcache Check After mounting

The reason for putting /boot on the HDD is the boot failures caused by HDD spin-up.

Generate an fstab

Generate an fstab file with the following command. UUIDs will be used because they have certain advantages.

Install and configure kernel and bootloader

Adding Bcache module and hook

Edit /etc/mkinitcpio.conf as needed and re-generate the initramfs image: add the "bcache" module, and add the "bcache_udev" hook between block and filesystems.
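A sketch of what that might look like, assuming the string-style syntax mkinitcpio used at the time; adapt to your existing MODULES and HOOKS lines:

    MODULES="... bcache"
    HOOKS="... block bcache_udev filesystems ..."

then regenerate the image:

    # mkinitcpio -p linux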

Registering and attaching Bcache

This part is a pain in the A.. The problem is that Bcache will give an error at startup if this is not done.

First get the correct "SET UUID" with this ls command, and use the UUID like the example underneath. Also check that the correct devices are used; otherwise the echo will fail with echo: write error: Invalid argument.
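The ls command and attach example referred to are not shown in this copy; they were presumably along these lines, with your own cache set UUID substituted for the placeholder:

    # ls /sys/fs/bcache/
    # echo <SET-UUID> > /sys/block/bcache0/bcache/attach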

I hope some of this is helpful 😛 17:58, 11 November 2013 (UTC)

I’m curious about two things. First, though I forget what the ‘-E’ switch signifies, it seems you’re formatting the raw, underlayer partitions themselves. I don’t understand why. Also, the «pain in the a?» So udev is not assembling your array automatically as the devices are discovered? That is unexpected behavior, and I suspect it has to do with the formatting you performed and udev’s superblock discovery. Check the very bottom of the bcache wiki on their site. At least try lsblk to verify «bcache» partition types are not registering as «ext4» before assembling the array. Oh. And please sign talk pages.

Mikeserv (talk) 03:35, 12 November 2013 (UTC)
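For reference, the lsblk check suggested above can be as simple as the following; the bcache members should report a bcache FSTYPE rather than ext4:

    $ lsblk -o NAME,FSTYPE,TYPE,MOUNTPOINT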

Well, this is an experiment with Bcache, with my own wiki page and 100x testing the install. I am not a Linux expert, just a newbie who was reading a lot of howtos. Just see it as a compilation of different web pages and some logical thinking. 🙂

The -E is just copy and paste. Not sure if it was needed; I just thought it was needed to get the discard flag while formatting.

The pain in the A: this was before udev. I was busy getting Bcache to work 3 months ago; 75% of cold boots failed. Later I found out that the SSD was too fast at booting while the normal HDD was still spinning up. After I split up the SSD for the root directory and bcache, and moved /boot to the HDD, I can boot up 100% of the time without any error. This was before the udev thingy (which I don't understand and need to read up on) 🙂

So please understand that I am just a Linux noob trying to contribute 🙂

No, I get it, and I didn’t mean to insinuate that you were doing anything wrong, exactly, just that, as designed, it could operate better. Basically, and you should check the man-pages and the wiki page here to be sure just in case I’m not entirely correct (which does happen), udev is the program that runs during boot (and pretty much all of the rest of the time, too) to detect your hardware and load drivers as necessary. Udev is a relatively recent addition to the Linux boot process and is unique in that it discovers and handles hardware as it shows up, can handle multiple devices simultaneously with non-blocking parallell operations, and, as a result, is significantly faster than the methods it has replaced. I was the one who added the udev rules and step 7b to this wiki, and I was the one who wrote the «Rogue Superblock» section of «Troubleshooting» at the bottom of bcache’s wiki. I only gained the knowledge to write either after spending several frustrating days with an issue that appears very similar to your «pain in the a.» As step 7a shows, the prevailing Arch Linux method for reassembling a bcache array at boot used to look something like: 1) Wait 5 secs. 2) Check to see if udev has discovered and added . If yes goto 3), no, repeat 1) and 2). 3) echo /sys/bla/bla/bla This does work, of course, but you’re adding an unnecessary wait loop to your boot process and you’re performing a task with brittle shell script that depends on specific UUIDs that could be performed flexibly and at discovery time. That’s what the udev rules do. Basically, the rules instruct udev to treat as part of your bcache array any partition it finds that reports itself as a «bcache» partition type. It builds the bcache array from disk partitions at every boot only as soon as those partitions are ready, which would of course resolve the race condition you mentioned and allow you to boot from SSD if you liked. And here’s where our experience might converge: I added a partition to my bcache array after previously formatting it as ext4. If you don’t know about superblocks (or magic blocks or whatever they’re called) then you’re just like I was a few months ago. In short the filesystem has to have a way to report on itself, so it sets aside a small marker on disk that says, «Hey, OS, I’m a disk partition of type EXT4 and I start at block offset whatever and end at block offset whatever. Also, I enjoy long walks on the beach and prefer red wine to white. See you around.» The problem I experienced was that by default ext4 tended to begin at an earlier offset than bcache, and so even though I formatted over the previously ext4 partition with bcache as instructed, bcache never overwrote ext4’s first superblock. When udev attempted to build my array, it would always miss my first partition because it would read the first superblock, identify the partition as ext4, then skip it. Echoing the add instruction to /sys/ after boot was my fix for a couple of days, but eventually I figured it all out and wrote that short bit on superblocks in bcache’s wiki. Maybe that helps? EDIT: Just looked back at the wiki proper and somebody has performed some major changes recently. While it does appear cleaner, it no longer makes sense. There are references to non-existent steps throughout, and the actual why’s that were included at least to some degree seem to have been occluded in favor of dubious, though possibly more efficient how’s. 
Anyway, step 7b no longer exists, but it used to include the bcache_udev mkinitcpio hook I (very barely) adapted from mdadm_udev when trying to fix Arch’s bcache boot problems. It’s now included in the AUR package the wiki recommends I guess, despite the wiki also telling us we must «echo . /sys/. at every bootup.»

Mikeserv (talk) 07:08, 14 November 2013 (UTC)

Sorry about that, I missed a couple of references back to the sections I changed. Hopefully it makes more sense now. Mikeserv's udev rule is crucial to having a working bcache device on boot; without it, as Emesix had discovered, boot will often fail. The echo into /sys/bcache* step was missed when I updated it to mention the udev rule as the 'default' method.

—Justin8 (talk) 10:45, 14 November 2013 (UTC)

For clarity's sake, the udev rule was never mine. The rule used has been included in the AUR package's git pull from at least the time I first installed bcache some months ago. The package maintainer at that time opted not to use it, I guess, and instead installed the legacy-type shell-scripted for-loop mkinitcpio hook I outlined above, which was, to be fair, also the process recommended by every other source of information I was able to dig up at the time, including bcache's own wiki. Frustrated with having to wait for such things, which seemed to me to defeat the whole purpose of buying and configuring an SSD boot device, I eventually stumbled upon the mdadm_udev hook in /usr/share/initcpio and (only slightly) modified it to apply the 69-bcache-rules instead. Justin, thanks for your recent change. NEVERMIND FOLLOWING: I'm curious why you specify "unless assembling after boot?" Why include anything to do with bcache in the initramfs at all if you're building the array after the boot sequence? MAKES PERFECT SENSE OF COURSE. GOT A 1 TRACK MIND SOMETIMES I GUESS. SORRY.

Mikeserv (talk) 11:11, 14 November 2013 (UTC)

Emesix, just looked closer at your instructions, and you actually do the thing I had to do and outlined in the bcache wiki to overwrite the rogue superblock here: dd if=/dev/zero count=1 bs=1024 seek=1 of=/dev/sd And the ext4 formatting you do after that I earlier incorrectly assumed to be for the «raw, underlayer partitions» is revealed upon further inspection to be for non-bcache partitions and so isn’t even relevant; I apologize for jumping to conclusions before reviewing it fully, the quick check I did a couple of days ago just struck me as really familiar. The udev stuff above should still be relevant, and it might be worth noting that the particular dd command above is by no means a cure-all. It should only be used as written if you have previously identified a superblock located at byte offset 1024 that needs overwriting, though the actual superblock location can vary by filesystem. I’ve since learned that a much more flexible and useful tool for this is wipefs , which is much less dangerous than it sounds. Mikeserv (talk) 14:28, 14 November 2013 (UTC)
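For reference, wipefs with no options only lists the signatures it finds; -a erases them, so a cautious workflow looks roughly like this (/dev/sdX1 is a placeholder):

    # wipefs /dev/sdX1       # list filesystem signatures and their offsets
    # wipefs -a /dev/sdX1    # erase all detected signatures (destructive)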

Thanks for the explanation of the super-block, Mikeserv, it was very insightful. But that udev stuff is still a layer too high for me 😛 I don't know if it's possible to do it a little differently to avoid an unwanted wait loop:

1) Check to see if udev has discovered and added . If yes goto 3)
2) Wait 5 secs and goto 1)
3) echo /sys/bla/bla/bla

The dd command is just a safety step 🙂 And because formatting /dev/bcache0 was soon to follow, it didn't hurt if I accidentally killed the file system on the bcache 🙂 I got the feeling that resizing the bcache partition was also giving problems, which the dd command fixed. BTW I like the piece added with this command set:

P.S. I don't get why the wiki page is full of btrfs stuff. Is this not a combination of commands? And wouldn't a more conventional partition/file system scheme be more useful for novice users (like me)??

Emesix (talk) 22:47, 14 November 2013 (UTC) Regarding btrfs and its usefulness for new users. I don’t know. With btrfs for instance, there are many normally high-level file-system concepts rendered nearly as mundane as directory maintenance like snapshotting and subvolumes, and still others that require only a single edit to a boot config like autodefrag and compress=lzo delivering inline filesystem compression and cleanup. In practice, you really don’t have to understand the hows of these things at all to get incredible use out of them. Consider just sub volumes: they can deliver, at the very least, a separated partition structure for the various segments of the Linux filesystem as is often recommended people set at installation with syntax and simplicity like unto cd and mkdir , only the btrfs —help implementation is probably a little friendlier. With traditional filesystems this same setup required knowledge of disk geometry, extent counts and all manner of other arcane things to configure in any sanely optimized way. Btrfs will create you a RAID in less than 30 seconds. I guess what I’m saying is the research vs reward ratio is pretty significantly in your favor with a btrfs disk. Of course, should the btrfs filesystem crash, well, you should probably call someone else is all I’ll say about that. In this wiki, I definitely see your point, especially considering the added complexity bcache brings. When I was trying to set mine up I spent a good deal of time in both the btrfs and bcache IRC dev channels and both sides were less than optimistic about friendly cooperation between the two. Still, it works and has done for months. I know, don’t call you, right? Now about udev, I’ll try again. So, your script implements a wait loop to check occasionally if udev has done a thing (bring up the disks and pull in basic drivers like SCSI for block devices and ext4 for their component filesystems) so it can do a thing (register the disks as small parts of a larger whole and pull in the bcache driver). The udev rules eschew the notion of a secondary monitor script such as yours and instead have udev do it all (any disk it brings up as a «bcache» type is registered as a small part of a larger whole and the bcache driver is pulled in). So you couldn’t boot from SSD reliably before because you couldn’t reliably script the order in which udev would bring up the disks (a consequence of its faster parallel operations), but udev doesn’t have to script its order, udev just udevs as udev udevs best. See? You will always boot reliably from your chosen device because udev won’t misinterpret its own report on your chosen device’s readiness. And that’s all — there’s nothing wrong with the shell script exactly, except that it’s just plain unnecessary and added complication.

Mikeserv (talk) 23:38, 14 November 2013 (UTC)

I have to say, I want to try out using btrfs instead of LVM on my desktop sometime, but the cache article is not the place for it. Mentioning it as a good alternative on the installation and beginners guide and having details in the btrfs article is going to be more productive and logical. I added the section near the top of the page for configuring a simple cache setup with no regards to file systems and partition layouts as they should be beyond the scope of this article.

—Justin8 (talk) 02:13, 15 November 2013 (UTC)

Agreed. There’s a lot of it, too. I guess that sometimes happens on less visited pages. Probably it will level out as the module is more established. I compromised on some of the grub stuff in a minor edit yesterday after you rightly pointed out how much unrelated stuff is in there. Never thought about it before, but I’ve only ever written 7b and 2 notes here and a short section on UEFI kernel update syncing. Above is just about everything I know relatable to bcache by now, I guess. I wish there was some more in the wiki here on benchmarking. The bcache wiki itself reads like a stereo repair manual on the subject. I never have managed to wrap my mind round it before my eyes would shut of their own accord.

Mikeserv (talk) 03:00, 15 November 2013 (UTC)

Ha, that is the perfect description for their wiki; it’s all failure modes and fixes but no reasoning. From what I could see, benchmarking of bcache is really only useful to benchmark writes; reads will vary exponentially depending on whether or not you get a cache hit. On top of that, sequential reads/writes bypass the cache. the only real way to benchmark it would be in ‘real-world’ style tests instead of synthetic benchmarks. I.e. how long it takes to boot your system. how long it takes to compile X, Y or Z. I put a little bit up on my blog when testing out bcache, mostly focussing on IOps performance changes since everything else is relatively the same with or without bcache (other than the smart re-ordering of metadata changes) I was going to just paste the graphs, but they are all js objects; you can see the results I got here: https://www.dray.be/?p=7

Justin8 (talk) 04:05, 15 November 2013 (UTC)

Justin: so you’re the AUR bcache-tools maintainer now? I guess you dropped enough hints. That’s cool. Reading through your blog (pretty colors) I was reminded of another little bcache fact that may not be immediately apparent to all. It seems most tend to assume that the cache:backing-store must be on a 1:1 ratio. This is not the case. A single cache can serve as many storage devices as you might like. Conversely, there’s nothing preventing one from configuring several mechanical discs in a RAID and then backing it on a 1:1 ratio. Regarding a cached RAID, though, I do recall reading a bcache IRC conversation in which a dev mentioned hardware configuration of the device would be desirable as bcache is designed to interface with a block layer and dropping a filesystem below it can have unexpected results. I only bring this up because your blog mentions you will switch from a 1tb to a 3tb soon. Personally, I use a 120GB Kingston SSD to cache 2 3tb WD Greens which are then formatted as a single btrfs RAID1 metadata/RAID0 data. The data is mostly downloaded video for a home theater server and as such is largely expendable in the event of catastrophe. Probably you know this, but just in case, I thought I’d point out that there’s nothing preventing you from merely adding the the 3tb to the 1tb as you please. Lastly, you can format and configure an entire array with basically one line like: make-bcache -B /dev/ <1tb>/dev/ -C /dev/ <60gb_ssd>. Then just reboot (or even reload udev) if you’ve got the udev rules loaded — udev will register the devices for you based on the auto-attach data recorded in the partition’s superblocks as a result of the all-in-one add command. You wouldn’t even need the bcache_udev hook or anything else in initramfs at this point, because obviously you shouldn’t expect to boot to the bcache array yet as you haven’t had a chance to format it with a usable filesystem or to copy any files to it, but it can simplify things. Mikeserv (talk) 07:04, 15 November 2013 (UTC) I did notice a reference to that from the previous maintainer in the aur comments. I currently have a 9x2tb RAID6 mdadm array that I really don’t want to have to backup and restore, so I did the tests as it would be in my server (just on a smaller scale). I don’t believe it would be a simple task to remove the device, bcache-ify it and put it back in. The blocks utility says that it supports it for any block device; but mdadm stores data about the offset in the superblocks instead of using the partition layout (which I discovered the hard way in the past and had to restore 10TB of backups from off-site 🙁 ) It should be fine though doing it to the MDADM device itself. I wouldn’t mind knowing the performance difference, so once I get an SSD for my server I might test it with both methods to do a bit of a comparison. I didn’t realize you could specify the backing and cache devices all in one line like that. I didn’t see it in the wiki at all when I did my first lot of edits; seems useful. Does it automatically attach the cache to the backing devices as well? Although I reboot fairly often for things like kernel updates/making sure pacman hasn’t broken things, I am somewhat averse to rebooting when I don’t have to. I migrated from a root device on an SSD and /home and /opt on a 3TB disk, on to a spare 1TB disk, did the tests for my blog on a different computer and then created the bcache device with the 3TB disk and 60GB SSD and migrated back all without a reboot. 
Sometimes rebooting can be easier, but it’s more ‘fun’ not to 🙂 Justin8 (talk) 08:28, 15 November 2013 (UTC) Well, as I mentioned, you could opt to restart udev rather than reboot, or alternatively you could choose to throw a udevadm trigger, or to have some fun with kexec , or merely to echo the block-device register commands to /sys/ as per usual. You have correctly surmised that the one-liner auto-attaches the backing-stores to the cache, but it does not perform the registration, which is necessary anyway every time an array is re/assembled as it is. I added it to the bcache Arch wiki page myself after discovering it quite by accident between nods while browsing bcache’s own documentation a few months ago. The command is represented in the wiki even now, if rather humbly as a side note to the initial backing-store formatting process. I mentioned it on your talk page as well, because you had apparently previously edited it not to fix its then anachronistic reference to no longer needing a now completely different step 4, but to note that it would no longer be necessary at boot. At first I was bewildered, but I’ve since corrected it to note that it now obviates step 6. By the way, I might know of another thing worth saying, of which kexec ‘s mention has reminded me: because of the way SystemD performs its unmounts at shutdown, it can be very slightly dangerous to run a multi-device write-back cache outside of its purview, as you must do if you boot to a bcache array but have not incorporated SystemD into your initramfs. If this is your situation currently, then likely you will be able to catch an occasional error message printed to the console about a forced unmount in the second or two just before your screen blanks its last. Of course bcache is supposed to be able to handle this and much more, and it never caused me any issues (though I run in write-through mode), but I did switch to the new SystemD hook just as soon as I noticed it. Specifically the SystemD kexec function should almost certainly be avoided whenever possible when operating a write-back cache without a true PID1 SystemD configuration. Mikeserv (talk) 09:13, 15 November 2013 (UTC) That is interesting. (I assume you’re talking about the systemd hook for mkinitcpio that is intended to replace udev/usr/base/etc) How does using that new systemd hook fix it unmounting drives too soon? Also the wiki notes that it is incompatible with lvm2 currently; which just so happens to be what my root filesystem is using. Maybe it is yet another reason to change to btrfs finally. I am using writeback on my bcache device as well. But so long as the cache device is available on a reboot it shouldn’t affect anything. Justin8 (talk) 07:58, 16 November 2013 (UTC) Well, depending on your use-case, you may find btrfs+bcache to be a less than ideal combination. As I understand it, btrfs is designed to concatenate write operations in memory at least until it has compiled enough of them to deliver a sequentially written block to disk. Obviously any write operations given bcache will come from btrfs when the two are combined, and because bcache is designed to pass-through sequential writes, most disk writes should bypass your ssd cache entirely. This doesn’t concern me overmuch because the majority of my disk space is consumed by large video files which, when first written, are sequential anyway, and the slower write speed is mostly mitigated with the two-disk btrfs RAID I configured. 
It is very nice, however, to have the most used applications and most watched videos dynamically located on the ssd, especially since the one system’s disks serve several other machine’s multimedia requests. It is my theory that bcache can assist a btrfs file-system in avoiding some of its infamous disk-thrashing tendencies associated with its relatively huge metadata structures by caching and consolidating even these write operations into sequential blocks, which, if true, is probably especially useful in my particular configuration of striped data and mirrored metadata. Unfortunately, as you might have gleaned from a previous statement I’ve made, I can hardly back this up with any hard data as proof. I also theorize that the btrfs kernel parameter autodefrag, which in a conventional configuration can severely affect write-speeds, becomes a much more sensible option with the addition of an ssd configured with bcache, but I can’t offer any evidence to support that either. As I said when first replying to E, though a repeat is probably long overdue, I am sometimes wrong (some may have cause even to sneer at the «sometimes»), and before I go on I should say again that anything I write is likely to serve one best if considered in combination with other sources of information. If it seems to anyone that what I say makes no sense or is contradictory in some way then it’s probably best to verify that doubt one way or another before proceeding with any operation that might affect valued data, or even that might just result in an unnecessarily wasted day spent reconfiguring twice what used to be a computer system that would boot with the press of a button. Also, if anyone catches something wrong, please correct me. Sorry, it just occurred to me that others might venture this way eventually and act on what they read so I figured another disclaimer was called for. By the way, if you ever wind up as bewildered by the Arch Wiki LXC page as I was, definitely read through its talk page before giving up and moving on. Ok, so about systemd, we have to start here: I’ve noticed a lot of people, as I used to do, tend to ascribe unwarranted mystery to initramfs. Probably, at least in Arch’s case, this is an unavoidable consequence to the level of automated excellence provided by its mkinitcpio script; it is because we so rarely have any cause to understand it that so few of us do. Certainly I didn’t anyway, until very recently. The initramfs is a file containing an image of the first filesystem mounted by the kernel after it is called. This image-file can be compiled into the kernel itself at build-time, but, because it can be more convenient to swap in other images at will, it is conventional instead to feed its path to the kernel as a parameter. The kernel expects the image’s final archived format to be «cpio» (CoPy In/Out archives can handle items many others can’t like /dev/, /sys/ and similar), but it can decompress that from any format you can compile into it (like lzma, lz4, gz, xz, etc). In its current iteration (as of kernel 2.6.something), when loaded the kernel mounts initramfs as «rootfs,» which is basically the contents of this image mounted to / as a tmpfs. As far as the kernel is concerned its primary boot job is to find and call /init, and all the rest is up to user-space. 
And because the linux kernel can potentially handle more / disk configurations than can practically be accounted for in a single compile without growing exponentially every year, it was 2.6.something when initramfs became mandatory. For all intents and purposes, you can still compile in any modules required to mount your root filesystem, run /init from there, and skip both compiling as built-in and parameterizing an initramfs image at load-time, but the kernel still contains and mounts a (basically) empty / image no matter what you do. mkinitcpio’s «base» hook is basically just busybox and a couple of shell scripts, one of which is /init. Arch’s default initramfs behavior is pretty similar to most others; it packs in as little as necessary to find and mount your filesystem at /newroot then uses «switchroot» (the only really out-of-the-ordinary part of the whole process) to simultaneously dismount the initramfs /, mount /newroot in its place and call systemd. Ok, sorry for the lecture, but there’s already enough here to warrant my coming back when I’ve forgotten any of it, so I figured i’d keep my future self on the right track if I could manage. Anyway, so have a look at the lsblk printouts at the bottom of bcache’s wiki here: http://bcache.evilpiepirate.org/#index6h1 Notice that after the array is built successfully (shown in the second lsblk output) the /dev/bcache <0,1>partitions are represented twice; they’re children of both the physical disks on which they actually reside and the cache disk that buffers their i/o. The first lsblk, pulled before the bcache device to which they’re attached is registered, is representative of a more typical configuration in which each partition is descended only from the physical device on which it resides. That added complexity is, as near as I can figure, what trips systemd’s shutdown unmounts up. The bcache module, its user-space tools, and your / array are all called by the initramfs udev which is killed at «switchroot» when systemd loads so it can load and properly manage its own child udev process. systemd is designed from the ground up to be PID1, /init, the daemons’ daemon. Through /sys/, it manages a pid tree with itself as the root reliably tracking and killing even forked child processes when the original parent processes quit. Asking it to undo complicated mount structures built before ever it can hook into /sys/, however, is probably asking for trouble. Still, as you say, dirty or not the data is still there. bcache is a disk cache after all, and it will persist between reboots in the cache on the ssd even if the array is not cleanly disassembled before the disks are unmounted. And bcache is specifically designed to track in its own btrees the cached data’s final disk targets in order to reliably handle exactly these kinds of situations. I qualified my earlier description of the scenario as only «very slightly» unsafe because the redundancy is reduced. There is also the outside possibility that the cached data could be orphaned if the original backing store > cache attachment relationship written to the array members’ bcache partition superblocks is somehow broken, which, at least when I configured my disks only a few months ago, was noted in the bcache docs as rare but possible. This last, I should say, I have never experienced, but, and only for reasons I can’t exactly put my finger on, I have a hunch it could be related to the whole systemd dirty unmount scenario described above. 
I specifically cautioned against kexec before because it’s sort of a «switchkernel» version of the «switchroot» process I referenced above. kexec first writes an OS kernel to system memory, then it simultaneously writes out the current kernel’s mapped region and writes the to-be-loaded kernel over it, before finally, in its last gasp, as it were, calling the new kernel to execute. It seems to me that if the daemons’ daemon has difficulty handling preexisting conditions then testing the OS kernel in such circumstances is probably not something I’d like to do. Geez. That’s a lot. But there’s more. While I can’t say for certain because I haven’t looked into it, I suspect that an lvm2 / is only contra-indicated for the systemd mkinitcpio hook because the particular combination has not yet been scripted to the same level of automatic detection and reliable configuration as the rest. There’s a huge difference between reliably configuring a thing and reliably automating a configuration of course, and as I’m sure Arch’s core maintainers are painfully aware. It is probably for precisely this reason that Arch enables the bcache kernel module in its default kernel compile but provides no supported means of making use of it. Still, as far as I know, systemd is not inherently incompatible with lvm2, and because the initramfs is just a / doing (mostly) plain old linuxy / things, there’s no reason you can’t set it up any way you please. Maybe get to know lsinitcpio a little better if you’re interested. And here’s one of perhaps special import to the AUR’s bcache-tools package maintainer; while putting systemd in the initramfs is the one way I’ve handled the shutdown problem, I doubt very seriously it is the only way. Two other possibilities come immediately to mind: either hooking into or emulating in some way the mkinitcpio shutdown (or is it suspend?) hook which I expect must have some means of handling an end event or two; or, and probably the way i’d go, systemd’s own shutdown target, which definitely does and is probably specifically designed to handle just such a thing. Probably there are many others as well, but none spring to mind. You can wake up now. I’m through. Mikeserv (talk) 05:56, 17 November 2013 (UTC) That was long! Perhaps we should take this conversation off the talk page so that we don’t fill it up too much. Do you use IRC/gtalk/skype/facebook at all? I’d like to discuess some more in regards to the possiblity of a shutdown hook for bcache. Yes, long, but I think its relevant. I was kind of hoping some kind-hearted soul would happen by and, perhaps having learned something, convert some of my pedantic stream-of-consciousness into something more usefully accessible. Wanna know the worst of it? It was written almost entirely on my Nexus 4 at various times throughout the day. I guess I’m something of a masochist. Anyway, anyone can edit it out, of course. I saved a copy. Mikeserv (talk) 06:22, 17 November 2013 (UTC)

Don't leave me out 😛 I am learning a lot from the info given here 🙂 And I have 2 systems with Bcache on which I can test or benchmark 🙂 —Emesix (talk) 00:38, 18 November 2013 (UTC)

Ok, E. I’m pretty sure a simple unregister call is what’s needed in systemd’s unmounts.target, but before we get there can you verify you’ve got the udev rules working as intended? Mikeserv (talk) 10:14, 19 November 2013 (UTC)

I’m at work currently, but I’ve made a pkgbuild for v1.0.5 tools that use the upstream mkinitcpio hook (it’s almost identical to yours Mikeserv!) I want to test it a bit before I push it out. The changes are now in the bcache-tools-git package however as otherwise it wouldn’t really build against the latest upstream patch. I should hopefully have the new version up in the next 10 or so hours.

Justin8 — I just saw the email that you'd changed the ArchWiki and was at the link to your new AUR package when I received the second email to inform me you'd written this. Thanks for letting me know. By the way, the hook was never mine. It's almost an exact copy of the mdadm_udev hook, and the hook itself only sets the install path for the udev rules which do the real work and are included in your AUR package. Last, in case you're still wondering, the shutdown race condition we last discussed is also pretty much already handled in existing hooks. The "shutdown" hook is included by default with "mkinitcpio". I wanted to link to it, but the GITHUB page is sorely out-of-date: https://github.com/falconindy/mkinitcpio/commits/03deaed9f3f5b0c0537eb65e8f1862f53bc21fec/hooks Still, nano /usr/lib/mkinitcpio/shutdown will give you the gist. I also noticed the hooks for Arch's archiso live-install system include a bunch of extras to handle safely unmounting its device-mapper loop-mounts, and those you can browse online. Should be pretty straightforward — maybe 10-15 lines of shell script: https://github.com/djgera/archiso/tree/master/archiso/initcpio/hooks

Mikeserv (talk) 04:45, 7 January 2014 (UTC)

Using LVM on a Bcache volume

Initially, LVM did not recognize my /dev/bcache0 when I wanted to create a physical volume on it. For anyone else who has that issue, this may be relevant: http://www.redhat.com/archives/linux-lvm/2012-March/msg00005.html.

Mmmm cake (talk) 20:01, 28 January 2015 (UTC)

btrfs on bcache

I’ve been testing running btrfs on my bcache0 device recently (kernel 3.19.2). I found that btrfs decides that bcache0 is a ssd and by default mounts it with the ssd option. This may be the cause of the file system corruption seen in the past. If I mount btrfs with ‘-o nossd’ I don’t seem to have any issues. I’d love to hear experiences from anyone else who is willing to try this. Greyltc (talk) 07:21, 3 April 2015 (UTC)
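In other words, something like:

    # mount -o nossd /dev/bcache0 /mnt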


I am trying to set up a USB dongle on my device by following the post here. I wasn't successful in setting it up, and while tracing my steps I discovered that

echo "1c9e:9ba1" > /sys/bus/usb-serial/drivers/generic/new_id

was resulting in an error. I ran this statement from the terminal and got the following response

bash: echo: write error: Invalid argument

According to the post here, it means that the device doesn't implement a WRITE method.

Wondering if there is a way to get the echo command to work so that I can get my USB modem working.


asked Feb 26, 2017 at 8:49

sridhar pandurangiah

What happened when you removed the ":", as in the link you provided?

Original Answer:

echo 1c9e 6061 > /sys/bus/usb-serial/drivers/option1/new_id

Your case:

Instead of:

echo "1c9e:9ba1" > /sys/bus/usb-serial/drivers/generic/new_id

Try running:

echo "1c9e 9ba1" > /sys/bus/usb-serial/drivers/generic/new_id

answered Feb 26, 2017 at 9:09

Yaron

Bash Script: Invalid argument

Why can’t I use echo $1 > /sys/class/backlight/acpi_video0/brightness in a simple bash script?

It gives me the error: echo: write error: Invalid argument.

asked Feb 3, 2012 at 22:31

David Thorisson

Try echo "$1" > /sys/class/backlight/acpi_video0/brightness.

I bet the shell is expanding $1 and thus echo thinks it is receiving a bunch of arguments, rather than a string.

answered Feb 4, 2012 at 0:35

surfasb

That file is a special file. It cannot be written to if what is written is not solely a number. If you try writing a number with echo, you will get a newline character at the end. echo -n solves the problem.
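For example (the path is the one from the question; the value and accepted range depend on your hardware):

    # a bare echo appends a newline; echo -n avoids it
    echo -n 5 > /sys/class/backlight/acpi_video0/brightness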

EDIT: Also, you might be having the problem which I just had: that you need to be root and sudo won't help you for whatever reason, making it very tedious to type su; <your command>; exit all the time. For this I made an (overly ambitious) python script:

#!/usr/bin/python

from sys import *

PATH = "/sys/class/backlight/intel_backlight/brightness"

if len(argv) != 2:
    print("Usage: bright.py <brightness>")
    exit()

try:
    brightness = int(argv[1])
    if not 0 <= brightness <= 825:
        raise Exception()
except:
    print("<brightness> must be an integer between 0 and 825.")
    exit()

if brightness == 0:
    readString = raw_input("A value of 0 will turn off your screen. Are you sure you want to continue? [y/N] ")
    if readString != "y":
        exit()
elif brightness <= 5:
    with open(PATH, "r") as f:
        oldBrightness = int(f.read())
        if brightness < oldBrightness:
            readString = raw_input("A value of %i will make your screen very dark. Are you sure you want to continue? [y/N] " % brightness)
            if readString != "y":
                exit()

try:
    with open(PATH, "w") as f:
        f.write(str(brightness))
except:
    print("Failed to write to file. Are you root?")
    exit()

answered Jun 8, 2014 at 21:08

nijoakim

You should check what the actual value of $1 is. This error means you are trying to write an invalid value — either it’s out of range or just in general not a meaningful value.

At a glance, it appears that it accepts an integer in the range 0 to 8 (for me at least).
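One way to check the accepted range on a given system is to read the sibling max_brightness file, assuming the driver exposes it:

    cat /sys/class/backlight/acpi_video0/max_brightness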

answered Feb 4, 2012 at 6:42

FatalError

Try using let

#!/bin/bash

POLKU='/sys/class/backlight/radeon_bl0/brightness'


if [ $# -eq "0" ]
    then
        echo 100 > $POLKU
    else
        let gg=$1
        echo $gg > $POLKU
fi

answered Dec 8, 2014 at 4:40

Guest



@sophieforceno

When I run temp_throttle.sh with root privileges, the following error is displayed in the shell terminal:
[screenshot of the error output]

Lines 80-83 are:
echo $FREQ_TO_SET
for i in $(seq 0 $CORES); do
echo $FREQ_TO_SET > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_max_freq
done

I'm wondering whether this error will affect the function of the script. It seems to be working OK; my laptop isn't going above 60C, which is the desired threshold.

I am running Peppermint Linux 3. If you need any other info, let me know!

@Sepero

Fascinating error. I can try to solve it for you, but I’ll need some more info about your system. Run this-

find /sys/ -iname "*freq*" > freq_list.txt

It will save the output into a file named "freq_list.txt". I need to view that output.

@sophieforceno

Here is the output from that file:

/sys/devices/pnp0/00:05/rtc/rtc0/max_user_freq
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt_RP1_freq_mhz
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt_min_freq_mhz
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt_RPn_freq_mhz
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt_RP0_freq_mhz
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt_cur_freq_mhz
/sys/devices/pci0000:00/0000:00:02.0/drm/card0/gt_max_freq_mhz
/sys/devices/system/cpu/cpu0/cpufreq
/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
/sys/devices/system/cpu/cpu1/cpufreq
/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_cur_freq
/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_max_freq
/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_min_freq
/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq
/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq
/sys/devices/system/cpu/cpu2/cpufreq
/sys/devices/system/cpu/cpu2/cpufreq/cpuinfo_cur_freq
/sys/devices/system/cpu/cpu2/cpufreq/cpuinfo_max_freq
/sys/devices/system/cpu/cpu2/cpufreq/cpuinfo_min_freq
/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq
/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq
/sys/devices/system/cpu/cpu3/cpufreq
/sys/devices/system/cpu/cpu3/cpufreq/cpuinfo_cur_freq
/sys/devices/system/cpu/cpu3/cpufreq/cpuinfo_max_freq
/sys/devices/system/cpu/cpu3/cpufreq/cpuinfo_min_freq
/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq
/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq
/sys/class/devfreq
/sys/kernel/debug/dri/0/i915_ring_freq_table
/sys/kernel/debug/dri/0/i915_delayfreq_table
/sys/kernel/debug/dri/0/i915_min_freq
/sys/kernel/debug/dri/0/i915_max_freq
/sys/kernel/debug/dri/64/i915_ring_freq_table
/sys/kernel/debug/dri/64/i915_delayfreq_table
/sys/kernel/debug/dri/64/i915_min_freq
/sys/kernel/debug/dri/64/i915_max_freq
/sys/kernel/debug/tracing/events/i915/intel_gpu_freq_change
/sys/kernel/debug/tracing/events/power/cpu_frequency
/sys/module/acpi_cpufreq
/sys/module/pcc_cpufreq
/sys/module/cpufreq_nforce2

@Sepero

Is it possible that you modified the script or are using an older version?

@sophieforceno

I am using 2.00, I’ll try the newest now. I have made no modifications to the script.

@sophieforceno

OK, it looks like that issue is no longer occurring with 2.11. Sorry, I hadn’t noticed there was an updated version. Thanks for this script btw, I don’t know what I would do without it (for some reason my laptop idles at 70c+ in Linux without the script).

@Sepero

Hey, I have the same problems, haha! Glad I could help. ;)

@sophieforceno

I spoke too soon. I am getting that error again (see first post above). This time, it’s a write error for line 104, in which the frequency is set by writing it to the scaling_max_freq file for each core. I verified that the file exists. I’m wondering if it’s actually writing to that file though. There is a frequency listed in that file, but only the frequency in the scaling_max_freq file for cpu0 seems to match the output of temp_throttle.sh in bash. Scaling_max_freq for cpu0 = 1600000, but for cpus 1-3 it = 2900000.

Is this typical behavior? I would think all 4 cores would be scaled equally?
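A quick way to compare the per-core limits is something like:

    grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq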

@Sepero

It appears that your CPU is only allowing you to throttle the first core of your system. I’ve never seen anyone report this before. It may be possible that throttling the first actually does throttle all cores, and that the other cores cannot be set individually. Give me the output of lscpu, and I’ll leave this issue open in case anyone else comes along with more ideas or information.

For now, you should be fine to run the program as it is. To discard any error messages, run it like this-

sudo ./temp_throttle.sh 60 2> /dev/null

@sophieforceno

-Peppermint ~ $ lscpu:
Architecture:          i686
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 42
Stepping:              7
CPU MHz:               2277.000
BogoMIPS:              4589.76
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              3072K

I’m using Jupiter for power management, which allows me to switch the governor between power saving and on demand. I wonder if that could be interfering, as now that I’ve disabled it, all the cores seem to be at the same frequency. I’m not even sure I need to be running Jupiter anymore as long as I’m using temp_throttle.sh, anyway. For now I’ll just run the temp_throttle.sh output to /dev/null as you suggested.

@Sepero

Try this: reboot, avoid running Jupiter, and run temp_throttle.sh without outputting to /dev/null. If you no longer get errors, then that will indicate Jupiter is causing the issue.

@bandwith

I am curious to know what kernel version and kernel module configuration is in use. I hadn’t heard of Peppermint OS prior to this.


@sophieforceno

@Sepero OK, this is just tentative, I’ll need to run temp_throttle.sh for a few more days before I know for sure, but it looks like Jupiter might have been interfering with the scaling. If anything changes, I’ll let you know.

@bandwith it's a "cloud-centric" distro based on Lubuntu. I am running Peppermint 3 (Peppermint 4 is newer but they ditched LXDE/Openbox, and I really prefer that), using kernel 3.11.6 (I can't recall which version it shipped with).

I can give you the output of lsmod if you’re interested.

@sophieforceno

Well, I am still getting that invalid argument error (line 104 now, where it sets the max frequency for all cores), but the script still works fine. My CPU temp never goes above the temp I set when executing temp-throttle.

@Sepero

t4exanadu, did you also try rebooting and not running Jupiter? If so, I’ll just leave this issue open for awareness to other people.

@sophieforceno

I have rebooted since then and removed Jupiter completely. I still get those errors when I execute temp-throttle from the command line (as sudo, of course). However, I now have temp-throttle in rc.local so it starts at boot time, and I don’t see those errors appearing in any of my logs.

@technik007cz

What is the output of

    echo "scaling_min_freq=`cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq`"
    echo "scaling_max_freq=`cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq`"

?
If the scaling_max_freq value being set is lower than scaling_min_freq, then the system can have a problem with it, like my system does.

@technik007cz

Or you can try editing the script. Find the text "echo $FREQ_TO_SET > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_max_freq" and put on a single line before it "echo $FREQ_TO_SET > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_min_freq". The system can sometimes complain, but check the temperature of your CPU to see if this change helped.
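Applied to the loop quoted earlier in this issue, the suggested edit would look roughly like this (a sketch of the comment's proposal, not the upstream script):

    for i in $(seq 0 $CORES); do
        echo $FREQ_TO_SET > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_min_freq
        echo $FREQ_TO_SET > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_max_freq
    done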

@sophieforceno

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq:
800000

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq:
1000000

I haven’t been getting the error I used to get, though. It happens once a month or so, it’s strange.

@ghost

Hello. It seems the OS is not allowing direct writes to these files.
I created a pull request that fixes this error: #8

@Sepero

Hi zlay. I have reviewed your changes. I am not able to apply them as they are currently, because they introduce a dependency on the package "cpufrequtils". I think I will attempt to apply the changes with logic similar to this:

try (echo $FREQ_TO_SET > max_freq)
    on fail try (cpufreq-set -c $i --max $FREQ_TO_SET)
        on fail try (echo suggestion to user to try installing cpufrequtils)

Thank you for your submission. :)
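In shell terms, that fallback chain might look roughly like the sketch below; this is hypothetical, not the actual patch:

    if ! echo $FREQ_TO_SET > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_max_freq 2>/dev/null; then
        if ! cpufreq-set -c $i --max $FREQ_TO_SET 2>/dev/null; then
            echo "Could not set the frequency; consider installing cpufrequtils." >&2
        fi
    fi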


@Sepero

I’m going to close this issue for now. It may be reopened if anyone writes in with future issues concerning this.


Bcache (block cache) allows one to use an SSD as a read/write cache (in writeback mode) or read cache (writethrough or writearound) for another block device (generally a rotating HDD or array). This article will show how to install Arch using Bcache as the root partition. For an intro to bcache itself, see the bcache homepage. Be sure to read and reference the bcache manual. Bcache has been in the mainline kernel since 3.10. The kernel on the Arch install disk has included the bcache module since 2013.08.01.

Tip: An alternative to Bcache is the LVM cache.

Bcache needs the backing device to be formatted as a bcache block device. In most cases, blocks to-bcache can do an in-place conversion.
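For example, assuming the blocks utility is installed and the data has been backed up, the in-place conversion is invoked roughly as:

    # blocks to-bcache /dev/sdXN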

This article or section is out of date.

Reason: Backwards-incompatible changes to the on-disk format for Linux 3.18 were reported to be false; bcache on-disk format changes had not landed upstream yet as of 3.19. Since then, Linux has reached version 5.16. Can the relevant sentence reflect when exactly the breaking changes took place? Is the mention still needed? (Discuss in Talk:Bcache)

Warning:

  • Be sure you back up any important data first.
  • The bcache-dev branch is under heavy development. The on-disk format has undergone changes in 3.18 which are not backwards compatible with previous formats[1]. Note: This only applies to users who compile-in bcache-dev. The version built-in to the upstream Linux kernel is unaffected[2].
  • Bcache and btrfs could leave you with a corrupted filesystem. Please visit this post for more information. Btrfs wiki reports that it was fixed in kernels 3.19+ [3].

Setting up bcached btrfs file systems on an existing system

Warning: make-bcache will not import an existing drive or partition – it will reformat it.

Preparation

Install bcache-tools from the AUR.

Use fdisk to create the appropriate partitions on the SSD’s and hard drives to hold the cache and the backing data.

Tip: It is possible to create many partitions on a single drive. This allows for testing of elaborate setups before committing. Be aware all data will be lost when the drive fails. This will also kill performance of the drive, due to unfavorable access patterns.

Situation: 1 hard drive and 1 read cache SSD

Warning:

  • When a single hard drive fails, all data is lost.
  • Do not enable write caching, as that can cause data loss when the SSD fails
+--------------+
| btrfs /mnt   |
+--------------+
| /dev/Bcache0 |
+--------------+
| Cache        |
| /dev/sdk1    |
+--------------+
| Data         |
| /dev/sdv1    |
+--------------+

1. Format the backing device (This will typically be your mechanical drive). The backing device can be a whole device, a partition or any other standard block device. This will create /dev/bcache0

# make-bcache -B /dev/sdv1

2. Format the cache device (This will typically be your SSD). The cache device can be a whole device, a partition or any other standard block device

# make-bcache -C /dev/sdk1

In this example the default block and bucket sizes of 512B and 128kB are used. The block size should match the backing device's sector size, which will usually be either 512 or 4k. The bucket size should match the erase block size of the caching device, with the intent of reducing write amplification. For example, using a HDD with 4k sectors and an SSD with an erase block size of 2MB, this command would look like

# make-bcache --block 4k --bucket 2M -C /dev/sdk1

3. Get the uuid of the cache device

# bcache-super-show /dev/sdk1 | grep cset
cset.uuid		f0e01318-f4fd-4fab-abbb-d76d870503ec

4. Register the cache device against your backing device. Replace the example uuid with the uuid of your cache. Udev rules will take care of this on reboot, so it only needs to be done once.

# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache0/bcache/attach

5. Create the btrfs filesystem.

# mkfs.btrfs /dev/bcache0

6. mount the filesystem

# mount /dev/bcache0 /mnt

7. If you want to have this partition available during the initcpio (i.e. you require it at some point in the boot process), you need to add the 'bcache' module to the modules array in /etc/mkinitcpio.conf, as well as adding the 'bcache' hook between the block and filesystems hooks. You must then regenerate the initramfs, for example as sketched below.
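
A minimal sketch of the relevant /etc/mkinitcpio.conf lines (your existing MODULES and HOOKS arrays will contain other entries; the only point here is that the bcache module is listed and the bcache hook sits between block and filesystems), followed by regenerating the initramfs:

MODULES=(bcache)
HOOKS=(base udev autodetect modconf block bcache filesystems keyboard fsck)

# mkinitcpio -p linux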

Situation: 4 hard drives and 1 read cache SSD

Warning:

  • Do not enable write caching, as that can cause data loss when the SSD fails
+-----------------------------------------------------------+
|                         btrfs /mnt                        |
+--------------+--------------+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 | /dev/Bcache2 | /dev/Bcache3 |
+--------------+--------------+--------------+--------------+
|                           Cache                           |  
|                         /dev/sdk1                         |
+--------------+--------------+--------------+--------------+
| Data         | Data         | Data         | Data         |
| /dev/sdv1    | /dev/sdw1    | /dev/sdx1    | /dev/sdy1    |
+--------------+--------------+--------------+--------------+

1. Format the backing devices (These will typically be your mechanical drives). The backing devices can be whole devices, partitions or any other standard block devices. This will create /dev/bcache0, /dev/bcache1, /dev/bcache2 and /dev/bcache3

# make-bcache -B /dev/sdv1
# make-bcache -B /dev/sdw1
# make-bcache -B /dev/sdx1
# make-bcache -B /dev/sdy1

2. Format the cache device (This will typically be your SSD). The cache device can be a whole device, a partition or any other standard block device. Only one cache device can be added to a group of backing devices.

# make-bcache -C /dev/sdk1

3. Get the uuid of the cache device

# bcache-super-show /dev/sdk1 | grep cset
cset.uuid		f0e01318-f4fd-4fab-abbb-d76d870503ec

4. Register the cache device against your backing devices. Replace the example uuid with the uuid of your cache.

# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache0/bcache/attach
# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache1/bcache/attach
# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache2/bcache/attach
# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache3/bcache/attach

5. Create the btrfs filesystem. Both the data and the metadata are stored twice in the array, so there will be no data loss when a single hard drive fails. The -L argument defines the label of the filesystem.

# mkfs.btrfs -L STORAGE -f -d raid1 -m raid1 /dev/bcache0 /dev/bcache1 /dev/bcache2 /dev/bcache3 

6. mount the filesystem

# mount /dev/bcache0 /mnt

Situation: 3 hard drives and 3 read/write cache SSDs

Warning:

  • Each HDD needs its own SSD, to avoid data loss if a SSD in writeback mode fails.
+--------------------------------------------+
|                  btrfs /mnt                |
+--------------+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 | /dev/Bcache2 |
+--------------+--------------+--------------+
| Cache        | Cache        | Cache        |  
| /dev/sdk1    | /dev/sdl1    | /dev/sdm1    |
+--------------+--------------+--------------+
| Data         | Data         | Data         |
| /dev/sdv1    | /dev/sdw1    | /dev/sdx1    |
+--------------+--------------+--------------+

1. Format the backing devices (These will typically be your mechanical drives). The backing devices can be whole devices, partitions or any other standard block devices. This will create /dev/bcache0, /dev/bcache1 and /dev/bcache2.

# make-bcache -B /dev/sdv1
# make-bcache -B /dev/sdw1
# make-bcache -B /dev/sdx1

2. Format the cache devices (These will typically be your SSDs). The cache devices can be whole devices, partitions or any other standard block devices. To avoid data loss in case of a failing SSD, each backing device needs its own SSD if it is in writeback mode. Cache SSDs in writethrough and in writearound mode can be shared by multiple backing devices, as they do not cause data loss when they fail.

# make-bcache -C /dev/sdk1
# make-bcache -C /dev/sdl1
# make-bcache -C /dev/sdm1

3. Get the uuid of the cache devices

# bcache-super-show /dev/sdk1 | grep cset
cset.uuid		f0e01318-f4fd-4fab-abbb-d76d870503ec
# bcache-super-show /dev/sdl1 | grep cset
cset.uuid		4b05ce02-19f4-4cc6-8ca0-1f765671ceda
# bcache-super-show /dev/sdm1 | grep cset
cset.uuid		75ff0598-7624-46f6-bcac-c27a3cf1a09f

4. Register the cache devices against your backing devices. Replace the example UUIDs with the UUIDs of your caches.

# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache0/bcache/attach
# echo 4b05ce02-19f4-4cc6-8ca0-1f765671ceda > /sys/block/bcache1/bcache/attach
# echo 75ff0598-7624-46f6-bcac-c27a3cf1a09f > /sys/block/bcache2/bcache/attach

5. enable writeback mode

Warning:

  • Bcache write caching can cause a catastrophic failure of a btrfs filesystem.
  • Btrfs assumes the underlying device executes writes in order, but bcache writeback may violate that assumption, causing the btrfs filesystem using it to collapse.
  • Every layer of write caching adds more risk of losing data in the event of a power loss.
  • Use bcache in writeback mode with btrfs at your own risk.
# echo writeback > /sys/block/bcache0/bcache/cache_mode
# echo writeback > /sys/block/bcache1/bcache/cache_mode
# echo writeback > /sys/block/bcache2/bcache/cache_mode

6. Create the btrfs filesystem. Both the data and the metadata are stored twice in the array, so there will be no data loss when a single hard drive fails. The -L argument defines the label of the filesystem.

# mkfs.btrfs -L STORAGE -f -d raid1 -m raid1 /dev/bcache0 /dev/bcache1 /dev/bcache2

7. mount the filesystem

# mount /dev/bcache0 /mnt

Situation: 5 hard drives and 3 cache SSDs

Warning:

  • Each cache device in writeback mode must only be used to cache a single backing drive, to avoid data loss if that SSD fails. Writethrough and writearound SSDs can be shared.
+--------------------------------------------------------------------------+
|                                btrfs /mnt                                |
+--------------+--------------+--------------+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 | /dev/Bcache2 | /dev/Bcache3 | /dev/Bcache4 |
+--------------+--------------+--------------+--------------+--------------+
| WriteB Cache |     Writethrough or writearound Cache      | WriteB Cache |  
| /dev/sdk1    |                 /dev/sdl1                  | /dev/sdm1    |
+--------------+--------------+--------------+--------------+--------------+
| Data         | Data         | Data         | Data         | Data         |
| /dev/sdv1    | /dev/sdw1    | /dev/sdx1    | /dev/sdy1    | /dev/sdz1    |
+--------------+--------------+--------------+--------------+--------------+

1. Format the backing devices (These will typically be your mechanical drives). The backing devices can be whole devices, partitions or any other standard block devices. This will create /dev/bcache0, /dev/bcache1, /dev/bcache2, /dev/bcache3 and /dev/bcache4.

# make-bcache -B /dev/sdv1
# make-bcache -B /dev/sdw1
# make-bcache -B /dev/sdx1
# make-bcache -B /dev/sdy1
# make-bcache -B /dev/sdz1

2. Format the cache devices (These will typically be your SSDs). The cache devices can be whole devices, partitions or any other standard block devices. To avoid data loss in case of a failing SSD, each backing device needs its own SSD if it is in writeback mode. Cache SSDs in writethrough and in writearound mode can be shared by multiple backing devices, as they do not cause data loss when they fail.

# make-bcache -C /dev/sdk1
# make-bcache -C /dev/sdl1
# make-bcache -C /dev/sdm1

3. Get the uuid of the cache devices

# bcache-super-show /dev/sdk1 | grep cset
cset.uuid		f0e01318-f4fd-4fab-abbb-d76d870503ec
# bcache-super-show /dev/sdl1 | grep cset
cset.uuid		4b05ce02-19f4-4cc6-8ca0-1f765671ceda
# bcache-super-show /dev/sdm1 | grep cset
cset.uuid		75ff0598-7624-46f6-bcac-c27a3cf1a09f

4. Register the cache devices against your backing devices. Replace the example UUIDs with the UUIDs of your caches.

# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache0/bcache/attach
# echo 4b05ce02-19f4-4cc6-8ca0-1f765671ceda > /sys/block/bcache1/bcache/attach
# echo 4b05ce02-19f4-4cc6-8ca0-1f765671ceda > /sys/block/bcache2/bcache/attach
# echo 4b05ce02-19f4-4cc6-8ca0-1f765671ceda > /sys/block/bcache3/bcache/attach
# echo 75ff0598-7624-46f6-bcac-c27a3cf1a09f > /sys/block/bcache4/bcache/attach

5. enable writeback mode on non-shared caches

# echo writeback > /sys/block/bcache0/bcache/cache_mode
# echo writeback > /sys/block/bcache4/bcache/cache_mode

6. Create the btrfs filesystem. Both the data and the metadata are stored twice in the array, so there will be no data loss when a single hard drive fails. The -L argument defines the label of the filesystem.

# mkfs.btrfs -L STORAGE -f -d raid1 -m raid1 /dev/bcache0 /dev/bcache1 /dev/bcache2 /dev/bcache3 /dev/bcache4

7. mount the filesystem

# mount /dev/bcache0 /mnt

Bcache management

1. Check that everything has been correctly setup

# cat /sys/block/bcache0/bcache/state

The output can be:

  • no cache: this means you have not attached a caching device to your backing bcache device
  • clean: this means everything is ok. The cache is clean.
  • dirty: this means everything is setup fine and that you have enabled writeback and that the cache is dirty.
  • inconsistent: you are in trouble because the backing device is not in sync with the caching device

You can have a /dev/bcache0 device associated with a backing device with no caching device attached. This means that all I/O (read/write) are passed directly to the backing device (pass-through mode)
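
A quick way to check this for every bcache device at once is a one-liner over the same sysfs files (a small convenience sketch, nothing more):

# for f in /sys/block/bcache*/bcache/state; do echo "$f: $(cat "$f")"; done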

2. See what caching mode is in use

# cat /sys/block/bcache0/bcache/cache_mode
[writethrough] writeback writearound none

In the above example, the writethrough mode is enabled.

3. Show info about a bcached device:

# bcache-super-show /dev/sdXY

4. Stop the backing device:

# echo 1 > /sys/block/sdX/sdX[Y]/bcache/stop

5. Detach a caching device:

# echo 1 > /sys/block/sdX/sdX[Y]/bcache/detach

6. Safely remove the cache device

# echo cache-set-uuid > /sys/block/bcache0/bcache/detach

7. Release attached devices

# echo 1 > /sys/fs/bcache/cache-set-uuid/stop

Installation to a bcache device

1. Boot on the install disk (2013.08.01 minimum).

2. Install bcache-tools from the AUR.

3. Partition your HDD

Note: While GRUB does not offer support for bcache, as noted below, it does fully support UEFI. It follows that, so long as the modules the Linux kernel needs to properly handle your boot device are either compiled into the kernel or included in an initramfs, and those files can be placed on the EFI system partition, the separate boot partition described below may be omitted in favor of the FAT EFI system partition. See GRUB and/or UEFI for more.

GRUB cannot handle bcache, so you will need at least two partitions (a boot partition and one for the bcache backing device). If you are doing UEFI, you will need an EFI system partition (ESP) as well. E.g.:

     1            2048           22527   10.0 MiB    EF00  EFI System
     2           22528          432127   200.0 MiB   8300  arch_boot
     3          432128       625142414   297.9 GiB   8300  bcache_backing

Note: This example has no swapfile/partition. For a swap partition on the cache, use LVM in step 7. For a swap partition outside the cache, be sure to make a swap partition now.

4. Configure your HDD as a bcache backing device.

# make-bcache -B /dev/sda3

Note:

  • When preparing any boot disk it is important to know the ramifications of any decision you may make. Please review and review again the documentation for your chosen boot-loader/-manager and consider seriously how it might relate to bcache.
  • If all associated disks are partitioned at once, as below, bcache will automatically attach the "-B backing stores" to the "-C ssd cache" and step 5 is unnecessary.
# make-bcache -B /dev/sd? /dev/sd? -C /dev/sd?

You now have a /dev/bcache0 device.

5. Configure your SSD

Format the SSD as a caching device and link it to the backing device

# make-bcache -C /dev/sdb
# echo /dev/sdb > /sys/fs/bcache/register 
# echo UUID__from_previous_command > /sys/block/bcache0/bcache/attach

Note: If the UUID is forgotten, it can be found with ls /sys/fs/bcache/ after the cache device has been registered.

6. Format the bcache device. Use LVM or btrfs subvolumes if you want to divide up the /dev/bcache0 device how you like (e.g. for separate /, /home, /var, etc.); an LVM alternative is sketched after the btrfs example below:

# mkfs.btrfs /dev/bcache0
# mount /dev/bcache0 /mnt/
# btrfs subvolume create /mnt/root
# btrfs subvolume create /mnt/home
# umount /mnt
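
If you prefer LVM, a minimal sketch of carving /dev/bcache0 into separate volumes could look like the following (the volume group name, logical volume names and sizes are purely illustrative; add the lvm2 hook to mkinitcpio if your root ends up on LVM):

# pvcreate /dev/bcache0
# vgcreate vg0 /dev/bcache0
# lvcreate -L 30G -n root vg0
# lvcreate -l 100%FREE -n home vg0
# mkfs.ext4 /dev/vg0/root
# mkfs.ext4 /dev/vg0/home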

You can even set up LUKS on it if you want, using e.g. cryptsetup. Referencing the bcache device in the 'cryptdevice' kernel option will work fine, for instance (a sketch follows).
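
A hedged sketch of that layering (the mapper name 'cryptroot' is illustrative; the encrypt hook must also be added to mkinitcpio, after the bcache hook):

# cryptsetup luksFormat /dev/bcache0
# cryptsetup open /dev/bcache0 cryptroot
# mkfs.btrfs /dev/mapper/cryptroot

The matching kernel command line would then contain something along the lines of cryptdevice=/dev/bcache0:cryptroot root=/dev/mapper/cryptroot.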

7. Prepare the installation mount point:

# mkfs.ext4 /dev/sda2
# mkfs.msdos /dev/sda1 (if your ESP is at least 500MB, use mkfs.vfat to make a FAT32 partition instead)

Now install the arch-install-scripts package. Then:

# mount /dev/bcache0 -o subvol=root,compress=lzo /mnt/
# mount --mkdir /dev/bcache0 -o subvol=home,compress=lzo /mnt/home
# mount --mkdir /dev/sda2 /mnt/boot
# mount --mkdir /dev/sda1 /mnt/efi

8. Install the system as per the Installation guide as normal except this:

Before you edit /etc/mkinitcpio.conf and run mkinitcpio -p linux:

  • Install bcache-tools from the AUR.
  • Edit /etc/mkinitcpio.conf:
    • add the «bcache» module
    • add the «bcache» hook between block and filesystem hooks

Note: Should you want to open the backing device from the installation media for any reason after a reboot, you must register it manually. Make sure the bcache module is loaded and then echo the relevant devices to /sys/fs/bcache/register. You can check whether this worked by using dmesg.

Accessing from the install disk

This is how to access, from the install disk, a bcache partition that already existed before the install disk was booted. Boot the install disk and install bcache-tools from the AUR, just as in the previous section. Then, add the module to the kernel:

# modprobe bcache

Your device will not appear immediately at /dev/bcache*. To force the kernel to find it, tell it to reread the partition table:

# partprobe

Now, /dev/bcache* should be present, and you can carry on mounting, reformatting, etc. from here.

To start the cache without having to configure networking and install bcache-tools from the AUR, load the kernel module just as before (it is included in the mainline kernel). Then start the cache by registering all of the slave devices:

# echo /dev/sdX > /sys/fs/bcache/register
# echo /dev/sdY > /sys/fs/bcache/register
# ...

The bcache device will appear right after the last required slave device is registered.

A writethrough backing device can be started without having to register any caches. This can be done if there are a lot of them and you are in a hurry, or if some of the caches are inaccessible for some reason. Register the device, as above, then start it:

# echo 1 > /sys/block/sdX/bcache/running

Bcache has not actually detached any caches, and will still add any cache devices if they are registered. This command will "work" on writeback backing devices, but there will be massive data corruption. Only do this if the missing cache is totally unrecoverable.

Configuring

Warning: Do not enable the discard option! It can cause unrecoverable corruption. [4][5]

There are many options that can be configured (such as cache mode, cache flush interval, sequential write heuristic, etc.) This is currently done by writing to files in /sys. See the bcache user documentation.

Changing the cache mode is done by echoing one of writethrough, writeback, writearound or none to /sys/block/bcache[0-9]/bcache/cache_mode.

Note that some changes to /sys are temporary and will revert after a reboot (it seems that at least cache_mode does not need this workaround). To apply custom settings at boot, create a .conf file in /etc/tmpfiles.d. For example, to persistently set the sequential cutoff for bcache0 to 1 MB and the cache mode to writeback, you could create a file /etc/tmpfiles.d/my-bcache.conf with the contents:

w /sys/block/bcache0/bcache/sequential_cutoff - - - - 1M
w /sys/block/bcache0/bcache/cache_mode        - - - - writeback

Advanced operations

Resize backing device

It is possible to resize the backing device so long as you do not move the partition start. This process is described in the mailing list. Here is an example using a btrfs volume directly on bcache0. For LVM containers or other filesystems, the procedure will differ.

Example of growing

In this example, I grow the filesystem by 4GB.

1. Reboot to a live CD/USB Drive (need not be bcache enabled) and use fdisk, gdisk, parted, or your other favorite tool to delete the backing partition and recreate it with the same start and a total size 4G larger.

Warning: Do not use a tool like GParted that might perform filesystem operations! It will not recognize the bcache partition and might overwrite part of it!!

2. Reboot to your normal install. Your filesystem will currently be mounted; that is fine. Issue the command to resize the filesystem to its maximum. For btrfs, that is

# btrfs filesystem resize max /

For ext3/4, that is:

# resize2fs /dev/bcache0

Example of shrinking

In this example, I shrink the filesystem by 4GB.

1. Disable writeback cache (switch to writethrough cache) and wait for the disk to flush.

# echo writethrough > /sys/block/bcache0/bcache/cache_mode
$ watch cat /sys/block/bcache0/bcache/state

Wait until the state reports "clean". This might take a while.

Force flush of cache to backing device

To force a flush of the dirty cache data to the backing device, I suggest using

 # echo 0 > /sys/block/bcache0/bcache/writeback_percent

This will flush the dirty data of the cache to the backing device in less than a minute.

Afterwards, revert the value with

# echo 10 > /sys/block/bcache0/bcache/writeback_percent

2. Shrink the mounted filesystem by something more than the desired amount, to ensure we do not accidentally clip it later. For btrfs, that is:

# btrfs filesystem resize -5G /

For ext3/4 you can use resize2fs, but only if the filesystem is unmounted:

$ df -h /home
/dev/bcache0    290G   20G   270G   1% /home
# umount /home
# resize2fs /dev/bcache0 283G

3. Reboot to a LiveCD/USB drive (does not need to support bcache) and use fdisk, gdisk, parted, or your other favorite tool to delete the backing partition and recreate it with the same start and a total size 4G smaller.

Warning: Do not use a tool like GParted that might perform filesystem operations! It will not recognize the bcache partition and might overwrite part of it!!

4. Reboot to your normal install. Your filesystem will currently be mounted; that is fine. Issue the command to resize the filesystem to its maximum (that is, the size we shrank the actual partition to in step 3). For btrfs, that is:

# btrfs filesystem resize max /

For ext3/4, that is:

# resize2fs /dev/bcache0

5. Re-enable writeback cache if you want that enabled:

# echo writeback > /sys/block/bcache0/bcache/cache_mode

Note: If you are very careful you can shrink the filesystem to the exact size in step 2 and avoid step 4. Be careful, though, many partition tools do not do exactly what you want, but instead adjust the requested partition start/end points to end on sector boundaries. This may be difficult to calculate ahead of time

Troubleshooting

/dev/bcache device does not exist on bootup

If you are sent to a busy box shell with an error:

ERROR: Unable to find root device 'UUID=b6b2d82b-f87e-44d5-bbc5-c51dd7aace15'.
You are being dropped to a recovery shell
    Type 'exit' to try and continue booting

This might happen if the backing device is configured for "writeback" mode (the default is writethrough). When in "writeback" mode, the /dev/bcache0 device is not started until the cache device is both registered and attached. Registering is something that needs to happen every bootup, but attaching should only have to be done once.

To continue booting, try one of the following:

  • Register both the backing device and the caching device
# echo /dev/sda3 > /sys/fs/bcache/register
# echo /dev/sdb > /sys/fs/bcache/register

If the /dev/bcache0 device now exists, type exit and continue booting. You will need to fix your initcpio to ensure devices are registered before mounting the root device.

Note:

  • An error of "sh: echo: write error: Invalid argument" means the device was already registered or is not recognized as either a bcache backing device or cache. If using the udev rule on boot, it should only attempt to register a device if it finds a bcache superblock.
  • This can also happen if using udev’s 69-bcache.rules in Installation’s step 7 and blkid and bcache-probe "disagree" due to rogue superblocks. See bcache’s wiki for a possible explanation/resolution.
  • Re-attach the cache to the backing device:

If the cache device was registered, a folder with the UUID of the cache should exist in /sys/fs/bcache. Use that UUID when following the example below:

# ls /sys/fs/bcache/
b6b2d82b-f87e-44d5-bbc5-c51dd7aace15     register     register_quiet
# echo b6b2d82b-f87e-44d5-bbc5-c51dd7aace15 > /sys/block/sda/sda3/bcache/attach

If the /dev/bcache0 device now exists, type exit and continue booting. You should not have to do this again. If it persists, ask on the bcache mailing list.

Note: An error of sh: echo: write error: Invalid argument means the device was already attached. An error of sh: echo: write error: No such file or directory means the UUID is not a valid cache (make sure you typed it correctly).

  • Invalidate the cache and force the backing device to run without it. You might want to check some stats, such as "dirty_data", so you have some idea of how much data will be lost.
# cat /sys/block/sda/sda3/bcache/dirty_data
-3.9M

Dirty data is data in the cache that has not yet been written to the backing device. If you force the backing device to run, this data will be lost, even if you later re-attach the cache.

# cat /sys/block/sda/sda3/bcache/running
0
# echo 1 > /sys/block/sda/sda3/bcache/running

The /dev/bcache0 device will now exist. Type exit and continue booting. You might want to unregister the cache device and run make-bcache again. An fsck on /dev/bcache0 would also be wise. See the bcache documentation.

Warning: Only invalidate the cache if one of the two options above did not work.

/sys/fs/bcache/ does not exist

The kernel you booted is not bcache enabled, or the bcache module is not loaded.

write error: Invalid argument when trying to attach a device due to mismatched block parameter

If you get bash: echo: write error: Invalid argument when trying to attach a device, the actual error is shown by dmesg:

bcache: bch_cached_dev_attach() Couldn't attach sdc: block size less than set's block size

This happens because the --block 4k parameter was not set on either device and defaults can mismatch.

Creating both the backing and caching device in one command automatically solves the issue, but when using separate commands the block size parameter sometimes needs to be set manually on both devices.
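
For example, reusing the device names from earlier in this article and assuming 4k sectors, explicitly passing the same --block value to both commands keeps the block sizes consistent:

# make-bcache --block 4k -B /dev/sdv1
# make-bcache --block 4k -C /dev/sdk1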

See also

  • Bcache Homepage
  • Bcache Manual
