I have Ubuntu 16.04.6 LTS installed.
I previously installed Docker from the Ubuntu repository as the docker.io package.
Yesterday I installed LXC together with LXD, and I suspect they have a coexistence problem with Docker.
LXC works normally:
$ lxc exec ubuntu-test -- su --login ubuntu-test
~ #
Now when I try to start a Docker container with docker run, I get this error:
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
How can I fix this problem with starting Docker containers?
If possible, please suggest a solution that keeps Docker and LXC/LXD installed at the same time.
Some debug info:
$ mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (rw,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids,release_agent=/run/cgmanager/agents/cgm-release-agent.pids)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb)
$ dpkg -l | grep -E "containerd|docker|lxc|lxd|cgroup"
ii cgmanager 0.39-2ubuntu5 amd64 Central cgroup manager daemon
ii cgroup-lite 1.11 all Light-weight package to set up cgroups at system boot
ii containerd 1.2.6-0ubuntu1~16.04.3 amd64 daemon to control runC
ii docker.io 18.09.7-0ubuntu1~16.04.5 amd64 Linux container runtime
ii libcgmanager0:amd64 0.39-2ubuntu5 amd64 Central cgroup manager daemon (client library)
ii liblxc1 2.0.11-0ubuntu1~16.04.3 amd64 Linux Containers userspace tools (library)
ii libpam-cgfs 2.0.8-0ubuntu1~16.04.2 amd64 PAM module for managing cgroups for LXC
ii lxc-common 2.0.11-0ubuntu1~16.04.3 amd64 Linux Containers userspace tools (common tools)
ii lxc-templates 2.0.11-0ubuntu1~16.04.3 amd64 Linux Containers userspace tools (templates)
ii lxc1 2.0.11-0ubuntu1~16.04.3 amd64 Linux Containers userspace tools
ii lxcfs 2.0.8-0ubuntu1~16.04.2 amd64 FUSE based filesystem for LXC
ii lxd 2.0.11-0ubuntu1~16.04.4 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 2.0.11-0ubuntu1~16.04.4 amd64 Container hypervisor based on LXC - client
ii python3-lxc 2.0.11-0ubuntu1~16.04.3 amd64 Linux Containers userspace tools (Python 3.x bindings)
$ systemctl list-units --type service
UNIT LOAD ACTIVE SUB JOB DESCRIPTION
accounts-daemon.service loaded active running Accounts Service
acpid.service loaded active running ACPI event daemon
alsa-restore.service loaded active exited Save/Restore Sound Card State
apparmor.service loaded active exited LSB: AppArmor initialization
apport.service loaded active exited LSB: automatic crash report generation
atd.service loaded active running Deferred execution scheduler
avahi-daemon.service loaded active running Avahi mDNS/DNS-SD Stack
avahi-dnsconfd.service loaded active running Avahi DNS Configuration Daemon
binfmt-support.service loaded active exited Enable support for additional executable binary formats
bluetooth.service loaded active running Bluetooth service
cgmanager.service loaded active running Cgroup management daemon
cgroupfs-mount.service loaded active exited LSB: Set up cgroupfs mounts.
click-system-hooks.service loaded active exited Run Click system-level hooks
colord.service loaded active running Manage, Install and Generate Color Profiles
console-kit-daemon.service loaded active running Console Manager
console-kit-log-system-start.service loaded active exited Console System Startup Logging
console-setup.service loaded active exited Set console font and keymap
containerd.service loaded active running containerd container runtime
cpufrequtils.service loaded active exited LSB: set CPUFreq kernel parameters
cron.service loaded active running Regular background program processing daemon
cups-browsed.service loaded active running Make remote CUPS printers available locally
cups.service loaded active running CUPS Scheduler
dbus.service loaded active running D-Bus System Message Bus
docker.service loaded inactive dead start Docker Application Container Engine
ebtables.service loaded active exited LSB: ebtables ruleset management
getty@tty1.service loaded inactive dead start Getty on tty1
gpm.service loaded active running LSB: gpm sysv init script
gpsd.service loaded active running GPS (Global Positioning System) Daemon
grub-common.service loaded active exited LSB: Record successful boot for GRUB
hddtemp.service loaded inactive dead start LSB: disk temperature monitoring daemon
iio-sensor-proxy.service loaded active running IIO Sensor Proxy service
inetd.service loaded active running Internet superserver
irqbalance.service loaded active running LSB: daemon to balance interrupts for SMP systems
keyboard-setup.service loaded active exited Set console keymap
kmod-static-nodes.service loaded active exited Create list of required static device nodes for the curre
libvirt-bin.service loaded active running Virtualization daemon
libvirt-guests.service loaded active exited Suspend Active Libvirt Guests
lightdm.service loaded active running Light Display Manager
lm-sensors.service loaded active exited Initialize hardware monitoring sensors
loadcpufreq.service loaded active exited LSB: Load kernel modules needed to enable cpufreq scaling
lvm2-lvmetad.service loaded active running LVM2 metadata daemon
lvm2-monitor.service loaded active exited Monitoring of LVM2 mirrors, snapshots etc. using dmeventd
lxc-net.service loaded inactive dead start LXC network bridge setup
lxc.service loaded inactive dead start LXC Container Initialization and Autoboot Code
lxcfs.service loaded active running FUSE filesystem for LXC
lxd-bridge.service loaded active exited LXD - network bridge
lxd-containers.service loaded activating start start LXD - container startup/shutdown
lxd.service loaded inactive dead start LXD - main daemon
mdadm.service loaded active running LSB: MD monitoring daemon
ModemManager.service loaded active running Modem Manager
networking.service loaded active exited Raise network interfaces
NetworkManager-wait-online.service loaded activating start start Network Manager Wait Online
NetworkManager.service loaded active running Network Manager
nfs-config.service loaded active exited Preprocess NFS configuration
nmbd.service loaded inactive dead start LSB: start Samba NetBIOS nameserver (nmbd)
ntp.service loaded inactive dead start LSB: Start NTP daemon
ofono.service loaded active running oFono Mobile telephony stack
ondemand.service loaded active running LSB: Set the CPU Frequency Scaling governor to "ondemand"
openvpn.service loaded active exited OpenVPN service
osspd.service loaded active running OSS Proxy Daemon
plymouth-quit-wait.service loaded inactive dead start Hold until boot process finishes up
polipo.service loaded active running LSB: Start or stop the polipo web cache
polkitd.service loaded active running Authenticate and Authorize Users to Run Privileged Tasks
postgresql.service loaded active exited PostgreSQL RDBMS
qemu-kvm.service loaded active exited LSB: QEMU KVM module loading script
rc-local.service loaded inactive dead start /etc/rc.local Compatibility
resolvconf.service loaded active exited Nameserver information manager
rsyslog.service loaded active running System Logging Service
rtkit-daemon.service loaded active running RealtimeKit Scheduling Policy Service
samba-ad-dc.service loaded inactive dead start LSB: start Samba daemons for the AD DC
schroot.service loaded inactive dead start LSB: Recover schroot sessions.
setvtrgb.service loaded inactive dead start Set console scheme
smartd.service loaded active running Self Monitoring and Reporting Technology (SMART) Daemon
smbd.service loaded inactive dead start LSB: start Samba SMB/CIFS daemon (smbd)
speech-dispatcher.service loaded active exited LSB: Speech Dispatcher
ssh.service loaded active running OpenBSD Secure Shell server
sysstat.service loaded active exited LSB: Start/stop sysstat's sadc
systemd-backlight@backlight:intel_backlight.service loaded active exited Load/Save Screen Backlight Brightness of backlight:intel_
systemd-backlight@leds:asus::kbd_backlight.service loaded active exited Load/Save Screen Backlight Brightness of leds:asus::kbd_b
systemd-fsck@dev-disk-byx2duuid-1207x2d4052.service loaded active exited File System Check on /dev/disk/by-uuid/1207-4052
systemd-fsck@dev-disk-byx2duuid-4a44edd5x2dd396x2d443ex2d9a6ax2d41a81be97246.service loaded active exited File System Check on /dev/disk/by-uuid/4a44edd5-d396-443e
systemd-fsckd.service loaded active running File System Check Daemon to report status
systemd-hostnamed.service loaded active running Hostname Service
systemd-journal-flush.service loaded active exited Flush Journal to Persistent Storage
systemd-journald.service loaded active running Journal Service
systemd-logind.service loaded active running Login Service
systemd-modules-load.service loaded active exited Load Kernel Modules
systemd-random-seed.service loaded active exited Load/Save Random Seed
systemd-remount-fs.service loaded active exited Remount Root and Kernel File Systems
systemd-sysctl.service loaded active exited Apply Kernel Variables
systemd-tmpfiles-setup-dev.service loaded active exited Create Static Device Nodes in /dev
systemd-tmpfiles-setup.service loaded active exited Create Volatile Files and Directories
systemd-udev-trigger.service loaded active exited udev Coldplug all Devices
systemd-udevd.service loaded active running udev Kernel Device Manager
systemd-update-utmp-runlevel.service loaded inactive dead start Update UTMP about System Runlevel Changes
systemd-update-utmp.service loaded active exited Update UTMP about System Boot/Shutdown
systemd-user-sessions.service loaded active exited Permit User Sessions
sysvinit-backlight.service loaded active exited LSB: Save and restore screen and keyboard backlight level
thermald.service loaded active running Thermal Daemon Service
timidity.service loaded active running LSB: start and stop timidity
tor.service loaded active exited Anonymizing overlay network for TCP (multi-instance-maste
tor@default.service loaded active running Anonymizing overlay network for TCP
ubuntu-fan.service loaded inactive dead start Ubuntu FAN network setup
udisks.service loaded active running Disk Manager (legacy version)
udisks2.service loaded active running Disk Manager
ufw.service loaded active exited Uncomplicated firewall
upower.service loaded active running Daemon for power management
user@1000.service loaded active running User Manager for UID 1000
user@104.service loaded active running User Manager for UID 104
vboxautostart-service.service loaded active exited vboxautostart-service.service
vboxballoonctrl-service.service loaded active exited vboxballoonctrl-service.service
vboxdrv.service loaded active exited VirtualBox Linux kernel module
vboxweb-service.service loaded active running vboxweb-service.service
whoopsie.service loaded inactive dead start crash report submission daemon
winbind.service loaded inactive dead start LSB: start Winbind daemon
wpa_supplicant.service loaded active running WPA supplicant
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
JOB = Pending job for the unit.
116 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
Update: after removing all images and containers and adding -D (debug logging) to the Docker service, the error persists. Here is the docker run output followed by the daemon debug log:
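The post doesn't show how -D was enabled; as an assumption, one common way to get the equivalent daemon debug logging on a systemd-based Ubuntu install is via /etc/docker/daemon.json:
# /etc/docker/daemon.json (add or merge this key; assumed path, adjust to your setup)
{ "debug": true }
# then restart the daemon and follow its log
sudo systemctl restart docker
journalctl -u docker -f    # or tail -f /var/log/syslog, as below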
$ docker run -it ubuntu:18.04
Unable to find image 'ubuntu:18.04' locally
18.04: Pulling from library/ubuntu
35c102085707: Pull complete
251f5509d51d: Pull complete
8e829fe70a46: Pull complete
6001e1789921: Pull complete
Digest: sha256:d1d454df0f579c6be4d8161d227462d69e163a8ff9d20a847533989cf0c94d90
Status: Downloaded newer image for ubuntu:18.04
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
$ tail -f /var/log/syslog | grep docker
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.235977407+03:00" level=debug msg="Calling GET /_ping"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.239220453+03:00" level=debug msg="Calling POST /v1.39/containers/create"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.239802159+03:00" level=debug msg="form data: {"AttachStderr":true,"AttachStdin":true,"AttachStdout":true,"Cmd":null,"Domainname":"","Entrypoint":null,"Env":[],"HostConfig":{"AutoRemove":false,"Binds":null,"BlkioDeviceReadBps":null,"BlkioDeviceReadIOps":null,"BlkioDeviceWriteBps":null,"BlkioDeviceWriteIOps":null,"BlkioWeight":0,"BlkioWeightDevice":[],"CapAdd":null,"CapDrop":null,"Cgroup":"","CgroupParent":"","ConsoleSize":[0,0],"ContainerIDFile":"","CpuCount":0,"CpuPercent":0,"CpuPeriod":0,"CpuQuota":0,"CpuRealtimePeriod":0,"CpuRealtimeRuntime":0,"CpuShares":0,"CpusetCpus":"","CpusetMems":"","DeviceCgroupRules":null,"Devices":[],"DiskQuota":0,"Dns":[],"DnsOptions":[],"DnsSearch":[],"ExtraHosts":null,"GroupAdd":null,"IOMaximumBandwidth":0,"IOMaximumIOps":0,"IpcMode":"","Isolation":"","KernelMemory":0,"Links":null,"LogConfig":{"Config":{},"Type":""},"MaskedPaths":null,"Memory":0,"MemoryReservation":0,"MemorySwap":0,"MemorySwappiness":-1,"NanoCpus":0,"NetworkMode":"default","OomKillDisable":false,"OomScoreAdj":0,"PidMode":"","PidsLimit":0,"PortBindings":{},"Privileged":false,"PublishAllPorts":false,"ReadonlyPaths":null,"ReadonlyRootfs":false,"RestartPolicy":{"MaximumRetryCount":0,"Name":"no"},"SecurityOpt":null,"ShmSize":0,"UTSMode":"","Ulimits":null,"UsernsMode":"","VolumeDriver":"","VolumesFrom":null},"Hostname":"","Image":"ubuntu:18.04","Labels":{},"NetworkingConfig":{"EndpointsConfig":{}},"OnBuild":null,"OpenStdin":true,"StdinOnce":true,"Tty":true,"User":"","Volumes":{},"WorkingDir":""}"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.276281207+03:00" level=debug msg="container mounted via layerStore: &{/var/lib/docker/overlay2/60b7962391f9c3670d264b3d8a4982bbebe01cf9283220395c2ca812747a40eb/merged 0x55bf84d46900 0x55bf84d46900}"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.303132923+03:00" level=debug msg="Calling POST /v1.39/containers/5d737b86472cdeecc20de8e6fa3f86f71bd7e53c3e59dcc68da0c911bcade3b9/attach?stderr=1&stdin=1&stdout=1&stream=1"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.303320604+03:00" level=debug msg="attach: stdin: begin"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.303347334+03:00" level=debug msg="attach: stdout: begin"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.303367047+03:00" level=debug msg="attach: stderr: begin"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.303750204+03:00" level=debug msg="Calling POST /v1.39/containers/5d737b86472cdeecc20de8e6fa3f86f71bd7e53c3e59dcc68da0c911bcade3b9/wait?condition=next-exit"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.304842547+03:00" level=debug msg="Calling POST /v1.39/containers/5d737b86472cdeecc20de8e6fa3f86f71bd7e53c3e59dcc68da0c911bcade3b9/start"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.306444389+03:00" level=debug msg="container mounted via layerStore: &{/var/lib/docker/overlay2/60b7962391f9c3670d264b3d8a4982bbebe01cf9283220395c2ca812747a40eb/merged 0x55bf84d46900 0x55bf84d46900}"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.307074242+03:00" level=debug msg="Assigning addresses for endpoint elated_franklin's interface on network bridge"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.307378411+03:00" level=debug msg="RequestAddress(LocalDefault/172.17.0.0/16, <nil>, map[])"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.307682670+03:00" level=debug msg="Request address PoolID:172.17.0.0/16 App: ipam/default/data, ID: LocalDefault/172.17.0.0/16, DBIndex: 0x0, Bits: 65536, Unselected: 65533, Sequence: (0xc0000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:3 Serial:false PrefAddress:<nil> "
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.333117713+03:00" level=debug msg="Assigning addresses for endpoint elated_franklin's interface on network bridge"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.352988148+03:00" level=debug msg="Programming external connectivity on endpoint elated_franklin (e6aaedb79f9b4df830da55a224ef60162d48952451294c856efcacd1b4d8f2ef)"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.359846164+03:00" level=debug msg="EnableService 5d737b86472cdeecc20de8e6fa3f86f71bd7e53c3e59dcc68da0c911bcade3b9 START"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.359866210+03:00" level=debug msg="EnableService 5d737b86472cdeecc20de8e6fa3f86f71bd7e53c3e59dcc68da0c911bcade3b9 DONE"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.366442322+03:00" level=debug msg="bundle dir created" bundle=/var/run/docker/containerd/5d737b86472cdeecc20de8e6fa3f86f71bd7e53c3e59dcc68da0c911bcade3b9 module=libcontainerd namespace=moby root=/var/lib/docker/overlay2/60b7962391f9c3670d264b3d8a4982bbebe01cf9283220395c2ca812747a40eb/merged
Sep 7 20:57:34 norbert-UX32A NetworkManager[1224]: <info> [1567879054.5349] device (docker0): link connected
Sep 7 20:57:34 norbert-UX32A kernel: [43763.811967] docker0: port 1(vethca4e1e2) entered forwarding state
Sep 7 20:57:34 norbert-UX32A kernel: [43763.811996] docker0: port 1(vethca4e1e2) entered forwarding state
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.541099717+03:00" level=debug msg="sandbox set key processing took 82.724835ms for container 5d737b86472cdeecc20de8e6fa3f86f71bd7e53c3e59dcc68da0c911bcade3b9"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.831828501+03:00" level=debug msg="attach: stdout: end"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.831830570+03:00" level=debug msg="attach: stderr: end"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.831871441+03:00" level=debug msg="attach: stdin: end"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.831893135+03:00" level=debug msg="attach done"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.831942207+03:00" level=debug msg="Closing buffered stdin pipe"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.848047063+03:00" level=debug msg="Revoking external connectivity on endpoint elated_franklin (e6aaedb79f9b4df830da55a224ef60162d48952451294c856efcacd1b4d8f2ef)"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.849567170+03:00" level=debug msg="DeleteConntrackEntries purged ipv4:0, ipv6:0"
Sep 7 20:57:34 norbert-UX32A kernel: [43764.133221] docker0: port 1(vethca4e1e2) entered disabled state
Sep 7 20:57:34 norbert-UX32A NetworkManager[1224]: <info> [1567879054.8855] device (docker0): link disconnected (deferring action for 4 seconds)
Sep 7 20:57:34 norbert-UX32A kernel: [43764.172680] docker0: port 1(vethca4e1e2) entered disabled state
Sep 7 20:57:34 norbert-UX32A kernel: [43764.176152] docker0: port 1(vethca4e1e2) entered disabled state
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.948202159+03:00" level=debug msg="Releasing addresses for endpoint elated_franklin's interface on network bridge"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.948233303+03:00" level=debug msg="ReleaseAddress(LocalDefault/172.17.0.0/16, 172.17.0.2)"
Sep 7 20:57:34 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:34.948272022+03:00" level=debug msg="Released address PoolID:LocalDefault/172.17.0.0/16, Address:172.17.0.2 Sequence:App: ipam/default/data, ID: LocalDefault/172.17.0.0/16, DBIndex: 0x0, Bits: 65536, Unselected: 65532, Sequence: (0xe0000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:3"
Sep 7 20:57:35 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:35.004894384+03:00" level=error msg="5d737b86472cdeecc20de8e6fa3f86f71bd7e53c3e59dcc68da0c911bcade3b9 cleanup: failed to delete container from containerd: no such container"
Sep 7 20:57:35 norbert-UX32A dockerd[29727]: time="2019-09-07T20:57:35.004928780+03:00" level=error msg="Handler for POST /v1.39/containers/5d737b86472cdeecc20de8e6fa3f86f71bd7e53c3e59dcc68da0c911bcade3b9/start returned error: cgroups: cannot find cgroup mount destination: unknown"
Sep 7 20:57:39 norbert-UX32A NetworkManager[1224]: <info> [1567879059.5208] device (docker0): link disconnected (calling deferred action)
Open a terminal and paste the commands shown below. They fix the "cgroup mountpoint does not exist" error, which is related to the Docker cgroup mountpoint; afterwards you can run the Docker client as usual.
The error “Cgroup mountpoint does not exist” typically indicates that the cgroups (control groups) kernel feature is not enabled or configured properly on your system. Cgroups are used to limit, prioritize, and distribute system resources such as CPU, memory, and I/O bandwidth among different groups of processes.
To fix this error, you may need to enable and configure cgroups on your system. This can typically be done by modifying the kernel boot parameters or by installing and configuring a cgroup management tool such as systemd or cgmanager.
It is also possible that the error is being caused by a problem with the cgroup mount point itself, such as a missing or misconfigured mount point. In this case, you may need to check the configuration of your cgroup mount points and make sure that they are set up correctly.
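A quick way to inspect the current cgroup configuration (generic commands, not specific to this post) is to list the controllers the kernel knows about and the hierarchies that are actually mounted:
# controllers known to the kernel
cat /proc/cgroups
# cgroup hierarchies currently mounted
mount -t cgroup
mount -t cgroup2
# where the current shell is placed
cat /proc/self/cgroup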
If you are unable to resolve the error on your own, you may want to seek assistance from a system administrator or someone with expertise in cgroups and Linux systems.
Cgroups can be used for a variety of purposes, including:
- Resource management: Cgroups can be used to ensure that certain groups of processes have access to the resources they need to run, while also preventing other processes from consuming too many resources.
- Performance optimization: Cgroups can be used to prioritize certain groups of processes or to limit the resources available to certain groups in order to optimize the overall performance of the system.
- Containerization: Cgroups can be used as part of a containerization solution, such as Docker, to isolate and manage the resources of individual containers.
To use cgroups, you will need to enable and configure them on your Linux system. This can typically be done by modifying the kernel boot parameters or by installing and configuring a cgroup management tool such as systemd or cgmanager. Once cgroups are configured, you can create and manage cgroups using command-line tools or through a cgroup management tool.
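As a minimal illustration of the filesystem interface (assuming a cgroup v1 layout with the memory controller mounted at /sys/fs/cgroup/memory, as on this Ubuntu 16.04 system; the "demo" name is just an example):
# create a new cgroup under the memory controller
sudo mkdir /sys/fs/cgroup/memory/demo
# limit it to 100 MB of memory
echo $((100*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# move the current shell into the cgroup
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs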
In this case, all you need to do is create the systemd cgroup directory and then mount the named systemd cgroup hierarchy onto it. To do that, copy and paste the following two commands:
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
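Before retrying Docker, you can verify that the mount took effect (a generic check, not part of the original fix):
mountpoint /sys/fs/cgroup/systemd
mount | grep name=systemd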
Then check whether Docker works as expected:
# Create a container based on your image
docker run <your_docker_image>
Please let me know in the comments whether this helped you resolve the "cgroup mountpoint does not exist" error with your Docker client.
Cgroups and Docker
Control groups, referred to as cgroups in this tutorial, are a Linux kernel feature. With cgroups you can distribute system resources, such as CPU time, system memory, and network bandwidth, among user-defined groups of tasks running on the system. You can monitor the cgroups you set up, deny them access to particular resources, and even reconfigure them dynamically on a running system. By configuring the cgconfig (control group config) service to start at boot and restore your previously defined cgroups, you can make them persistent across reboots.
Using cgroups, system administrators gain fine-grained control to allocate, prioritize, deny, manage, and monitor system resources. Distributing hardware resources properly among users and jobs can raise overall efficiency.
Use Our Postgres Image To Test
You can use the Postgres image that I created for my course. I strongly encourage you to take this free course! 🙂
SQL: Complete Online Training – Queries For Practice
Docker Client
# Command
docker run -d --rm -p 5432:5432 -e POSTGRES_PASSWORD=postgres -e POSTGRES_USER=postgres --name bigdataetl-postgres-sql-course bigdataetl/postgres-sql-course:latest
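To confirm the container came up (a generic check, not from the original post), you can inspect it with:
docker ps --filter name=bigdataetl-postgres-sql-course
docker logs bigdataetl-postgres-sql-course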
Rizzen59
Ubuntu 18.04 — LXC — With Docker — Cannot find cgroup mount destination: unknown.
Hi,
I’m trying to install Docker in an LXC Ubuntu container, but when I try to run a container I get this error:
$ sudo docker run hello-world
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
ERRO[0002] error waiting for container: context canceled
Do you have any idea?
QNAP:
version: TS-431X
firmware: 4.4.2.1270
Code:
# uname -a
Linux Odin 4.2.8 #2 SMP Fri Apr 10 10:07:09 CST 2020 armv7l unknown
Re: Ubuntu 18.04 — LXC — With Docker — Cannot find cgroup mount destination: unknown.
Post by Trexx » Tue May 05, 2020 10:42 pm
If you are looking for QNAP help, you would need to open a helpdesk ticket.
As for Docker: if it is supported on your model, all you need to do is install Container Station to get Docker support.
If that isn’t available for your model, trying to do an end run around it likely won’t work either.
Re: Ubuntu 18.04 — LXC — With Docker — Cannot find cgroup mount destination: unknown.
Post by MadMaxster » Tue Sep 22, 2020 2:15 am
I’m having the exact same problem.
I’m trying to get lancache.net (which uses Docker) up and running.
I have tried to run Docker from LXC Ubuntu 18.04 with no success, but the exact same docker run commands work without issue on Ubuntu 18.04 in Virtualization Station.
Somehow a difference between the LXC container and Virtualization Station is affecting Docker.
I'm not sure where the problem lies. I would appreciate it if anyone who has Docker up and running on LXC Ubuntu 18.04 could suggest steps or point out where we may be going wrong.
Re: Ubuntu 18.04 — LXC — With Docker — Cannot find cgroup mount destination: unknown.
Post by MadMaxster » Wed Sep 23, 2020 2:59 am
In the hope that this helps others: I got around this problem by using an older version of docker-ce instead of docker.io.
This allowed me to get past the cgroup problem.
Install an older version of docker-ce due to the cgroups bug (as of Sept 2020):
Set up the repositories (the old version that works is under xenial, even though our Ubuntu build is bionic) and remove old versions:
Code:
sudo apt-get remove docker docker-engine docker.io docker-ce
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"
To list the versions available:
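The post doesn't include the listing command itself; the usual way to query the versions published in the Docker repository would be:
apt-cache madison docker-ce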
Install the older Docker version:
Code:
sudo apt-get install docker-ce=17.09.1~ce-0~ubuntu
Allow Docker to be run as $USER (log out and back in for it to take effect — check via groups):
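The command is not shown in the post; the standard way to add your user to the docker group would be:
sudo usermod -aG docker $USER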
After this, all my Docker containers worked without issue.
Re: Ubuntu 18.04 — LXC — With Docker — Cannot find cgroup mount destination: unknown.
Post by aalllop » Wed Oct 14, 2020 5:39 pm
Hello, I have followed all your instructions and now the error is:
docker run hello-world
container_linux.go:265: starting container process caused "process_linux.go:284: applying cgroup configuration for process caused \"open /sys/fs/cgroup/memory/lxc/ubuntu-bionic-arm64-1/docker/cpuset.cpus: no such file or directory\""
docker: Error response from daemon: oci runtime error: container_linux.go:265: starting container process caused "process_linux.go:284: applying cgroup configuration for process caused \"open /sys/fs/cgroup/memory/lxc/ubuntu-bionic-arm64-1/docker/cpuset.cpus: no such file or directory\"".
ERRO[0002] error waiting for container: context canceled
Any idea what the problem could be?
Re: Ubuntu 18.04 — LXC — With Docker — Cannot find cgroup mount destination: unknown.
Post by aalllop » Tue Oct 20, 2020 11:37 pm
Finally I was able to install Hass.io under LXC with Docker. These are the steps:
— Create an LXC Ubuntu 18.04 machine.
— SSH into the NAS host and edit “/usr/local/container-station/lxc/share/lxc/config/common.conf”.
Change the line “lxc.mount.auto = cgroup:mixed proc:mixed sys:mixed” to “lxc.mount.auto = proc:mixed sys:mixed”.
Add those lines:
lxc.mount.entry = /dev/ttyS0 dev/ttyS0 none bind,create=file 0 0
lxc.mount.entry = /dev/ttyS1 dev/ttyS1 none bind,create=file 0 0
lxc.mount.entry = /dev/ttyS2 dev/ttyS2 none bind,create=file 0 0
lxc.mount.entry = /dev/ttyS3 dev/ttyS3 none bind,create=file 0 0
linux.kernel_modules: bridge,br_netfilter,ip_tables,ip6_tables,ip_vs,netlink_diag,nf_nat,overlay,xt_conntrack
raw.lxc: |-
lxc.cgroup.devices.allow = a
lxc.cap.drop =
security.nesting: "true"
security.privileged: "true"
Uncomment lxc.include = /usr/share/lxc/config/nesting.conf
— Restart the LXC container.
— Download docker 17.09.01 from https://download.docker.com/linux/ubunt … ble/arm64/
— Execute those commands:
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
dpkg -i <path_of_the_package>/docker-ce_17.09.1_ce-0_ubuntu_arm64.deb
…to be continued
Re: Ubuntu 18.04 — LXC — With Docker — Cannot find cgroup mount destination: unknown.
Post by aalllop » Wed Oct 21, 2020 12:27 am
Continuation:
Now you have Docker installed and running. Test it, then run the following commands to install Hass.io (Home Assistant with Supervisor):
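The original post doesn't show the test command; a typical smoke test at this point would be:
sudo docker run hello-world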
sudo -i
apt-get dist-upgrade
apt-get update
apt-get install apt-utils -y
apt install software-properties-common -y
add-apt-repository universe
apt-get update
apt-get install -y apparmor-utils apt-transport-https avahi-daemon ca-certificates curl dbus jq network-manager socat software-properties-common -y
curl -sL "https://raw.githubusercontent.com/home- … staller.sh" | bash -s -- -m qemuarm -p 8123:8123
My QNAP has an ARM chip, but if yours is from another family, please see the options for -m in https://raw.githubusercontent.com/home- … staller.sh
One important thing: I added those lines:
lxc.mount.entry = /dev/ttyS0 dev/ttyS0 none bind,create=file 0 0
lxc.mount.entry = /dev/ttyS1 dev/ttyS1 none bind,create=file 0 0
lxc.mount.entry = /dev/ttyS2 dev/ttyS2 none bind,create=file 0 0
lxc.mount.entry = /dev/ttyS3 dev/ttyS3 none bind,create=file 0 0
because when I tried to install the Hass.io add-on Node-RED, I got an error saying it couldn’t be installed because it couldn’t access /dev/ttyS3, so I mapped the device in the conf file. If Docker doesn’t start, comment those lines out.
Now Home Assistant with Supervisor is running on my QNAP.
Re: Ubuntu 18.04 — LXC — With Docker — Cannot find cgroup mount destination: unknown.
Post by onlyumike » Wed Jan 13, 2021 9:24 am
Hi MadMaxster and all,
I have a similar problem with the Docker installation on Linux Station rather than in a QNAP Docker container. Because I find the QNAP web console slow, I prefer to install Docker directly on Linux Station, which runs Ubuntu 20.04 LTS.
I had a similar issue when I installed the latest Docker, but following your instructions I installed Docker 17.09. Now I've hit another problem: when I run "docker version", it prints a warning like this.
Code:
"admin@ubuntu2004:~$ docker version
Client:
Version: 17.09.1-ce
API version: 1.32
Go version: go1.8.3
Git commit: 19e2cf6
Built: Thu Dec 7 22:24:23 2017
OS/Arch: linux/amd64
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
And when I try to start docker.service, it reports an error. So far I haven’t been able to get details out of journalctl.
Code:
"sudo systemctl restart docker.service
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details."
First, please let me know whether I can use Docker on Linux Station, which itself already runs as an image inside a QNAP container.
Second, how can I fix this error so that docker.service starts and I can run something like ‘docker run <image>’?
Thanks in advance,
I wanted to play with Docker Swarm on a local machine to test a couple of scenarios. The goal was to run three manager nodes and three worker nodes. I did not want to use virtual machines to run the remaining five nodes on my computer, so I decided to use LXD. When using LXC or LXD containers, I usually go with Alpine Linux for its small size, unless there are specific requirements.
First, I initialized the swarm on my local machine:
$ docker swarm init --advertise-addr 192.168.88.98
Swarm initialized: current node (bgzm63dfx8clvnm1tfudvrqpp) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-08noco12oi85n0v8mcbk9pphflmpnuap6w7jicah0zsbjqwc75-cnwlgyertaslaphko0ki079xc 192.168.88.98:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

$ docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-08noco12oi85n0v8mcbk9pphflmpnuap6w7jicah0zsbjqwc75-3ia42cf6wfemjf5y6c05jf47w 192.168.88.98:2377
Then I created two manager nodes and three worker nodes. To create a container that will run Docker, you need to set security.nesting=true.
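As a side note (not from the original post), the same option can also be applied to an already-created container with the standard LXD command, followed by a restart; manager-1 from below is used as an example:
lxc config set manager-1 security.nesting true
lxc restart manager-1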
OK, here it goes:
lxc launch images:alpine/3.11/amd64 manager-1 -c security.nesting=true
lxc launch images:alpine/3.11/amd64 manager-2 -c security.nesting=true
lxc exec manager-1 apk add docker
lxc exec manager-2 apk add docker
lxc exec manager-1 -T -- /etc/init.d/docker restart
lxc exec manager-2 -T -- /etc/init.d/docker restart
lxc exec manager-1 -- docker swarm join --token SWMTKN-1-08noco12oi85n0v8mcbk9pphflmpnuap6w7jicah0zsbjqwc75-3ia42cf6wfemjf5y6c05jf47w 192.168.88.98:2377
lxc exec manager-2 -- docker swarm join --token SWMTKN-1-08noco12oi85n0v8mcbk9pphflmpnuap6w7jicah0zsbjqwc75-3ia42cf6wfemjf5y6c05jf47w 192.168.88.98:2377
lxc launch images:alpine/3.11/amd64 worker-1 -c security.nesting=true
lxc launch images:alpine/3.11/amd64 worker-2 -c security.nesting=true
lxc launch images:alpine/3.11/amd64 worker-3 -c security.nesting=true
lxc exec worker-1 apk add docker
lxc exec worker-2 apk add docker
lxc exec worker-3 apk add docker
lxc exec worker-1 -T -- /etc/init.d/docker restart
lxc exec worker-2 -T -- /etc/init.d/docker restart
lxc exec worker-3 -T -- /etc/init.d/docker restart
lxc exec worker-1 -- docker swarm join --token SWMTKN-1-08noco12oi85n0v8mcbk9pphflmpnuap6w7jicah0zsbjqwc75-cnwlgyertaslaphko0ki079xc 192.168.88.98:2377
lxc exec worker-2 -- docker swarm join --token SWMTKN-1-08noco12oi85n0v8mcbk9pphflmpnuap6w7jicah0zsbjqwc75-cnwlgyertaslaphko0ki079xc 192.168.88.98:2377
lxc exec worker-3 -- docker swarm join --token SWMTKN-1-08noco12oi85n0v8mcbk9pphflmpnuap6w7jicah0zsbjqwc75-cnwlgyertaslaphko0ki079xc 192.168.88.98:2377
Now, docker node ls shows something like this:
ID                          HOSTNAME                 STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
kipabebjta1lujxz28jeiacag   manager-1                Ready    Active         Reachable        19.03.5
xae8nsn3yd29wuxukvf6ef1og   manager-2                Ready    Active         Reachable        19.03.5
bgzm63dfx8clvnm1tfudvrqpp * nostalgia-for-infinity   Ready    Active         Leader           19.03.6
9c0p941inuizp1lbyhgrh8k1o   worker-1                 Ready    Active                          19.03.5
tiqp4tszcai5wljy2kst0p8w0   worker-2                 Ready    Active                          19.03.5
u8b2vjpld2lx3jn6i0fe53w4l   worker-3                 Ready    Active                          19.03.5
However, when I tried to deploy my stack to the swarm, I hit a problem: Docker was unable to deploy any services to the LXC nodes because of the following error:
Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
To make sure this was not related to the way I had configured my containers (e.g., that it wasn’t an issue with AppArmor, etc.), I configured another container, this time Ubuntu-based:
lxc launch images:ubuntu/bionic/amd64 worker-4 -c security.nesting=true
lxc exec worker-4 apt update
lxc exec worker-4 apt install docker.io
lxc exec worker-4 docker swarm join --token SWMTKN-1-08noco12oi85n0v8mcbk9pphflmpnuap6w7jicah0zsbjqwc75-cnwlgyertaslaphko0ki079xc 192.168.88.98:2377
That worked: some services got deployed to the Ubuntu worker. This means the problem was somewhere in Alpine. 🙁
I started to dig deeper.
When starting Docker (rc-service docker start), I noticed mount: permission denied errors:
~ # rc-service docker start
 * Caching service dependencies ... [ ok ]
 * Mounting cgroup filesystem ... [ ok ]
mount: permission denied (are you root?)
mount: permission denied (are you root?)
mount: permission denied (are you root?)
mount: permission denied (are you root?)
mount: permission denied (are you root?)
 * /var/log/docker.log: creating file
 * /var/log/docker.log: correcting owner
 * Starting docker ... [ ok ]
OK, let us see what mount | grep cgroup shows:
cgroup_root on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,size=10240k,mode=755,uid=300001,gid=300001)
none on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cpuset on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
blkio on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
memory on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
devices on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
freezer on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
perf_event on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
hugetlb on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
pids on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
rdma on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
…and what the subdirectories in /sys/fs/cgroup/ are:
blkio cpu cpuacct cpuset devices freezer hugetlb memory net_cls net_prio openrc perf_event pids rdma unified
We see that cpu, cpuacct, net_cls, and net_prio are not mounted. And indeed, if you try to mount any of them, you will get an error:
~ # mount -t cgroup cgroup /sys/fs/cgroup/cpu -o rw,nosuid,nodev,noexec,relatime,cpu
mount: permission denied (are you root?)
OK, now let us see how Ubuntu handles that:
$ lxc exec worker-4 bash
root@worker-4:~# mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755,uid=300001,gid=300001)
cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
We see that it combines net_cls and net_prio into a single mount, and does the same for cpu and cpuacct.
No problem, let us go back to Alpine and add these mounts:
mkdir /sys/fs/cgroup/cpu,cpuacct
mkdir /sys/fs/cgroup/net_cls,net_prio
mount -t cgroup cgroup /sys/fs/cgroup/cpu,cpuacct -o rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
mount -t cgroup cgroup /sys/fs/cgroup/net_cls,net_prio -o rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
mount gave no “permission denied” errors; however, Docker is still unable to launch any containers:
~ # docker run -it --rm alpine ash
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
cbdbe7a5bc2a: Pull complete
Digest: sha256:9a839e63dad54c3a6d1834e29692c8492d93f90c59c978c1ed79109ea4fb9a54
Status: Downloaded newer image for alpine:latest
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
When looking at /etc/init.d/cgroups, I saw the following piece of code:
if ! mountinfo -q /sys/fs/cgroup/openrc; then
    local agent="${RC_LIBEXECDIR}/sh/cgroup-release-agent.sh"
    mkdir /sys/fs/cgroup/openrc
    mount -n -t cgroup -o none,${cgroup_opts},name=openrc,release_agent="$agent" openrc /sys/fs/cgroup/openrc
    printf 1 > /sys/fs/cgroup/openrc/notify_on_release
fi
However, I did not see /sys/fs/cgroup/openrc in the mount list. And indeed, if I try to mount it manually, it fails with the infamous “permission denied” error.
There was one unanswered question, and then another one that gave me a clue:
~ # cat /proc/1/cgroup
12:pids:/
11:rdma:/
10:hugetlb:/
9:devices:/
8:cpuset:/
7:cpu,cpuacct:/
6:freezer:/
5:net_cls,net_prio:/
4:memory:/
3:perf_event:/
2:blkio:/
1:name=systemd:/
0::/
So, we do not have name=openrc there, nor do we have separate cpu, cpuacct, net_cls, and net_prio entries (and now it is clear to me why Ubuntu used cpu,cpuacct and net_cls,net_prio): from inside the container we apparently can only mount cgroup hierarchies that already exist on the host, with the same name and the same grouping of controllers.
OK, instead of
mount -n -t cgroup -o 'none,nodev,noexec,nosuid,name=openrc,release_agent=/lib/rc/sh/cgroup-release-agent.sh' openrc /sys/fs/cgroup/openrc
I tried
mount -n -t cgroup -o 'none,nodev,noexec,nosuid,name=systemd,release_agent=/lib/rc/sh/cgroup-release-agent.sh' openrc /sys/fs/cgroup/openrc
…and it worked!
I intentionally did not change paths under /sys/fs/cgroup in order not to break OpenRC’s cgroup-release-agent.sh.
Success!
So, what are the changes? After cgroups start, we need to run the following piece of code:
mkdir /sys/fs/cgroup/cpu,cpuacct
mkdir /sys/fs/cgroup/net_cls,net_prio
mount -t cgroup cgroup /sys/fs/cgroup/cpu,cpuacct -o rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
mount -t cgroup cgroup /sys/fs/cgroup/net_cls,net_prio -o rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
mount -n -t cgroup -o 'none,nodev,noexec,nosuid,name=systemd,release_agent=/lib/rc/sh/cgroup-release-agent.sh' openrc /sys/fs/cgroup/openrc
For the sake of simplicity, I decided not to parse /proc/1/cgroup and simply hard-coded these mounts.
OK, now let us create a service that runs these commands:
#!/sbin/openrc-run

description="Mount the control groups for Docker"

depend() {
    keyword -docker
    need sysfs cgroups
}

start() {
    if [ -d /sys/fs/cgroup ]; then
        mkdir -p /sys/fs/cgroup/cpu,cpuacct
        mkdir -p /sys/fs/cgroup/net_cls,net_prio
        mount -n -t cgroup cgroup /sys/fs/cgroup/cpu,cpuacct -o rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
        mount -n -t cgroup cgroup /sys/fs/cgroup/net_cls,net_prio -o rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
        if ! mountinfo -q /sys/fs/cgroup/openrc; then
            local agent="${RC_LIBEXECDIR}/sh/cgroup-release-agent.sh"
            mkdir -p /sys/fs/cgroup/openrc
            mount -n -t cgroup -o none,nodev,noexec,nosuid,name=systemd,release_agent="$agent" openrc /sys/fs/cgroup/openrc
        fi
    fi
    return 0
}
Save this as /etc/init.d/cgroups-patch, then run:
chmod +x /etc/init.d/cgroups-patch
rc-update add cgroups-patch boot
and then reboot.
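After the reboot, a quick sanity check (not in the original post) is to confirm that the service ran and the extra hierarchies are mounted:
rc-service cgroups-patch status
mount | grep -E 'cpu,cpuacct|net_cls,net_prio|openrc'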
Once the container is up, docker run -it --rm alpine ash works.