* WIP: CNI Plugin (#2071)

* Export RootOptions and BuildFirewallConfiguration so that the cni-plugin can use them.
* Created the cni-plugin based on istio-cni implementation
* Create skeleton files that need to be filled out.
* Create the install scripts and finish up plugin to write iptables
* Added in an integration test around the install_cni.sh and updated the script to handle the case where it isn't the only plugin. Removed the istio kubernetes.go file in favor of pkg/k8s; initial usage of this package; found and fixed the typo in the ClusterRole and ClusterRoleBinding; found the docker-build-cni-plugin script
* Corrected an incorrect name in the docker build file for cni-plugin
* Rename linkerd2-cni to linkerd-cni
* Fixup Dockerfile and clean up code a bit as well as logging statements.
* Update Gopkg.lock after master merge.
* Update test file to remove temporary tag.
* Fixed the command to run during the test while building up the docker run.
* Added attributions to applicable files; in the test file, use a different container for each test scenario and also print the docker logs to stdout when there is an error;
* Add the --no-init-container flag to install and inject. This flag will not output the initContainer and will add an annotation assuming that the cni will be used in this case.
* Update .travis.yml to build the cni-plugin docker image before running the tests.
* Workaround golint warnings.
* Create a new command to install the linkerd-cni plugin.
* Add the --no-init-container option to linkerd inject
* Use the setup ip tables annotation during the proxy auto inject webhook to prevent/allow addition of an init container; move cni-plugin tests to the integration-test section of travis
* gate the cni-plugin tests with the -integration-tests flag; remove unnecessary deployment .yaml file.
* Incorporate PR Cleanup suggestions.
* Remove the SetupIPTablesLabel annotation and use config flags and the presence of the init container to determine whether the cni-plugin writes ip tables.
* Fix a logic bug in the cni-plugin code that prevented the iptables from being written; Address PR comments; make tests pass.
* Update go deps shas
* Changed the single file install-cni plugin filename to be .conf vs .conflist; Incorporated latest PR comments around spacing with the new renderer among others.
* Fix an issue with renaming .conf to .conflist when needed.
* Renamed some of the variables to try to make it more clear what is going on.
* Address final PR comments.
* Hide cni flags for the time being.

Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>

* Add support for timeouts in service profiles (#2149)

Fixes #2042 

Adds a new field to service profile routes called `timeout`.  Any requests to that route which take longer than the given timeout will be aborted and a 504 response will be returned instead.  If the timeout field is not specified, a default timeout of 10 seconds is used.

Signed-off-by: Alex Leong <alex@buoyant.io>

* Added flags to allow further configuration of destination cni bin and cni conf directories; fixed up spacing in template. (#2181)

Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>

* Introduce go generate to embed static templates (#2189)

# Problem
In order to switch Linkerd template rendering to use `.yaml` files, static
assets must be bundled in the Go binary for use by `linkerd install`.

# Solution
The solution should not affect the local development process of building and
testing.

[vfsgen](https://github.com/shurcooL/vfsgen) generates Go code that statically
implements the provided `http.FileSystem`. Paired with `go generate` and Go
[build tags](https://golang.org/pkg/go/build/), we can continue to use the
template files on disk when developing with no change required.

In `!prod` Go builds, the `cli/static/templates.go` file provides a
`http.FileSystem` to the local templates. In `prod` Go builds, `go generate
./cli` generates `cli/static/generated_templates.gogen.go` that statically
provides the template files.

When built with `-tags prod`, the executable will be built with the statically
generated file instead of the local files.

# Validation
The binaries were compiled locally with `bin/docker-build`. The binaries were
then tested with `bin/test-run (pwd)/target/cli/darwin/linkerd`. All tests
passed.

No change was required to successfully run `bin/go-run cli install`. No change
was required to run `bin/linkerd install`.

Fixes #2153

Signed-off-by: Kevin Leimkuhler <kevin@kleimkuhler.com>

* Introduce Discovery API and endpoints command (#2195)

The Proxy API service lacked introspection of its internal state.

Introduce a new gRPC Discovery API, implemented by two servers:
1) Proxy API Server: returns a snapshot of discovery state
2) Public API Server: pass-through to the Proxy API Server

Also wire up a new `linkerd endpoints` command.

Fixes #2165

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Improve ServiceProfile validation in linkerd check (#2218)

The `linkerd check` command was doing limited validation on
ServiceProfiles.

Make ServiceProfile validation more complete, specifically validate:
- types of all fields
- presence of required fields
- presence of unknown fields
- recursive fields

Also move all validation code into a new `Validate` function in the
profiles package.

Validation of field types and required fields is handled via
`yaml.UnmarshalStrict` in the `Validate` function. This motivated
migrating from github.com/ghodss/yaml to a fork, sigs.k8s.io/yaml.

Fixes #2190

* Read service profiles from client or server namespace instead of control namespace (#2200)

Fixes #2077 

When looking up service profiles, Linkerd always looks for the service profile objects in the Linkerd control namespace.  This is limiting because service owners who wish to create service profiles may not have write access to the Linkerd control namespace.

Instead, we have the control plane look for the service profile in both the client namespace (as read from the proxy's `proxy_id` field in the GetProfiles request) and in the service's namespace.  If a service profile exists in both namespaces, the client namespace takes priority.  In this way, clients may override the behavior dictated by the service.

Signed-off-by: Alex Leong <alex@buoyant.io>

* Introduce golangci-lint tooling, fixes (#2239)

`golangci-lint` performs numerous checks on Go code, including golint,
ineffassign, govet, and gofmt.

This change modifies `bin/lint` to use `golangci-lint`, and replaces
usage of golint and govet.

Also perform a one-time gofmt cleanup:
- `gofmt -s -w controller/`
- `gofmt -s -w pkg/`

Part of #217

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Upgrade Spinner to fix race condition (#2265)

Fixes #2264

Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>

* Generate CLI docs for usage by the website (#2296)

* Generate CLI docs for usage by the website

* Update description to match existing commands

* Remove global

* Bump base Docker images (#2241)

- `debian:jessie-slim` -> `stretch-20190204-slim`
- `golang:1.10.3` -> `1.11.5`
- `gcr.io/linkerd-io/base:2017-10-30.01` -> `2019-02-19.01`
- bump `golangci-lint` to 1.15.0
- use `GOCACHE` in travis

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Enable `unused` linter (#2357)

`unused` checks Go code for unused constants, variables, functions, and
types.

Part of #217

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* lint: Enable goconst (#2365)

goconst finds repeated strings that could be replaced by a constant:
https://github.com/jgautheron/goconst

Part of #217

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Authorization-aware control-plane components (#2349)

The control-plane components relied on a `--single-namespace` param,
passed from `linkerd install` into each individual component, to
determine which namespaces they were authorized to access, and whether
to support ServiceProfiles. This command-line flag was redundant given
the authorization rules encoded in the parent `linkerd install` output,
via [Cluster]Role[Binding]s.

Modify the control-plane components to query Kubernetes at startup to
determine which namespaces they are authorized to access, and whether
ServiceProfile support is available. This allows removal of the
`--single-namespace` flag on the components.

Also update `bin/test-cleanup` to cleanup the ServiceProfile CRD.

TODO:
- Remove `--single-namespace` flag on `linkerd install`, part of #2164

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Wire up stats for Jobs  (#2416)

Support for Jobs in stat/tap/top cli commands

Part of #2007

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* Injection consolidation (#2334)

- Created the pkg/inject package to hold the new injection shared lib.
- Extracted from `/cli/cmd/inject.go` and `/cli/cmd/inject_util.go`
the core methods doing the workload parsing and injection, and moved them into
`/pkg/inject/inject.go`. The CLI files should now deal only with
strictly CLI concerns, and applying the json patch returned by the new
lib.
- Proceeded analogously with `/cli/cmd/uninject.go` and
`/pkg/inject/uninject.go`.
- The `InjectReport` struct and helping methods were moved into
`/pkg/inject/report.go`
- Refactored webhook to use the new injection lib
- Removed linkerd-proxy-injector-sidecar-config ConfigMap
- Added the ability to add pod labels and annotations without having to
specify the already existing ones

Fixes #1748, #2289

Signed-off-by: Alejandro Pedraza <alejandro.pedraza@gmail.com>

* Bump Prometheus client to v0.9.2 (#2388)

We were depending on an untagged version of prometheus/client_golang
from Feb 2018.

This bumps our dependency to v0.9.2, from Dec 2018.

Also, this is a prerequisite to #1488.

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* add preStop and change sleep command; update yaml spacing (#2441)

Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>

* Remove `--tls=optional` and `linkerd-ca` (#2515)

The proxy's TLS implementation has changed to use a new _Identity_ controller.

In preparation for this, the `--tls=optional` CLI flag has been removed
from install and inject; and the `ca` controller has been deleted. Metrics
and UI treatments for TLS have **not** been removed, as they will continue to
be valuable for the new Identity system.

With the removal of the old identity scheme, the Destination service's proxy
ID field is now set with an opaque string (e.g. `ns:emojivoto`) to enable
locality awareness.

* Introduce the Identity controller implementation (#2521)

This change introduces a new Identity service implementation for the
`io.linkerd.proxy.identity.Identity` gRPC service.

The `pkg/identity` contains a core, abstract implementation of the service
(generic over both the CA and (Kubernetes) Validator interfaces).

`controller/identity` includes a concrete implementation that uses the
Kubernetes TokenReview API to validate serviceaccount tokens when
issuing certificates.

This change does **NOT** alter installation or runtime to include the
identity service. This will be included in a follow-up.

* config: Store install parameters with global config (#2577)

When installing Linkerd, a user may override default settings, or may
explicitly configure defaults. Consider install options like `--ha
--controller-replicas=4` -- the `--ha` flag sets a new default value for
the controller-replicas, and then we override it.

When we later upgrade this cluster, how can we know how to configure the
cluster?

We could store EnableHA and ControllerReplicas configurations in the
config, but what if, in a later upgrade, the default value changes? How
can we know whether the user specified an override or just used the
default?

To solve this, we add an `Install` message into a new config.
This message includes (at least) the CLI flags used to invoke
install.

upgrade does not specify defaults for install/proxy-options fields and,
instead, uses the persisted install flags to populate default values,
before applying overrides from the upgrade invocation.

This change breaks the protobuf compatibility by altering the
`installation_uuid` field introduced in https://github.com/linkerd/linkerd2/commit/9c442f688575c3ee0261facc7542aa490b89c6cf.
Because this change was not yet released (even in an edge release), we
feel that it is safe to break.

Fixes https://github.com/linkerd/linkerd2/issues/2574

* Add validation webhook for service profiles (#2623)

Add validation webhook for service profiles

Fixes #2075

To do in a follow-up PR: remove the SP check from the CLI check.

Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>

* Switch UUID implementation (#2667)

The UUID implementation we use to generate install IDs is technically not
random enough for security-sensitive uses (which ours is not). To prevent
security scanners like Snyk from flagging this false positive, let's
just switch to the other UUID implementation (already in our
dependencies).

* Don't use spinner in cli when run without a tty (#2716)

In some non-tty environments, the `linkerd check` spinner can render
unexpected control characters.

Disable the spinner when run without a tty.

Fixes #2700

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Consolidate k8s APIs (#2747)

Numerous codepaths have emerged that create k8s configs, k8s clients,
and make k8s api requests.

This branch consolidates k8s client creation and APIs. The primary
change migrates most codepaths to call `k8s.NewAPI` to instantiate a
`KubernetesAPI` struct from `pkg`. `KubernetesAPI` implements the
`kubernetes.Interface` (clientset) interface, and also persists a
`client-go` `rest.Config`.

Specific list of changes:
- removes manual GET requests from `k8s.KubernetesAPI`, in favor of
  clientsets
- replaces most calls to `k8s.GetConfig`+`kubernetes.NewForConfig` with
  a single `k8s.NewAPI`
- introduces a `timeout` param to `k8s.NewAPI`, currently only used by
  healthchecks
- removes `NewClientSet` in `controller/k8s/clientset.go` in favor of
  `k8s.NewAPI`
- removes `httpClient` and `clientset` from `HealthChecker`, use
  `KubernetesAPI` instead

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Introduce k8s apiextensions support (#2759)

CustomResourceDefinition parsing and retrieval is not available via
client-go's `kubernetes.Interface`, but rather via a separate
`k8s.io/apiextensions-apiserver` package.

Introduce support for CustomResourceDefintion object parsing and
retrieval. This change facilitates retrieval of CRDs from the k8s API
server, and also provides CRD resources as mock objects.

Also introduce a `NewFakeAPI` constructor, deprecating
`NewFakeClientSets`. Callers need no longer be concerned with discrete
clientsets (for k8s resources vs. CRDs vs. (eventually)
ServiceProfiles), and can instead use the unified `KubernetesAPI`.

Part of #2337, in service to multi-stage check.

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* lower the log level of the linkerd-cni output (#2787)

Signed-off-by: Cody Vandermyn <cody.vandermyn@nordstrom.com>

* Split proxy-init into separate repo (#2824)

Split proxy-init into separate repo

Fixes #2563

The new repo is https://github.com/linkerd/linkerd2-proxy-init, and I
tagged the latest there `v1.0.0`.

Here, I've removed the `/proxy-init` dir and pinned the injected
proxy-init version to `v1.0.0` in the injector code and tests.

`/cni-plugin` depends on proxy-init, so I updated the import paths
there, and could verify CNI is still working (there is some flakiness
but unrelated to this PR).

For consistency, I added a `--init-image-version` flag to `linkerd
inject` along with its corresponding override config annotation.

Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>

* Refactor destination service (#2786)

This is a major refactor of the destination service.  The goals of this refactor are to simplify the code for improved maintainability.  In particular:

* Remove the "resolver" interfaces.  These were a holdover from when our decision tree was more complex about how to handle different kinds of authorities.  The current implementation only accepts fully qualified kubernetes service names and thus this was an unnecessary level of indirection.
* Moved the endpoints and profile watchers into their own package for a more clear separation of concerns.  These watchers deal only in Kubernetes primitives and are agnostic to how they are used.  This allows a cleaner layering when we use them from our gRPC service.
* Renamed the "listener" types to "translator" to make it more clear that the function of these structs is to translate kubernetes updates from the watcher to gRPC messages.

Signed-off-by: Alex Leong <alex@buoyant.io>

* Add support for TrafficSplits (#2897)

Add support for querying TrafficSplit resources through the common API layer. This is done by depending on the TrafficSplit client bindings from smi-sdk-go.

Signed-off-by: Alex Leong <alex@buoyant.io>

* Add traffic splitting to destination profiles (#2931)

This change implements the DstOverrides feature of the destination profile API (aka traffic splitting).

We add a TrafficSplitWatcher to the destination service which watches for TrafficSplit resources and notifies subscribers about TrafficSplits for services that they are subscribed to.  A new TrafficSplitAdaptor then merges the TrafficSplit logic into the DstOverrides field of the destination profile.

Signed-off-by: Alex Leong <alex@buoyant.io>

* Add prometheus metrics for watchers (#3022)

To give better visibility into the inner workings of the kubernetes watchers in the destination service, we add some prometheus metrics.

Signed-off-by: Alex Leong <alex@buoyant.io>

* Introduce Go modules support (#2481)

The repo relied on `dep` for managing Go dependencies. Go 1.11 shipped
with Go modules support. Go 1.13 will be released in August 2019 with
module support enabled by default, deprecating GOPATH.

This change replaces `dep` with Go modules for dependency management.
All scripts, including Docker builds and ci, should work without any dev
environment changes.

To execute `go` commands directly during development, do one of the
following:
1. clone this repo outside of `GOPATH`; or
2. run `export GO111MODULE=on`

Summary of changes:
- Docker build scripts and ci set `-mod=readonly`, to ensure
  dependencies defined in `go.mod` are exactly what is used for the
  builds.
- Dependency updates to `go.mod` are accomplished by running
 `go build` and `go test` directly.
- `bin/go-run`, `bin/build-cli-bin`, and `bin/test-run` set
  `GO111MODULE=on`, permitting usage inside and outside of GOPATH.
- `gcr.io/linkerd-io/go-deps` tags hashed from `go.mod`.
- `bin/update-codegen.sh` still requires running from GOPATH,
  instructions added to BUILD.md.

Fixes #1488

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Introduce `linkerd --as` flag for impersonation (#3173)

Similar to `kubectl --as`, global flag across all linkerd subcommands
which sets a `ImpersonationConfig` in the Kubernetes API config.

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Check in gen deps (#3245)

Go dependencies which are only used by generated code had not previously been checked into the repo.  Because `go generate` does not respect the `-mod=readonly` flag, running `bin/linkerd` will add these dependencies and dirty the local repo.  This can interfere with the way version tags are generated.

To avoid this, we simply check these deps in.

Note that running `go mod tidy` will remove these again.  Thus, it is not recommended to run `go mod tidy`. 

Signed-off-by: Alex Leong <alex@buoyant.io>

* Add a flag to install-cni command to configure iptables wait flag (#3066)


Signed-off-by: Charles Pretzer <charles@buoyant.io>

* Update CNI integration tests (#3273)

Followup to #3066

Signed-off-by: Alejandro Pedraza <alejandro@buoyant.io>

* Merge the CLI 'installValues' type with Helm 'Values' type (#3291)

* Rename template-values.go
* Define new constructor of charts.Values type
* Move all Helm values related code to the pkg/charts package
* Bump dependency
* Use '/' in filepath to remain compatible with VFS requirement
* Add unit test to verify Helm YAML output
* Alejandro's feedback
* Add unit test for Helm YAML validation (HA)

Signed-off-by: Ivan Sim <ivan@buoyant.io>

* Require go 1.12.9 for controller builds (#3297)

Netflix recently announced a security advisory that identified several
Denial of Service attack vectors that can affect server implementations
of the HTTP/2 protocol, and has issued eight CVEs. [1]

Go is affected by two of the vulnerabilities (CVE-2019-9512 and
CVE-2019-9514) and so Linkerd components that serve HTTP/2 traffic are
also affected. [2]

These vulnerabilities allow untrusted clients to allocate an unlimited
amount of memory, until the server crashes. The Kubernetes Product
Security Committee has assigned this set of vulnerabilities with a CVSS
score of 7.5. [3]

[1] https://github.com/Netflix/security-bulletins/blob/master/advisories/third-party/2019-002.md
[2] https://golang.org/doc/devel/release.html#go1.12
[3] https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

* Remove broken thrift dependency (#3370)

The repo depended on a (recently broken) thrift package:

```
github.com/linkerd/linkerd2
 -> contrib.go.opencensus.io/exporter/ocagent@v0.2.0
  -> go.opencensus.io@v0.17.0
   -> git.apache.org/thrift.git@v0.0.0-20180902110319-2566ecd5d999
```
... via this line in `controller/k8s`:

```go
_ "k8s.io/client-go/plugin/pkg/client/auth"
```

...which created a dependency on go.opencensus.io:

```bash
$ go mod why go.opencensus.io
...
github.com/linkerd/linkerd2/controller/k8s
k8s.io/client-go/plugin/pkg/client/auth
k8s.io/client-go/plugin/pkg/client/auth/azure
github.com/Azure/go-autorest/autorest
github.com/Azure/go-autorest/tracing
contrib.go.opencensus.io/exporter/ocagent
go.opencensus.io
```

Bump contrib.go.opencensus.io/exporter/ocagent from `v0.2.0` to
`v0.6.0`, creating this new dependency chain:

```
github.com/linkerd/linkerd2
 -> contrib.go.opencensus.io/exporter/ocagent@v0.6.0
  -> google.golang.org/api@v0.7.0
   -> go.opencensus.io@v0.21.0
```

Bumping our go.opencensus.io dependency from `v0.17.0` to `v0.21.0`
pulls in this commit:
https://github.com/census-instrumentation/opencensus-go/commit/ed3a3f0bf00d34af1ca7056123dae29672ca3b1a#diff-37aff102a57d3d7b797f152915a6dc16

...which removes our dependency on github.com/apache/thrift

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Decrease proxy and web Docker image sizes (#3384)

The `proxy` and `web` Docker images were 161MB and 186MB, respectively.
Most of the space was tools installed into the `linkerd.io/base` image.

Decrease `proxy` and `web` Docker images to 73MB and 90MB, respectively.
Switch these images to be based off of `debian:stretch-20190812-slim`.
Also set `-ldflags "-s -w"` for `proxy-identity` and `web`. Modify
`linkerd.io/base` to also be based off of
`debian:stretch-20190812-slim`, update tag to `2019-09-04.01`.

Fixes #3383

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Bump proxy-init to 1.2.0 (#3397)

Pulls in latest proxy-init:
https://github.com/linkerd/linkerd2-proxy-init/releases/tag/v1.2.0

This also bumps a dependency on cobra, which provides more complete zsh
completion.

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Update to client-go v12.0.0, forked stern (#3387)

The repo depended on an old version of client-go. It also depended on
stern, which itself depended on an old version of client-go, making
client-go upgrade non-trivial.

Update the repo to client-go v12.0.0, and also replace stern with a
fork.

This fork of stern includes the following changes:
- updated to use Go Modules
- updated to use client-go v12.0.0
- fixed log line interleaving:
  - https://github.com/wercker/stern/issues/96
  - based on:
    - https://github.com/oandrew/stern/commit/8723308e46b408e239ce369ced12706d01479532

Fixes #3382

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Trace Control Plane components using OC (#3461)

* add exporter config for all components

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* add cmd flags wrt tracing

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* add ochttp tracing to web server

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* add flags to the tap deployment

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* add trace flags to install and upgrade command

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* add linkerd prefix to svc names

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* add ochttp transport to API Internal Client

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* fix goimport linting errors

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* add ochttp handler to tap http server

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* review and fix tests

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* update test values

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* use common template

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* update tests

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* use Initialize

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* fix sample flag

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* add verbose info reg flags

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* Update base docker image to debian latest stable: buster (#3438)

* Update base docker image to debian latest stable: buster

Signed-off-by: Charles Pretzer <charles@buoyant.io>

* Update all files to use buster image

* Revert "Trace Control Plane components using OC (#3461)" (#3484)

This reverts commit eaf7460448e33e229d5b5996aafcafe1dbf225e2.

This is a temporary revert of #3461 while we sort out some details of how this should be configured and how it should interact with configuring a trace collector on the Linkerd proxy.  We will reintroduce this change once the config plan is straightened out.

Signed-off-by: Alex Leong <alex@buoyant.io>

* Revert upgrade to buster based on CNI test failure after merge (#3486)

* Add TapEvent headers and trailers to the tap protobuf (#3410)

### Motivation

In order to expose arbitrary headers through tap, headers and trailers should be
read from the linkerd2-proxy-api `TapEvent`s and set in the public `TapEvent`s.
This change should have no user facing changes as it just prepares the events
for JSON output in linkerd/linkerd2#3390

### Solution

The public API has been updated with a headers field for
`TapEvent_Http_RequestInit_` and `TapEvent_Http_ResponseInit_`, and trailers
field for `TapEvent_Http_ResponseEnd_`.

These values are set by reading the corresponding fields off of the proxy's tap
events.

The proto changes are equivalent to the proto changes proposed in
linkerd/linkerd2-proxy-api#33

Closes #3262

Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>

* Switch from using golangci fmt to using goimports (#3555)

CI currently enforces formatting rules using the fmt linter of golangci-lint, which is invoked from the bin/lint script.  However, it doesn't seem possible to use golangci-lint as a formatter, only as a linter that checks formatting.  This means any formatter used by your IDE or invoked manually may or may not apply the same formatting rules as golangci-lint, depending on which formatter you use and which specific revision of that formatter you use.

In this change we stop using golang-ci-lint for format checking.  We introduce `tools.go` and add goimports to the `go.mod` and `go.sum` files.  This allows everyone to easily get the same revision of goimports by running `go install -mod=readonly golang.org/x/tools/cmd/goimports` from inside of the project.  We add a step in the CI workflow that uses goimports via the `bin/fmt` script to check formatting.

Some shell gymnastics were required in the `bin/fmt` script to work around some limitations of `goimports`:
* goimports does not have a built-in mechanism for excluding directories, and we need to exclude the vendor directory as well as the generated Go sources
* goimports returns a 0 exit code, even when formatting errors are detected

Signed-off-by: Alex Leong <alex@buoyant.io>

* Trace Control plane Components with OC (#3495)

* add trace flags and initialisation
* add ocgrpc handler to newgrpc
* add ochttp handler to linkerd web
* add flags to linkerd web
* add ochttp handler to prometheus handler initialisation
* add ochttp clients for components
* add span for prometheus query
* update godep sha
* fix reviews
* better commenting
* add err checking
* remove sampling
* add check in main
* move to pkg/trace

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* Add APIService fake clientset support (#3569)

The `linkerd upgrade --from-manifests` command supports reading the
manifest output via `linkerd install`. PR #3167 introduced a tap
APIService object into `linkerd install`, but the manifest-reading code
in fake.go was never updated to support this new object kind.

Update the fake clientset code to support APIService objects.

Fixes #3559

Signed-off-by: Andrew Seigner <siggy@buoyant.io>

* Cert manager support (#3600)

* Add support for --identity-issuer-mode flag to install cmd
* Change flag to be a bool
* Read correct data from identity when external issuer is used
* Add ability for identity service to dynamically reload certs
* Fix failing tests
* Minor refactor
* Load trust anchors from identity issuer secret
* Make identity service actually watch for issuer certs updates
* Add some testing around cmd line identity options validation
* Add tests ensuring that identity service loads issuer
* Take into account external-issuer flag during upgrade + tests
* Fix failing upgrade test
* Address initial review feedback
* Address further review feedback on cli and helm
* Do not persist --identity-external-issuer
* Some improvements to identity service
* Bring back persistence of external issuer flag
* Address more feedback
* Update dockerfiles shas
* Publishing k8s events on issuer certs rotation
* Ensure --ignore-cluster+external issuer is not supported
* Update go-deps shas
* Transition to identity issuer scheme based configuration
* Use k8s consts for secret file names

Signed-off-by: zaharidichev <zaharidichev@gmail.com>

* Upgrade go to 1.13.4 (#3702)

Fixes #3566

As explained in #3566, as of go 1.13 there's a strict check that ensures a dependency's timestamp matches its sha (as declared in go.mod). Our smi-sdk dependency has a problem with that, which was resolved later on, but more work would be required to upgrade that dependency. In the meantime a quick pair of replace statements at the bottom of go.mod fixes the issue.
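For readers unfamiliar with the mechanism, a `replace` directive in go.mod pins a dependency (here with a hypothetical module path, since the actual smi-sdk pins are not shown in this note):

```
// go.mod (hypothetical): pin a dependency whose published timestamp no
// longer matches the sum recorded in go.mod
replace example.com/some/dependency => example.com/some/dependency v1.0.1
```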

* Removed calico logutils dependency, incompatible with go 1.13 (#3763)

* Removed calico logutils dependency, incompatible with go 1.13

Fixes #1153

Removed dependency on
`github.com/projectcalico/libcalico-go/lib/logutils` because it has
problems with go modules, as described in
projectcalico/libcalico-go#1153

Not a big deal since it was only used for modifying the plugin's log
format.

* Move CNI template to helm (#3581)

* Create helm chart for the CNI plugin

Signed-off-by: zaharidichev <zaharidichev@gmail.com>

* Add helm install tests for the CNI plugin

Signed-off-by: zaharidichev <zaharidichev@gmail.com>

* Add readme for the CNI helm chart

Signed-off-by: zaharidichev <zaharidichev@gmail.com>

* Fix integration tests

Signed-off-by: zaharidichev <zaharidichev@gmail.com>

* Remove old cni-plugin.yaml

Signed-off-by: zaharidichev <zaharidichev@gmail.com>

* Add trace partial template

Signed-off-by: zaharidichev <zaharidichev@gmail.com>

* Address more comments

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>

* Upgrade prometheus to v1.2.1 (#3541)

Signed-off-by: Dax McDonald <dax@rancher.com>

* Cache StatSummary responses in dashboard web server (#3769)

Signed-off-by: Sergio Castaño Arteaga <tegioz@icloud.com>

* Enable mixed configuration of skip-[inbound|outbound]-ports (#3766)

* Enable mixed configuration of skip-[inbound|outbound]-ports using port numbers and ranges (#3752)
* included tests for generated output given proxy-ignore configuration options
* renamed "validate" method to "parseAndValidate" given mutation
* updated documentation to denote inclusiveness of ranges
* Updates for expansion of ignored inbound and outbound port ranges to be handled by the proxy-init rather than CLI (#3766)

This change maintains the configured ports and ranges as strings rather than unsigned integers, while still providing validation at the command layer.

* Bump versions for proxy-init to v1.3.0

Signed-off-by: Paul Balogh <javaducky@gmail.com>

* Remove empty fields from generated configs (#3886)

Fixes
- https://github.com/linkerd/linkerd2/issues/2962
- https://github.com/linkerd/linkerd2/issues/2545

### Problem
Field omissions for workload objects are not respected while marshaling to JSON.

### Solution
After digging a bit into the code, I came to realize that while marshaling, workload objects have empty structs as values for various fields which should rather be omitted. As of now, the standard library `encoding/json` does not support zero values of structs with the `omitempty` tag. The relevant issue can be found [here](https://github.com/golang/go/issues/11939). To tackle this problem, the object declaration should have _pointer-to-struct_ as the field type instead of the _struct_ itself. However, this approach would be out of scope, as the workload object declaration is handled by the k8s library.

I was able to find a drop-in replacement for the `encoding/json` library which supports zero values of structs with the `omitempty` tag. It can be found [here](https://github.com/clarketm/json). I have made use of this library to implement a simple filter-like functionality to remove empty tags once a YAML with empty tags is generated, leaving the previously existing methods unaffected.

Signed-off-by: Mayank Shah <mayankshah1614@gmail.com>

* Add `as-group` CLI flag (#3952)

Add CLI flag --as-group that can impersonate group for k8s operations

Signed-off-by: Mayank Shah mayankshah1614@gmail.com

* Fix CNI config parsing (#3953)

This PR addresses the problem introduced after #3766.

Fixes #3941 

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>

* Use correct go module file syntax (#4021)

The correct syntax for the go directive in a go.mod file is
`go MAJOR.MINOR`, with no patch version.
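For example (module path shown for illustration):

```
module github.com/linkerd/linkerd2

go 1.13
```

At the time of this change, a patch version such as `go 1.13.4` was rejected by the go tool.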

Signed-off-by: Dax McDonald <dax@rancher.com>

* Update linkerd/stern to fix go.mod parsing (#4173)

## Motivation

I noticed the Go language server stopped working in VS Code and narrowed it
down to `go build ./...` failing with the following:

```
❯ go build ./...
go: github.com/linkerd/stern@v0.0.0-20190907020106-201e8ccdff9c: parsing go.mod: go.mod:3: usage: go 1.23
```

This change updates `linkerd/stern` version with changes made in
linkerd/stern#3 to fix this issue.

This does not depend on #4170, but it is also needed in order to completely
fix `go build ./...`

* Bump proxy-init to v1.3.2 (#4170)

* Bump proxy-init to v1.3.2

Bumped `proxy-init` version to v1.3.2, fixing an issue with `go.mod`
(linkerd/linkerd2-proxy-init#9).
This is a non-user-facing fix.

* Set auth override (#4160)

Set AuthOverride when present on endpoints annotation

Signed-off-by: Zahari Dichev <zaharidichev@gmail.com>

* Upgrade to client-go 0.17.4 and smi-sdk-go 0.3.0 (#4221)

Here we upgrade our dependencies on client-go to 0.17.4 and smi-sdk-go to 0.3.0.  Since smi-sdk-go uses client-go 0.17.4, these upgrades must be performed simultaneously.

This also requires simultaneously upgrading our dependency on linkerd/stern to a SHA which also uses client-go 0.17.4.  This keeps all of our transitive dependencies synchronized on one version of client-go.

This ALSO requires updating our codegen scripts to use the 0.17.4 version of code-generator and running it to generate 0.17.4 compatible generated code.  I took this opportunity to update our code generation script to properly use the version of code-generater from `go.mod` rather than a hardcoded SHA.

Signed-off-by: Alex Leong <alex@buoyant.io>

* Upgrade to go 1.14.2 (#4278)

Upgrade Linkerd's base docker image to use go 1.14.2 in order to stay modern.

The only code change required was to update a test which was checking the error message of a `crypto/x509.CertificateInvalidError`.  The error message of this error changed between go versions.  We update the test to not check for the specific error string so that this test passes regardless of go version.

Signed-off-by: Alex Leong <alex@buoyant.io>

* Refactor CNI integration tests to use annotations functions (#4363)

Followup to #4341

Replaced all the `t.Error`/`t.Fatal` calls in the integration tests with the
new functions defined in `testutil/annotations.go` as described in #4292,
in order for the errors to produce Github annotations.

This piece takes care of the CNI integration test suite.

This also enables the annotations for these and the general integration
tests, by setting the `GH_ANNOTATIONS` environment variable in the
workflows whose flakiness we're interested in catching: Kind
integration, Cloud integration and Release.

Re #4176

* install-cni.sh: Fix shellcheck issues (#4405)

Where cat and echo are actually not needed, they have been removed.

Signed-off-by: Joakim Roubert <joakim.roubert@axis.com>

* Add --close-wait-timeout inject flag (#4409)

Depends on https://github.com/linkerd/linkerd2-proxy-init/pull/10

Fixes #4276 

We add a `--close-wait-timeout` inject flag which configures the proxy-init container to run with `privileged: true` and to set `nf_conntrack_tcp_timeout_close_wait`. 
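A hedged sketch of what this renders in the injected manifest (field values are illustrative, not the exact output):

```yaml
initContainers:
  - name: linkerd-init
    args: ["--close-wait-timeout", "30s"]
    securityContext:
      # privileged is required to set the
      # nf_conntrack_tcp_timeout_close_wait sysctl
      privileged: true
```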

Signed-off-by: Alex Leong <alex@buoyant.io>

* Fix quotes in shellscripts (#4406)

- Add quotes where missing, to handle whitespace and the like.
- Use single quotes for strings that need no expansion.
- Fix quotes where the current quoting would cause errors.
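A small illustration of why the missing quotes matter: unquoted variable expansions undergo word splitting, quoted ones do not.

```shell
#!/usr/bin/env sh
# Demonstrate word splitting on an unquoted expansion.
dir=$(mktemp -d)
cd "$dir" || exit 1

name='my file.txt'

# Unquoted: the shell splits on the space and creates two files,
# "my" and "file.txt".
touch $name

# Quoted: a single file named "my file.txt".
touch "$name"

ls -1
```

After running this, the directory contains three files, showing how the unquoted form silently did the wrong thing.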

Signed-off-by: Joakim Roubert <joakim.roubert@axis.com>

* Use buster for base and web images too (#4567)

Requires setting iptables-legacy as the iptables provider.

Signed-off-by: Joakim Roubert <joakim.roubert@axis.com>

* Improve shellscript portability by using /bin/env (#4628)

Using `/usr/bin/env` increases portability for the shell scripts (and is often requested by e.g. Mac users). It also facilitates testing scripts with different Bash versions via the Bash containers, as they have bash in `/usr/local` and not `/bin`; with `/usr/bin/env`, there is no need to change the script when testing. (I assume the latter was behind https://github.com/linkerd/linkerd2/pull/4593/files/c301ea214b7ccf8d74d7c41cbf8c4cc05fea7d4a#diff-ecec5e3a811f60bc2739019004fa35b0, which would not happen using `/usr/bin/env`.)

Signed-off-by: Joakim Roubert <joakimr@axis.com>

* Add support for Helm configuration of per-component proxy resources requests and limits (#4226)

Signed-off-by: Lutz Behnke <lutz.behnke@finleap.com>

* Update proxy-api version to v0.1.13 (#4614)

This update includes no API changes, but updates grpc-go
to the latest release.

* Upgrade generated protobuf files to v1.4.2 (#4673)

Regenerated protobuf files, using version 1.4.2 that was upgraded from
1.3.2 with the proxy-api update in #4614.

As of v1.4, protobuf messages may no longer be copied (because they
hold a mutex), so whenever a message is passed to or returned from a
function we need to use a pointer.

This affects _mostly_ test files.

This is required to unblock #4620 which is adding a field to the config
protobuf.

* add fish shell completion (#4751)

fixes #4208

Signed-off-by: Wei Lun <weilun_95@hotmail.com>

* Migrate CI to docker buildx and other improvements (#4765)

* Migrate CI to docker buildx and other improvements

## Motivation
- Improve build times in forks, especially when rerunning builds because of some flaky test.
- Start using `docker buildx` to pave the way for multiplatform builds.

## Performance improvements
These timings were taken for the `kind_integration.yml` workflow when we merged and rerun the lodash bump PR (#4762)

Before these improvements:
- when merging: `24:18`
- when rerunning after merge (docker cache warm): `19:00`
- when running the same changes in a fork (no docker cache): `32:15`

After these improvements:
- when merging: `25:38`
- when rerunning after merge (docker cache warm): `19:25`
- when running the same changes in a fork (docker cache warm): `19:25`

As explained below, non-forks and forks now use the same cache, so the important takeaway is that forks will always start with a warm cache and we'll no longer see long build times like the `32:15` above.
The downside is a slight increase in the build times for non-forks (up to a little more than a minute, depending on the case).

## Build containers in parallel
The `docker_build` job in the `kind_integration.yml`, `cloud_integration.yml` and `release.yml` workflows relied on running `bin/docker-build` which builds all the containers in sequence. Now each container is built in parallel using a matrix strategy.

## New caching strategy
CI now uses `docker buildx` for building the container images, which allows using an external cache source for builds, a location in the filesystem in this case. That location gets cached using actions/cache, using the key `{{ runner.os }}-buildx-${{ matrix.target }}-${{ env.TAG }}` and the restore key `${{ runner.os }}-buildx-${{ matrix.target }}-`.

For example when building the `web` container, its image and all the intermediary layers get cached under the key `Linux-buildx-web-git-abc0123`. When that has been cached in the `main` branch, that cache will be available to all the child branches, including forks. If a new branch in a fork asks for a key like `Linux-buildx-web-git-def456`, the key won't be found during the first CI run, but the system falls back to the key `Linux-buildx-web-git-abc0123` from `main` and so the build will start with a warm cache (more info about how keys are matched in the [actions/cache docs](https://docs.github.com/en/actions/configuring-and-managing-workflows/caching-dependencies-to-speed-up-workflows#matching-a-cache-key)).
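A hedged sketch of the workflow pieces described above (step names and paths are illustrative, not the exact workflow files):

```yaml
- name: Cache buildx layers
  uses: actions/cache@v2
  with:
    path: /tmp/.buildx-cache
    key: ${{ runner.os }}-buildx-${{ matrix.target }}-${{ env.TAG }}
    restore-keys: ${{ runner.os }}-buildx-${{ matrix.target }}-

- name: Build image with external cache
  run: |
    docker buildx build \
      --cache-from type=local,src=/tmp/.buildx-cache \
      --cache-to type=local,dest=/tmp/.buildx-cache \
      -t "${{ matrix.target }}:${{ env.TAG }}" .
```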

## Packet host no longer needed
To benefit from the warm caches both in non-forks and forks like just explained, we're required to ditch doing the builds in Packet and now everything runs in the github runners VMs.
As a result there's no longer separate logic for non-forks and forks in the workflow files; `kind_integration.yml` was greatly simplified but `cloud_integration.yml` and `release.yml` got a little bigger in order to use the actions artifacts as a repository for the images built. This bloat will be fixed when support for [composite actions](https://github.com/actions/runner/blob/users/ethanchewy/compositeADR/docs/adrs/0549-composite-run-steps.md) lands in github.

## Local builds
You are still able to run `bin/docker-build` or any of the `docker-build.*` scripts, and to make use of buildx, run those same scripts after having set the env var `DOCKER_BUILDKIT=1`. Using buildx presupposes you have installed it, as instructed [here](https://github.com/docker/buildx).

## Other
- A new script `bin/docker-cache-prune` is used to remove unused images from the cache. Without that the cache grows constantly and we can rapidly hit the 5GB limit (when the limit is attained the oldest entries get evicted).
- The `go-deps` dockerfile base image was changed from `golang:1.14.2` (ubuntu based) to `golang-1:14.2-alpine` also to conserve cache space.

# Addressed separately in #4875:

Got rid of the `go-deps` image and instead added something similar on top of all the Dockerfiles dealing with `go`, as a first stage for those Dockerfiles. That continues to serve as a way to pre-populate go's build cache, which speeds up the builds in the subsequent stages. That build should in theory be rebuilt automatically only when `go.mod` or `go.sum` change, and now we don't require running `bin/update-go-deps-shas`. That script was removed along with all the logic elsewhere that used it, including the `go_dependencies` job in the `static_checks.yml` github workflow.

The list of modules preinstalled was moved from `Dockerfile-go-deps` to a new script `bin/install-deps`. I couldn't find a way to generate that list dynamically, so whenever a slow-to-compile dependency is found, we have to make sure it's included in that list.

Although this simplifies the dev workflow, note that the real motivation behind this was a limitation in buildx's `docker-container` driver that forbids us from depending on images that haven't been pushed to a registry, so we have to resort to building the dependencies as a first stage in the Dockerfiles.

* CI: Remove Base image (#4782)

Removed the dependency on the base image, and instead install the needed packages in the Dockerfiles for debug and CNI.
Also removed some obsolete info from BUILD.md

Signed-off-by: Ali Ariff <ali.ariff12@gmail.com>

* Build ARM docker images (#4794)

Build ARM docker images in the release workflow.

# Changes:
- Add new env keys `DOCKER_MULTIARCH` and `DOCKER_PUSH`. When set, they build multi-arch images and push them to the registry. See https://github.com/docker/buildx/issues/59 for why they must be pushed to the registry.
- Usage of `crazy-max/ghaction-docker-buildx` is necessary as it comes already configured for cross-compilation (using QEMU), so we can just use it instead of setting it up manually.
- `buildx` now makes the default platform build arguments available. (See: https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope)
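The automatic platform arguments mentioned above can be used to cross-compile inside a Dockerfile; a hedged sketch (paths and base images are illustrative):

```dockerfile
# --platform=$BUILDPLATFORM runs the build stage natively;
# buildx injects TARGETOS/TARGETARCH for each requested platform.
FROM --platform=$BUILDPLATFORM golang:1.14 AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY . .
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .

FROM alpine
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```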

# Follow-up:
- Releasing the CLI binary in the ARM architecture. The docker images resulting from these changes are already built for ARM. Still, we need another adjustment to retrieve those binaries and name them correctly as part of the GitHub Release artifacts.

Signed-off-by: Ali Ariff <ali.ariff12@gmail.com>

* Push docker images to ghcr.io instead of gcr.io (#4953)

* Push docker images to ghcr.io instead of gcr.io

The `cloud_integration.yml` and `release.yml` workflows were modified to
log into ghcr.io, and remove the `Configure gcloud` step which is no
longer necessary.

Note that besides the changes to cloud_integration.yml and release.yml, there was a change to the upgrade-stable integration test so that we do `linkerd upgrade --addon-overwrite` to reset the addons settings, because in stable-2.8.1 the Grafana image was pegged to `gcr.io/linkerd-io/grafana` in `linkerd-config-addons`. This will need to be mentioned in the 2.9 upgrade notes.

Also the egress integration test has a debug container that now is pegged to the edge-20.9.2 tag.

Besides that, the other changes are just a global search and replace (s/gcr.io/linkerd-io/ghcr.io/linkerd/).

* CNI: Use skip ports configuration in CNI (#4974)

* CNI: Use skip ports configuration in CNI

This PR updates the install and `cmdAdd` workflow (which is called
for each new Pod creation) to retrieve and set the configured Skip
Ports. It also updates the `cmdAdd` workflow to check whether the new
Pod is a control plane Pod, and adds `443` to OutBoundSkipPort so
that 443 (used with the k8s API) is skipped; it was causing errors
because an unintended resolve lookup was happening for it.

* Bump k8s client-go to v0.19.2 (#5002)

Fixes #4191 #4993

This bumps Kubernetes client-go to the latest v0.19.2 (We had to switch directly to 1.19 because of this issue). Bumping to v0.19.2 required upgrading to smi-sdk-go v0.4.1. This also depends on linkerd/stern#5

This consists of the following changes:

- Fix ./bin/update-codegen.sh by adding the template path to the gen commands, as it is needed after we moved to GOMOD.
- Bump all k8s related dependencies to v0.19.2
- Generate CRD types, client code using the latest k8s.io/code-generator
- Use context.Context as the first argument, in all code paths that touch the k8s client-go interface

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>

* Updated debian image tags (#5249)

Signed-off-by: Agnivesh Adhikari <agnivesh.adhikari@gmail.com>

* Update debian base images to buster-20210208-slim (#5750)

Before the upcoming stable release, we should update our base images to
use the most recent Debian images to pick up any security fixes that may
have been addressed.

This change updates all of our debian images to use the
`buster-20210208-slim` tag.

* Update Go to 1.14.15 (#5751)

The Go-1.14 release branch includes a number of important updates. This
change updates our containers' base image to the latest release, 1.14.15

See linkerd/linkerd2-proxy-init#32
Fixes #5655

* docker: Access container images via cr.l5d.io (#5756)

We've created a custom domain, `cr.l5d.io`, that redirects to `ghcr.io`
(using `scarf.sh`). This custom domain allows us to swap the underlying
container registry without impacting users. It also provides us with
important metrics about container usage, without collecting PII like IP
addresses.

This change updates our Helm charts and CLIs to reference this custom
domain. The integration test workflow now refers to the new domain,
while the release workflow continues to use the `ghcr.io/linkerd` registry
for the purpose of publishing images.

* cni: add ConfigureFirewall error propagation (#5811)

This change adds error propagation for the CNI's ADD command so that any failures in the `ConfigureFirewall` function to configure the Pod's iptables can be bubbled up to be logged and handled.

Fixes #5809 

Signed-off-by: Frank Gu <frank@voiceflow.com>

* update go.mod and docker images to go 1.16.2 (#5890)

* update go.mod and docker images to go 1.16.1

Signed-off-by: Dennis Adjei-Baah <dennis@buoyant.io>

* update test error messages for ParseDuration

* update go version to 1.16.2

* fix: issues affecting code quality (#5827)

Fix various lint issues:

- Remove unnecessary calls to fmt.Sprint
- Fix check for empty string
- Fix unnecessary calls to Printf
- Combine multiple `append`s into a single call

Signed-off-by: shubhendra <withshubh@gmail.com>

* Update Go to 1.16.4 (#6170)

Go 1.16.4 includes a fix for a denial-of-service in net/http: golang/go#45710

Go's error file-line formatting changed in 1.16.3, so this change
updates tests to only do suffix matching on these error strings.

* Enable readOnlyFileSystem for cni plugin chart (#6469)

Increase container security by making the root file system of the cni
install plugin read-only.

Change the temporary directory used in the cni install script, add a
writable EmptyDir volume and enable readOnlyFileSystem securityContext
in cni plugin helm chart.

Tested this by building the container image of the cni plugin and
installed the chart onto a cluster. Logs looked the same as before this
change.

Fixes #6468

Signed-off-by: Gerald Pape <gerald@giantswarm.io>

* Upgrade CNI to v0.8.1 (#7270)

Addresses #7247 and unblocks #7094

Bumped the cni lib version in `go.mod`, which required implementing the
new CHECK command through `cmdCheck`, which for now is a no-op.

* build(deps): bump github.com/containernetworking/cni from 0.8.1 to 1.0.1 (#7346)

* build(deps): bump github.com/containernetworking/cni from 0.8.1 to 1.0.1

Bumps [github.com/containernetworking/cni](https://github.com/containernetworking/cni) from 0.8.1 to 1.0.1.
- [Release notes](https://github.com/containernetworking/cni/releases)
- [Commits](https://github.com/containernetworking/cni/compare/v0.8.1...v1.0.1)

---
updated-dependencies:
- dependency-name: github.com/containernetworking/cni
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>

* build: upgrade to Go 1.17 (#7371)

* build: upgrade to Go 1.17

This commit introduces three changes:
	1. Update the `go` directive in `go.mod` to 1.17
	2. Update all Dockerfiles from `golang:1.16.2` to
	   `golang:1.17.3`
	3. Update all CI to use Go 1.17

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

* chore: run `go fmt ./...`

This commit synchronizes `//go:build` lines with `// +build` lines.

Reference: https://go.googlesource.com/proposal/+/master/design/draft-gobuild.md
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>

* bin/shellcheck-all: Add filename/shebang check (#7541)

We only run shellcheck for files that contain a #!/usr/bin/env shebang
with either bash or sh. If a new shellscript file is added that has the
.sh extension but lacks such a shebang (or has a different one),
shellcheck will not be run for that file, and there is a risk that
such a file slips into the repo under the radar.

This patch adds a check for all .sh files to make sure they have a
corresponding shebang in order for them to be passed to shellcheck.
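The check can be sketched as follows (a self-contained illustration, not the actual `bin/shellcheck-all` script):

```shell
#!/usr/bin/env sh
# Verify that every .sh file starts with an sh/bash env shebang.
dir=$(mktemp -d)
printf '#!/usr/bin/env bash\necho ok\n' > "$dir/good.sh"
printf 'echo no shebang\n' > "$dir/bad.sh"

missing=0
for f in "$dir"/*.sh; do
    if ! head -n1 "$f" | grep -Eq '^#!/usr/bin/env (sh|bash)'; then
        echo "missing shebang: $f"
        missing=$((missing + 1))
    fi
done
echo "files missing shebang: $missing"
```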

Change-Id: I24235e672dd82c7c73df6fe6c8beda2a579bd187
Signed-off-by: Joakim Roubert <joakimr@axis.com>

* Fix CNI integration test (#7660)

Reverts the change made to `env_vars.sh` in #7541

That file is consumed by `docker run --env-file` which requires the old
format, as documented [here](https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file).

Also renamed it to `env_vars.list` so that it is not mistaken for a
shell script.

This broke the `ARM64 integration test` as seen here:
https://github.com/linkerd/linkerd2/runs/4887813913?check_suite_focus=true#step:7:34

* go: Enable `errorlint` checking (#7885)

Since Go 1.13, errors may "wrap" other errors. [`errorlint`][el] checks
that error formatting and inspection is wrapping-aware.

This change enables `errorlint` in golangci-lint and updates all error
handling code to pass the lint. Some comparisons in tests have been left
unchanged (using `//nolint:errorlint` comments).

[el]: https://github.com/polyfloyd/go-errorlint

Signed-off-by: Oliver Gould <ver@buoyant.io>

* Add `gosec` and `errcheck` lints (#7954)

Closes #7826

This adds the `gosec` and `errcheck` lints to the `golangci` configuration. Most significant lints have been fixed by individual changes, but this enables them by default so that all future changes are caught ahead of time.

A significant number of these lints have been excluded by the various `exclude-rules` added to `.golangci.yml`. These cover operations on files that generally do not fail, such as `Copy`, `Flush`, or `Write`. We also choose to ignore most errors when cleaning up functions via the `defer` keyword.

Aside from those, there are several other rules added that all have comments explaining why it's okay to ignore the errors that they cover.

Finally, several smaller fixes in the code have been made where it seems necessary to catch errors or at least log them.

Signed-off-by: Kevin Leimkuhler <kleimkuhler@icloud.com>

* Update debian to bullseye (#8287)

Several container images use `debian:buster-20210208-slim`. `bullseye`
is now the default version (i.e., referenced by the `latest` tag).

This change updates container images that use debian to reference
`bullseye` instead of `buster`. The date tags have been dropped so that
we pick up the latest patch version on each Linkerd release.

Signed-off-by: Oliver Gould <ver@buoyant.io>

* Introduce file watch to CNI installer (#8299)

Introduce fs watch for cni installer

Our CNI installer script is prone to race conditions, especially when a
node is rebooted, or restarted. Order of configuration should not matter
and our CNI plugin should attach to other plugins (i.e., chain to them) or
run standalone when applicable. In order to be more flexible, we
introduce a filesystem watcher through inotifywait to react to changes
in the cni config directory. We react to changes based on SHAs.

Linkerd's CNI plugin should append configuration when at least one other
file exists, but if multiple files exist, the CNI plugin should not have
to make a decision on whether that's the current file to append itself
to. As a result, most of the logic in this commit revolves around the
assumption that whatever file we detect has been created should be
injected with Linkerd's config -- the rest is up to the host.
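The SHA-based de-duplication described above can be sketched as follows (an illustration of the idea, not the actual installer script, which reacts to inotifywait events):

```shell
#!/usr/bin/env sh
# Only re-process a CNI config file when its content hash changes.
config=$(mktemp)
echo '{"cniVersion":"0.4.0"}' > "$config"

last_sha=''
process_if_changed() {
    # Skip missing files (e.g. DELETE events).
    [ -e "$config" ] || return 0
    sha=$(sha256sum "$config" | cut -d' ' -f1)
    if [ "$sha" != "$last_sha" ]; then
        last_sha=$sha
        echo "processing $config"
    else
        echo "skipping (already seen)"
    fi
}

process_if_changed   # first event: processes the file
process_if_changed   # duplicate event for same content: skipped
```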

In addition, we also introduce a sleep in the cni preStop hook, change to
using bash, and introduce procps to get access to `ps` and `pgrep`.

Closes #8070

Signed-off-by: Matei David <matei@buoyant.io>
Co-authored-by: Oliver Gould <ver@buoyant.io>
Co-authored-by: Alejandro Pedraza <alejandro@buoyant.io>

* Shellscript housekeeping (#8549)

- Replace simple awk commands with shell built-ins
- Single quotes instead of double quotes for static strings
- No need for -n operator to check that variables are not empty
- Use single echo calls instead of several consecutive ones
- No quotes are needed for variable assignments
- Use the more lightweight echo instead of printf where applicable
- No need to use bash's == comparison when there is the POSIX =

Signed-off-by: Joakim Roubert <joakim.roubert@axis.com>

* Update Go to the latest 1.17 release (#8603)

Our docker images hardcode a patch version, 1.17.3, which does not
include a variety of important fixes that have been released:

> go1.17.4 (released 2021-12-02) includes fixes to the compiler, linker,
> runtime, and the go/types, net/http, and time packages. See the Go
> 1.17.4 milestone on our issue tracker for details.

> go1.17.5 (released 2021-12-09) includes security fixes to the net/http
> and syscall packages. See the Go 1.17.5 milestone on our issue tracker
> for details.

> go1.17.6 (released 2022-01-06) includes fixes to the compiler, linker,
> runtime, and the crypto/x509, net/http, and reflect packages. See the Go
> 1.17.6 milestone on our issue tracker for details.

> go1.17.7 (released 2022-02-10) includes security fixes to the go
> command, and the crypto/elliptic and math/big packages, as well as bug
> fixes to the compiler, linker, runtime, the go command, and the
> debug/macho, debug/pe, and net/http/httptest packages. See the Go 1.17.7
> milestone on our issue tracker for details.

> go1.17.8 (released 2022-03-03) includes a security fix to the
> regexp/syntax package, as well as bug fixes to the compiler, runtime,
> the go command, and the crypto/x509 and net packages. See the Go 1.17.8
> milestone on our issue tracker for details.

> go1.17.9 (released 2022-04-12) includes security fixes to the
> crypto/elliptic and encoding/pem packages, as well as bug fixes to the
> linker and runtime. See the Go 1.17.9 milestone on our issue tracker for
> details.

> go1.17.10 (released 2022-05-10) includes security fixes to the syscall
> package, as well as bug fixes to the compiler, runtime, and the
> crypto/x509 and net/http/httptest packages. See the Go 1.17.10 milestone
> on our issue tracker for details.

> go1.17.11 (released 2022-06-01) includes security fixes to the
> crypto/rand, crypto/tls, os/exec, and path/filepath packages, as well as
> bug fixes to the crypto/tls package. See the Go 1.17.11 milestone on our
> issue tracker for details.

This changes our container configs to use the latest 1.17 release on
each build so that these patch releases are picked up without manual
intervention.

Signed-off-by: Oliver Gould <ver@buoyant.io>

* Fix CNI plugin event processing (#8778)

The CNI plugin watches for file changes and reacts accordingly. To
append our CNI plugin configuration to an existing configuration file,
we keep a watch on the config file directory, and whenever a new file is
created (or modified) we append to it. To avoid redundancy and infinite
loops, after a file has been processed, we save its SHA in-memory.
Whenever a new update is received, we calculate the file's SHA, and if
it differs from the previous one, we update it (since the file hasn't
been 'seen' by our script yet). The in-memory SHA is continuously
overridden as updates are received and processed.

In our processing logic, we override the SHA only if the file exists (in
short, we want to avoid processing the SHA on 'DELETE' events). However,
when a different CNI plugin deletes the file, it typically re-creates it
immediately after. Since we do not check for the event type and instead
rely only on file existence, we end up calculating the SHA for a new
file before the file has had a chance to be processed when its
associated 'CREATE' event is picked up. This means that new files will
essentially be skipped from being updated, since the script considers
them to have been processed already (since their SHA was calculated when
the previous file was deleted).

This change fixes the bug by introducing a type check for the event in
addition to checking the file's existence. This allows us to be sure
that new files are only processed when the 'CREATE' event is picked up,
ensuring we do not skip them.

Signed-off-by: Matei David <matei@buoyant.io>

* Bump proxy-init version to v1.6.1 (#8913)

Release v1.6.1 of proxy-init adds support for iptables-nft. This change
bumps up the proxy-init version used in code, chart values, and golden
files.

* Update go.mod dep
* Update CNI plugin with new opts
* Update proxy-init ref in golden files and chart values
* Update policy controller CI workflow

Signed-off-by: Matei David <matei@buoyant.io>

* Update Go to 1.18 (#9019)

Go 1.18 features a number of important changes, notably removing client
support for defunct TLS versions: https://tip.golang.org/doc/go1.18

This change updates our Go version in CI and development.

Signed-off-by: Oliver Gould <ver@buoyant.io>

* Allow running Linkerd CNI plugin stand-alone (#8864)

This PR allows Linkerd-CNI to be called in non-chained (stand-alone) mode.
Together with a separate controller, https://github.com/ErmakovDmitriy/linkerd-multus-attach-operator, this PR should make it possible to run Linkerd-CNI in Kubernetes clusters with Multus CNI.

The main issue with Multus-CNI clusters is that Multus does not handle "*.conflist" CNI configuration files, so Linkerd-CNI is ignored. Please, take a look at some details in issue #8553.

Short summary about the aforementioned controller: it adds Multus NetworkAttachmentDefinitions to namespaces which have special annotation `linkerd.io/multus=enabled` and patches Pod definitions with `k8s.cni.cncf.io/v1=linkerd-cni`. The result is that Linkerd-CNI binary is called by Multus with configuration from the NetworkAttachmentDefinition.

For use with Openshift, one should manually annotate a namespace or a Pod with the `config.linkerd.io/proxy-uid` annotation, using some value in the allowed range, for instance:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    # I used UID in the end of the range "openshift.io/sa.scc.uid-range"
    config.linkerd.io/proxy-uid: "1000739999"
    linkerd.io/inject: enabled
    linkerd.io/multus: enabled
    openshift.io/sa.scc.mcs: s0:c27,c14
    openshift.io/sa.scc.supplemental-groups: 1000730000/10000
    openshift.io/sa.scc.uid-range: 1000730000/10000
  labels:
    config.linkerd.io/admission-webhooks: enabled
    kubernetes.io/metadata.name: emojivoto
  name: emojivoto
```

Signed-off-by: Dmitrii Ermakov <demonihin@gmail.com>

* Remove old .conf file from CNI directory when we convert .conf file to .conflist (#9555)

* Change the integration test to check that the CNI configuration directory only has a single configuration file
* Change the install script to remove the old .conf file when it's rewritten into a .conflist

* Replace usage of io/ioutil package (#9613)

`io/ioutil` has been deprecated since go 1.16 and the linter started to…

With ARM-based dev machines and servers becoming more common, it has become increasingly important to build Docker images that support multiple architectures. This guide will show you how to build these Docker images on any machine of your choosing.

https://www.apple.com/newsroom/images/product/mac/standard/Apple_M1-Pro-M1-Max_CPU-Performance_10182021_big.jpg.large_2x.jpg

This is the graph that changed the landscape for dev machines. This graph opened thousands of issues on GitHub asking "M1 support when?". This website was created in the immediate aftermath and surged with traffic. So here we are, over a year later, in a developer ecosystem where every engineer on a team might be in a completely different environment, whether that's Linux, an Apple Silicon Mac, an Intel Mac, GitHub Codespaces, or a custom-built cloud dev box.

Meanwhile, AWS has been slowly building up its ARM-based instances with Graviton 1 and, more recently, Graviton 2, which offer significant cost savings. Azure has joined the club with its Ampere Altra offerings, and GCP just announced its ARM offerings. At Speedscale, we've decided to ride this wave and take advantage of Graviton instances for our Kubernetes clusters. We build all of our Docker images for multiple architectures and deploy to ARM instances, which has saved us about 30% in compute costs alone.

Given that the industry is embracing ARM and has mostly moved to containerized applications, surely all these architectures have been abstracted away and everything just works, right? Right?

Multi-architecture images

Kind of. Take this simple Go image where we build a binary and put it in a final alpine release image.

```dockerfile
FROM golang:1.18 as build

WORKDIR /go/src/github.com/kush/hello-world
COPY . .
RUN go build -o /hello-world

FROM alpine
ENTRYPOINT ["/hello-world"]
COPY --from=build /hello-world /hello-world
```

We can build this with `docker build -t hello-world .`.

To build the multi-arch version of this image (linux/amd64 and linux/arm64 in this case), we do the following:

```shell
$ docker buildx create --use # only needed the first time
$ docker buildx build --platform linux/amd64,linux/arm64 -t hello-world .
=> [linux/arm64 internal] load metadata for docker.io/library/golang:1.18    1.3s
=> [linux/amd64 internal] load metadata for docker.io/library/golang:1.18
```

Easy enough. You'll notice that two builds happen in parallel, one for each platform you specified. You might also notice this warning when you build your image:

WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load

If you run the command again with the --load option, you'll see `error: docker exporter does not currently support exporting manifest lists`. Furthermore, if you run `docker image ls | grep hello-world`, you won't see the image we just built.

So what’s going on here?

Container images with support for multiple architectures are part of the OCI specification. The image index (more commonly referred to as the manifest list) contains some metadata about the image itself and an array of actual manifests, each of which specifies a platform and its image layer references. Docker supports creating these, but only through the experimental new builder, buildx.

Buildx has some nice new features, like better caching between images and cleaner output during builds. However, it runs completely independently of your local Docker image store. If you run docker ps, you'll see a buildx builder running as a container on your local machine. This is a virtual builder that we created using docker buildx create. There are ways to create builders that run as Kubernetes pods or on remote Docker machines as well, but these require a lot more setup.

Unfortunately, because of this, there's no way to manipulate the images created with buildx after the fact. You can no longer run docker push, since the image doesn't exist in your local image store; instead you must add --push to your docker buildx build command, which pushes to a remote registry as soon as the build is finished.

Can we go faster?

Functionally, we've achieved all that we needed in the above example, but on my machine that build takes about 9 seconds. Part of this slowness comes from the fact that I'm emulating a different architecture, meaning that instructions are constantly being converted from one CPU architecture to another (more details here). Since my project is in Go, and Go natively supports cross compilation, I can leverage this to make my builds even faster. Here is the cross compilation version of my Docker image.

```dockerfile
FROM --platform=$BUILDPLATFORM golang:1.18 as build

WORKDIR /go/src/github.com/kush/hello-world
COPY . .

ARG TARGETOS TARGETARCH
ENV GOOS $TARGETOS
ENV GOARCH $TARGETARCH
RUN go build -o /hello-world

FROM alpine
ENTRYPOINT ["/hello-world"]
COPY --from=build /hello-world /hello-world
```

This takes about 7 seconds on my machine, and the difference would be more significant for a non-trivial example.

Docker populates the variable BUILDPLATFORM, which is used to avoid running the build stage on an emulated base image. It also populates TARGETOS and TARGETARCH, which I use to tell my compiler which platform to build for. Note that we don't change anything in the final release image; we use the architecture-specific alpine image with our cross-compiled binaries.
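The build stage above is doing the same thing a plain Go cross-compile would do on the host; as a rough sketch for comparison (the output file name is illustrative):

```shell
# Go reads GOOS/GOARCH exactly as the Dockerfile's ENV lines set them
# from TARGETOS/TARGETARCH -- this is the same cross-compile, minus Docker.
GOOS=linux GOARCH=arm64 go build -o hello-world-arm64 .

# `file` should report an ARM aarch64 executable.
file hello-world-arm64
```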

For more details on how this actually works, check out this blog post by Docker which visually represents the difference between emulation and cross compilation.

Summary

  1. Create and use a docker buildx builder with `docker buildx create --use`
  2. Build and push with `docker buildx build --push --platform linux/amd64,linux/arm64 -t {REGISTRY}/{IMAGE}:{TAG} .`
  3. If you can cross compile, do that, because it will be much faster.
    1. If you're using Go and specifically depend on cgo, you can still cross compile, but you'll need to install a C cross compiler, e.g. gcc-aarch64-linux-gnu and libc6-dev-arm64-cross
  4. If you're using any other binaries during your build steps, such as a yum install or apt-get, you may need to run this magic line for the emulation to work correctly: `docker run --rm --privileged multiarch/qemu-user-static --reset -p yes`
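Putting the summary together, a minimal build script might look like the following sketch (the registry and image names are placeholders, and the QEMU line is only needed when your build runs emulated foreign binaries):

```shell
#!/usr/bin/env bash
set -euo pipefail

# One-time setup: create a buildx builder and register QEMU handlers.
docker buildx create --use
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Build for both architectures and push straight to the registry,
# since --load cannot handle multi-platform results.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --push \
  -t registry.example.com/hello-world:latest .
```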

What next?

You can now successfully produce multi-architecture Docker images and run them on whatever machine you want. But what about actually deploying these to production? In all likelihood, you depend on projects that do not publish multi-architecture images yet, but you still want to reduce costs by using ARM instances in your Kubernetes cluster. In the next blog post, we'll guide you through setting up a Kubernetes cluster with nodes of multiple architectures and how to guarantee that your existing workloads will not be disrupted in the process.

carlosedp

Building images for multi-arch with --load parameter fails

While trying to build images for multi-architecture (AMD64 and ARM64), I tried to load them into the Docker daemon with the --load parameter but I got an error:

➜ docker buildx build --platform linux/arm64,linux/amd64 --load  -t carlosedp/test:v1  .
[+] Building 1.3s (24/24) FINISHED
 => [internal] load .dockerignore                                                                                                                                                        0.0s
 => => transferring context: 2B                                                                                                                                                          0.0s
 => [internal] load build definition from Dockerfile                                                                                                                                     0.0s
 => => transferring dockerfile: 115B                                                                                                                                                     0.0s
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:latest                                                                                                             0.8s
 => [linux/amd64 internal] load metadata for docker.io/library/golang:1.12-alpine                                                                                                        1.0s
 => [linux/arm64 internal] load metadata for docker.io/library/golang:1.12-alpine                                                                                                        1.2s
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:latest                                                                                                             1.2s
 => [linux/amd64 builder 1/5] FROM docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                          0.0s
 => => resolve docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                                              0.0s
 => [linux/amd64 stage-1 1/4] FROM docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                      0.0s
 => => resolve docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                                          0.0s
 => [internal] load build context                                                                                                                                                        0.0s
 => => transferring context: 232B                                                                                                                                                        0.0s
 => CACHED [linux/amd64 stage-1 2/4] RUN apk add --no-cache file &&     rm -rf /var/cache/apk/*                                                                                          0.0s
 => CACHED [linux/amd64 builder 2/5] WORKDIR /go/src/app                                                                                                                                 0.0s
 => CACHED [linux/amd64 builder 3/5] ADD . /go/src/app/                                                                                                                                  0.0s
 => CACHED [linux/amd64 builder 4/5] RUN CGO_ENABLED=0 go build -o main .                                                                                                                0.0s
 => CACHED [linux/amd64 builder 5/5] RUN mv /go/src/app/main /                                                                                                                           0.0s
 => CACHED [linux/amd64 stage-1 3/4] COPY --from=builder /main /main                                                                                                                     0.0s
 => [linux/arm64 builder 1/5] FROM docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                          0.0s
 => => resolve docker.io/library/golang:1.12-alpine@sha256:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                                              0.0s
 => [linux/arm64 stage-1 1/4] FROM docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                      0.0s
 => => resolve docker.io/library/alpine@sha256:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                                          0.0s
 => CACHED [linux/arm64 stage-1 2/4] RUN apk add --no-cache file &&     rm -rf /var/cache/apk/*                                                                                          0.0s
 => CACHED [linux/arm64 builder 2/5] WORKDIR /go/src/app                                                                                                                                 0.0s
 => CACHED [linux/arm64 builder 3/5] ADD . /go/src/app/                                                                                                                                  0.0s
 => CACHED [linux/arm64 builder 4/5] RUN CGO_ENABLED=0 go build -o main .                                                                                                                0.0s
 => CACHED [linux/arm64 builder 5/5] RUN mv /go/src/app/main /                                                                                                                           0.0s
 => CACHED [linux/arm64 stage-1 3/4] COPY --from=builder /main /main                                                                                                                     0.0s
 => ERROR exporting to oci image format                                                                                                                                                  0.0s
------
 > exporting to oci image format:
------
failed to solve: rpc error: code = Unknown desc = docker exporter does not currently support exporting manifest lists

I understand that the daemon can't see the manifest lists, but I believe there should be a way to tag the images with some variable, like:

docker buildx build --platform linux/arm64,linux/amd64 --load -t carlosedp/test:v1-$ARCH .

to have both images loaded into the daemon, ignoring the manifest list in this case.

tonistiigi

The limitation is temporary, as the daemon should get support for loading multi-arch images with moby/moby#38738, so I'm a bit hesitant to add a custom implementation for it atm.

mcamou

Hi, this issue is 5 months old and the linked issue (moby/moby#38738) is still in Draft. Any news?

EdoFede

Hi,
I’ve the same issue as the original message, trying to build multi-arch image and loading to local docker instance for testing purposes.

I want to build and execute some local tests before pushing the image to the repository.
Previously, for this purpose, I used the standard build command (tagging each architecture) with qemu.

Build runs fine, but at the end…

 => ERROR exporting to oci image format                                                                                                  0.0s
------
 > exporting to oci image format:
------
failed to solve: rpc error: code = Unknown desc = docker exporter does not currently support exporting manifest lists
Build failed
make: *** [build] Error 3

Here’s my environment:

docker version

Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:22:34 2019
 OS/Arch:           darwin/amd64
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:29:19 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

docker buildx version

github.com/docker/buildx v0.3.1-tp-docker 6db68d029599c6710a32aa7adcba8e5a344795a7

git-developer

[…] trying to build multi-arch image and loading to local docker instance for testing purposes.

Depending on the use case, it may be sufficient to run tests on the runner's platform only. If so, this issue can be avoided by omitting the platform parameter on load. Example:

  • Build: docker buildx build --platform linux/arm64,linux/amd64 -t foo/bar:latest .
  • Test: docker buildx build --load -t foo/bar:latest .

tonistiigi

@git-developer The current limitation is the combination of --load and multiple values for --platform. E.g. --platform linux/arm64 --load works fine.
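To illustrate the distinction (tag names are illustrative):

```shell
# Works: a single platform can be exported to the local image store.
docker buildx build --platform linux/arm64 --load -t foo/bar:latest .

# Fails: --load combined with multiple platforms produces a manifest list,
# which the docker exporter cannot currently import:
# docker buildx build --platform linux/arm64,linux/amd64 --load -t foo/bar:latest .
```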

Zhang21

You should use --push for multi-platform builds and --load for a single platform.

Filius-Patris

The pipeline in the project I’m working on runs tests before pushing. Do I have to build the image once for the test and then a second time for production?

Is there any workaround to run a multiarch container before pushing it to a repo?

tonistiigi

@Filius-Patris Your options are:

  • run tests as part of the build
  • build a single-arch image, test it with docker run, then build the multi-arch image. Cache from the first build will be reused.
  • use a local registry
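The local-registry option can be sketched like this (port and names are illustrative; depending on the driver, the builder may need extra configuration, such as host networking or an insecure-registry entry, to reach localhost):

```shell
# Start a throwaway local registry.
docker run -d --name local-registry -p 5000:5000 registry:2

# Push the multi-arch build to it...
docker buildx build --platform linux/amd64,linux/arm64 \
  --push -t localhost:5000/hello-world:test .

# ...then pull back the image for the current host's architecture to test it.
docker pull localhost:5000/hello-world:test
docker run --rm localhost:5000/hello-world:test
```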

alexellis

I also landed here and I'm not sure what the options are; on a practical level, it seems like a hack to have to mix "docker build" commands and docker buildx to get around the issue.

thesix

How would I tag an image with a version and latest?

robtaylor

I’m hitting this as well.. any updates?

MkLHX

The pipeline in the project I’m working on runs tests before pushing. Do I have to build the image once for the test and then a second time for production?

Is there any workaround to run a multiarch container before pushing it to a repo?

Same problem for me. I have to build with --push in each gitlab-ci stage, using --platform linux/arm/v7,linux/amd64

superherointj

This problem just happened to me. Why is this issue closed if it still exists?

Stono

I think because it can't be fixed, right? It actually makes sense: if you build a multi-arch image, you can't --load it, as local docker doesn't support having multi-arch images (when you do a docker pull, it'll pull a single arch based on your system). So at best, --load would have to load just the architecture for your current host.

sakoht

It would be cool if it could "load" multi-arch images and use only the correct one, so that if you, say, re-tag and push, you could do it across architectures.

A lot of code does build/tag/push in separate places.
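For the re-tag-and-push case specifically, a manifest list can be copied to a new tag without ever loading it locally, using buildx imagetools; a sketch with illustrative names:

```shell
# Create a new tag that points at the same multi-arch manifest list.
docker buildx imagetools create \
  -t registry.example.com/hello-world:v1.0.1 \
  registry.example.com/hello-world:latest

# Inspect the result to confirm both architectures are still referenced.
docker buildx imagetools inspect registry.example.com/hello-world:v1.0.1
```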

dswiecki

Couldn't it work in a way where --platform is specified with multiple platforms and --load only loads the one the local docker arch supports? This would allow building and loading in one command, which would be very handy.

vkoivula

Ok lol so this doesn’t work yet after almost three years. Can we ever get it working?

There are very good reasons to want this, as multiple people in this thread have pointed out, and at the time Apple Silicon hadn't even been released, which would make this even more useful.

skhamkar-tibco

Hi All,

Does anyone know the solution to the above problem? I can push images to the registry without any issue, but when I try to load them into Docker Desktop I face the below issue, which is the same as above.

error: docker exporter does not currently support exporting manifest lists

sakoht

@skhamkar-tibco I'd need to see your exact commands to be sure this applies, but in general, when doing a multi-platform buildx build, I have it push the images directly, then pull immediately afterwards, which gets the one built for the current platform only.


docker / buildx

Docker CLI plugin for extended build capabilities with BuildKit

License: Apache License 2.0


buildx’s Introduction


buildx is a Docker CLI plugin for extended build capabilities with
BuildKit.

Key features:

  • Familiar UI from docker build
  • Full BuildKit capabilities with container driver
  • Multiple builder instance support
  • Multi-node builds for cross-platform images
  • Compose build support
  • High-level build constructs (bake)
  • In-container driver support (both Docker and Kubernetes)

Table of Contents

  • Installing
    • Windows and macOS
    • Linux packages
    • Manual download
    • Dockerfile
  • Set buildx as the default builder
  • Building
  • Getting started
    • Building with buildx
    • Working with builder instances
    • Building multi-platform images
  • Manuals
    • High-level build options with Bake
    • Drivers
    • Exporters
    • Cache backends
  • Guides
    • CI/CD
    • CNI networking
    • Using a custom network
    • Using a custom registry configuration
    • OpenTelemetry support
    • Registry mirror
    • Resource limiting
  • Reference
    • buildx bake
    • buildx build
    • buildx create
    • buildx du
    • buildx imagetools
      • buildx imagetools create
      • buildx imagetools inspect
    • buildx inspect
    • buildx install
    • buildx ls
    • buildx prune
    • buildx rm
    • buildx stop
    • buildx uninstall
    • buildx use
    • buildx version
  • Contributing

Installing

Using buildx with Docker requires Docker engine 19.03 or newer.

Warning

Using an incompatible version of Docker may result in unexpected behavior,
and will likely cause issues, especially when using Buildx builders with more
recent versions of BuildKit.

Windows and macOS

Docker Buildx is included in Docker Desktop
for Windows and macOS.

Linux packages

Docker Linux packages also include Docker Buildx when installed using the
DEB or RPM packages.

Manual download

Important

This section is for unattended installation of the buildx component. These
instructions are mostly suitable for testing purposes. We do not recommend
installing buildx using manual download in production environments, as manually
installed binaries are not updated automatically with security updates.

On Windows and macOS, we recommend that you install Docker Desktop
instead. For Linux, we recommend that you follow the instructions specific for your distribution.

You can also download the latest binary from the GitHub releases page.

Rename the relevant binary and copy it to the destination matching your OS:

| OS      | Binary name       | Destination folder                  |
| ------- | ----------------- | ----------------------------------- |
| Linux   | docker-buildx     | $HOME/.docker/cli-plugins           |
| macOS   | docker-buildx     | $HOME/.docker/cli-plugins           |
| Windows | docker-buildx.exe | %USERPROFILE%\.docker\cli-plugins   |

Or copy it into one of these folders to install it system-wide.

On Unix environments:

  • /usr/local/lib/docker/cli-plugins OR /usr/local/libexec/docker/cli-plugins
  • /usr/lib/docker/cli-plugins OR /usr/libexec/docker/cli-plugins

On Windows:

  • C:\ProgramData\Docker\cli-plugins
  • C:\Program Files\Docker\cli-plugins

Note

On Unix environments, it may also be necessary to make it executable with chmod +x:

$ chmod +x ~/.docker/cli-plugins/docker-buildx

Dockerfile

Here is how to install and use Buildx inside a Dockerfile through the
docker/buildx-bin image:

# syntax=docker/dockerfile:1
FROM docker
COPY --from=docker/buildx-bin /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version

Set buildx as the default builder

Running the command docker buildx install
sets up docker builder command as an alias to docker buildx build. This
results in the ability to have docker build use the current buildx builder.

To remove this alias, run docker buildx uninstall.

Building

# Buildx 0.6+
$ docker buildx bake "https://github.com/docker/buildx.git"
$ mkdir -p ~/.docker/cli-plugins
$ mv ./bin/build/buildx ~/.docker/cli-plugins/docker-buildx

# Docker 19.03+
$ DOCKER_BUILDKIT=1 docker build --platform=local -o . "https://github.com/docker/buildx.git"
$ mkdir -p ~/.docker/cli-plugins
$ mv buildx ~/.docker/cli-plugins/docker-buildx

# Local 
$ git clone https://github.com/docker/buildx.git && cd buildx
$ make install

Getting started

Building with buildx

Buildx is a Docker CLI plugin that extends the docker build command with the
full support of the features provided by Moby BuildKit
builder toolkit. It provides the same user experience as docker build with
many new features like creating scoped builder instances and building against
multiple nodes concurrently.

After installation, buildx can be accessed through the docker buildx command
with Docker 19.03. docker buildx build is the command for starting a new
build. With Docker versions older than 19.03, the buildx binary can be called
directly to access the buildx subcommands.

$ docker buildx build .
[+] Building 8.4s (23/32)
 => ...

Buildx will always build using the BuildKit engine and does not require
DOCKER_BUILDKIT=1 environment variable for starting builds.

The docker buildx build command supports features available for docker build,
including features such as outputs configuration, inline build caching, and
specifying target platform. In addition, Buildx also supports new features that
are not yet available for regular docker build like building manifest lists,
distributed caching, and exporting build results to OCI image tarballs.

Buildx is flexible and can be run in different configurations that are exposed
through various "drivers". Each driver defines how and where a build should
run, and each has a different feature set.

We currently support the following drivers:

  • The docker driver (guide, reference)
  • The docker-container driver (guide, reference)
  • The kubernetes driver (guide, reference)
  • The remote driver (guide)

For more information on drivers, see the drivers guide.

Working with builder instances

By default, buildx will initially use the docker driver if it is supported,
providing a very similar user experience to the native docker build. Note that
you must use a local shared daemon to build your applications.

Buildx allows you to create new instances of isolated builders. This can be
used for getting a scoped environment for your CI builds that does not change
the state of the shared daemon or for isolating the builds for different
projects. You can create a new instance for a set of remote nodes, forming a
build farm, and quickly switch between them.

You can create new instances using the docker buildx create
command. This creates a new builder instance with a single node based on your
current configuration.

To use a remote node you can specify the DOCKER_HOST or the remote context name
while creating the new builder. After creating a new instance, you can manage its
lifecycle using the docker buildx inspect,
docker buildx stop, and
docker buildx rm commands. To list all
available builders, use buildx ls. After
creating a new builder you can also append new nodes to it.

To switch between different builders, use docker buildx use <name>.
After running this command, the build commands will automatically use this
builder.
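A typical builder lifecycle, sketched with an illustrative name:

```shell
docker buildx create --name mybuilder   # create an isolated builder instance
docker buildx ls                        # list builders and their nodes
docker buildx use mybuilder             # make it the default for builds
docker buildx inspect mybuilder         # show driver, status, and platforms
docker buildx rm mybuilder              # remove it when finished
```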

Docker also features a docker context
command that can be used for giving names for remote Docker API endpoints.
Buildx integrates with docker context so that all of your contexts
automatically get a default builder instance. While creating a new builder
instance or when adding a node to it you can also set the context name as the
target.

Building multi-platform images

BuildKit is designed to work well for building for multiple platforms and not
only for the architecture and operating system that the user invoking the build
happens to run.

When you invoke a build, you can set the --platform flag to specify the target
platform for the build output (for example, linux/amd64, linux/arm64, or
darwin/amd64).

When the current builder instance is backed by the docker-container or
kubernetes driver, you can specify multiple platforms together. In this case,
it builds a manifest list which contains images for all specified architectures.
When you use this image in docker run
or docker service,
Docker picks the correct image based on the node’s platform.

You can build multi-platform images using three different strategies that are
supported by Buildx and Dockerfiles:

  1. Using the QEMU emulation support in the kernel
  2. Building on multiple native nodes using the same builder instance
  3. Using a stage in Dockerfile to cross-compile to different architectures

QEMU is the easiest way to get started if your node already supports it (for
example, if you are using Docker Desktop). It requires no changes to your
Dockerfile and BuildKit automatically detects the secondary architectures that
are available. When BuildKit needs to run a binary for a different architecture,
it automatically loads it through a binary registered in the binfmt_misc
handler.

For QEMU binaries registered with binfmt_misc on the host OS to work
transparently inside containers they must be registered with the fix_binary
flag. This requires a kernel >= 4.8 and binfmt-support >= 2.1.7. You can check
for proper registration by checking if F is among the flags in
/proc/sys/fs/binfmt_misc/qemu-*. While Docker Desktop comes preconfigured
with binfmt_misc support for additional platforms, for other installations
it likely needs to be installed using tonistiigi/binfmt
image.

$ docker run --privileged --rm tonistiigi/binfmt --install all
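To verify the registration described above (the path assumes a Linux host with the aarch64 handler installed):

```shell
# The flags line should include the letter F (fix-binary) for the
# handler to work transparently inside containers.
grep flags /proc/sys/fs/binfmt_misc/qemu-aarch64
```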

Using multiple native nodes provides better support for more complicated cases
that are not handled by QEMU, and generally has better performance. You can
add additional nodes to the builder instance using the --append flag.

Assuming contexts node-amd64 and node-arm64 exist in docker context ls:

$ docker buildx create --use --name mybuild node-amd64
mybuild
$ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .

Finally, depending on your project, the language that you use may have good
support for cross-compilation. In that case, multi-stage builds in Dockerfiles
can be effectively used to build binaries for the platform specified with
--platform using the native architecture of the build node. A list of build
arguments like BUILDPLATFORM and TARGETPLATFORM is available automatically
inside your Dockerfile and can be leveraged by the processes running as part
of your build.

# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
FROM alpine
COPY --from=build /log /log
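
For Go in particular, the TARGETPLATFORM value can be split into GOOS/GOARCH inside a RUN step. A minimal shell sketch of that split (the trailing variant component, e.g. v7, is ignored here):

```shell
# Sketch: derive GOOS/GOARCH from a TARGETPLATFORM value such as
# linux/arm/v7 or linux/amd64 (the trailing variant is dropped).
TARGETPLATFORM="linux/arm/v7"
GOOS="${TARGETPLATFORM%%/*}"    # first component  -> linux
rest="${TARGETPLATFORM#*/}"     # remainder        -> arm/v7
GOARCH="${rest%%/*}"            # second component -> arm
echo "GOOS=$GOOS GOARCH=$GOARCH"
```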

You can also use tonistiigi/xx Dockerfile
cross-compilation helpers for more advanced use-cases.

High-level build options

See docs/manuals/bake/index.md for more details.

Contributing

Want to contribute to Buildx? Awesome! You can find information about
contributing to this project in CONTRIBUTING.md.

buildx’s Issues

tracking missing build UI

The build UI should follow docker build and remain compatible with it, to allow for the possibility of merging the tools in the future.

Tracking things currently missing (add things):

  • remote contexts (git, https)
  • -f symlink
  • -f -
  • tar context from stdin
  • --cache-from + updates to other importers
  • extra hosts (+ net via entitlements?)
  • -q
  • --iidfile
  • --squash ?
  • -o type=docker to load into docker (+ a driver specific default would be nice)
  • new: --cache-to
  • --security-opt
  • --force-rm
  • --network=custom
  • --compress

@tiborvass

add caching for remembering state of nodes

Buildx makes a lot of connections to the remote daemon to check existing state. Especially when using the ssh transport, the connection handshake is slow, and this phase may take even more time than the actual build.

For example, when using an 18.09 node:
docker context create foo --host ssh://node && docker context use foo

Before we can build, at least the following connections happen:

  • cli calls ping before buildx main is called
  • buildx calls ping to optionally negotiate lower version
  • buildx calls /grpc to check if docker driver can be used
  • this fails, so buildx calls /inspect to see if the container is running
  • buildx actually starts the build connection

The first ping is not needed at all, but I'm not sure if we can step around it. For /grpc support we can remember the state or deduce it from the ping response.

Theoretically, we could avoid inspect and only call it when interactions with the container fail.

We could think about forcing http2 and reusing the connection.

@tiborvass

Allow building for current client platform

There should be a simple way to build for the platform that is invoking the build, e.g. when I want to build a binary for my current system. docker build --platform=CURRENT ?
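
Until such a flag exists, the platform string can be derived from uname. A sketch of the mapping (only a few common machine names are handled; this is not buildx code):

```shell
# Sketch: map `uname -m` output to a docker --platform string.
# Only a few common machine names are covered; extend as needed.
arch_to_platform() {
  case "$1" in
    x86_64)        echo "linux/amd64" ;;
    aarch64|arm64) echo "linux/arm64" ;;
    armv7l)        echo "linux/arm/v7" ;;
    i386|i686)     echo "linux/386" ;;
    *)             echo "linux/$1" ;;
  esac
}

arch_to_platform "$(uname -m)"
```

It could then be used as docker buildx build --platform "$(arch_to_platform "$(uname -m)")" .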

@tiborvass

Q: --cache-to for multi-stage build

After building a multi-stage dockerfile with --cache-to option, I notice only the final stage layers were saved to the --cache-to destination. How can I also cache the intermediate build stage layers? Do I have to use --target to build and cache each stage one by one?

How can I enable i386 arch?

Hello,

I’m trying to build directly on i386, but I’m not sure how I can enable it. My host:

$ docker info
Client:
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.3.0)

Server:
 Containers: 2
  Running: 1
  Paused: 0
  Stopped: 1
 Images: 6
 Server Version: 19.03.1
 Storage Driver: btrfs
  Build Version: Btrfs v4.7.3
  Library Version: 101
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 85f6aa58b8a3170aec9824568f7a31832878b603.m
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.59-1-MANJARO
 Operating System: Manjaro Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 3.778GiB
 Name: vk496-pc
 ID: XLJG:PMBX:6NEF:3SHM:AZBA:FSMW:UPQK:DWLX:L7ZH:EXG3:EVTA:L35W
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: vk496
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine
$ docker buildx version
github.com/docker/buildx v0.3.0 c967f1d570fb393b4475c01efea58781608d093c

This is how I initialize the builder:

$ docker buildx inspect --bootstrap
[+] Building 0.7s (1/1) FINISHED                                                                                                                
 => [internal] booting buildkit                                                                                                            0.7s
 => => starting container buildx_buildkit_mybuilder0                                                                                       0.7s
Name:   mybuilder
Driver: docker-container

Nodes:
Name:      mybuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64

$ docker run --rm --privileged multiarch/qemu-user-static --reset
$ docker buildx inspect --bootstrap
[+] Building 0.9s (1/1) FINISHED                                                                                    
 => [internal] booting buildkit                                                                                0.9s
 => => starting container buildx_buildkit_mybuilder0                                                           0.9s
Name:   mybuilder
Driver: docker-container

Nodes:
Name:      mybuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/arm/v7, linux/arm/v6

Am I missing something?

stuck at pushing image to docker hub

I created a builder that uses my local machine (amd64) and a remote Raspberry Pi (arm/v7).

docker buildx ls
NAME/NODE DRIVER/ENDPOINT             STATUS  PLATFORMS
multi *   docker-container                    
  multi0  unix:///var/run/docker.sock running linux/amd64
  multi1  ssh://[email protected] running linux/arm/v7, linux/arm/v6
default   docker                              
  default default                     running linux/amd64

The build process seems to get stuck at pushing the image.

docker buildx build --platform linux/amd64,linux/arm/v7 -t arribada/gateway --push --progress=plain  .

It is a simple golang app:

FROM --platform=$BUILDPLATFORM golang:alpine AS build
RUN apk add --no-cache git
COPY ./ /tmp/builder/
WORKDIR /tmp/builder/
RUN CGO_ENABLED=0 go build  -o main .
FROM alpine
COPY --from=build main /usr/local/bin/
CMD /usr/local/bin/main

docker buildx build --platform linux/amd64,linux/arm/v7 -t arribada/gateway --push --progress=plain --no-cache .
#1 [internal] booting buildkit
#1 starting container buildx_buildkit_multi1
#1 starting container buildx_buildkit_multi1 8.2s done
#1 DONE 8.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 32B done
#2 DONE 0.1s

#5 [linux/amd64 internal] load metadata for docker.io/library/golang:alpine
#5 DONE 1.9s

#4 [linux/amd64 internal] load metadata for docker.io/library/alpine:latest
#4 DONE 2.0s

#7 [linux/amd64 build 1/5] FROM docker.io/library/golang:alpine@sha256:87e5...
#7 resolve docker.io/library/golang:alpine@sha256:87e527712342efdb8ec5ddf2d57e87de7bd4d2fedf9f6f3547ee5768bb3c43ff done
#7 CACHED

#6 [linux/amd64 stage-1 1/2] FROM docker.io/library/alpine@sha256:6a92cd1fc...
#6 resolve docker.io/library/alpine@sha256:6a92cd1fcdc8d8cdec60f33dda4db2cb1fcdcacf3410a8e05b3741f44a9b5998 done
#6 CACHED

#9 [internal] load build context
#9 transferring context: 119B done
#9 DONE 0.1s

#8 [linux/amd64 build 2/5] RUN apk add --no-cache git
#8 0.272 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
#8 0.662 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
#8 0.890 (1/5) Installing nghttp2-libs (1.38.0-r0)
#8 0.939 (2/5) Installing libcurl (7.65.1-r0)
#8 0.998 (3/5) Installing expat (2.2.7-r0)
#8 1.044 (4/5) Installing pcre2 (10.33-r0)
#8 1.107 (5/5) Installing git (2.22.0-r0)
#8 1.753 Executing busybox-1.30.1-r2.trigger
#8 1.758 OK: 21 MiB in 20 packages
#8 DONE 1.9s

#10 [linux/amd64 build 3/5] COPY ./ /tmp/builder/
#10 DONE 0.1s

#11 [linux/amd64 build 4/5] WORKDIR /tmp/builder/
#11 DONE 0.1s

#12 [linux/amd64 build 5/5] RUN CGO_ENABLED=0 go build  -o main .
#12 0.323 go: finding github.com/pkg/errors v0.8.1
#12 0.324 go: finding github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751
#12 0.324 go: finding github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4
#12 0.324 go: finding github.com/brocaar/lorawan v0.0.0-20190725071148-7d77cf375455
#12 0.325 go: finding github.com/twpayne/go-geom v1.0.6-0.20190712172859-6e5079ee5888
#12 1.187 go: finding gopkg.in/alecthomas/kingpin.v2 v2.2.6
#12 2.522 go: finding github.com/sirupsen/logrus v1.3.0
#12 2.524 go: finding github.com/stretchr/testify v1.3.0
#12 2.525 go: finding github.com/smartystreets/assertions v0.0.0-20190401211740-f487f9de1cd3
#12 2.526 go: finding github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a
#12 2.527 go: finding github.com/jacobsa/oglemock v0.0.0-20150831005832-e94d794d06ff
#12 2.528 go: finding github.com/jacobsa/crypto v0.0.0-20180924003735-d95898ceee07
#12 2.529 go: finding github.com/NickBall/go-aes-key-wrap v0.0.0-20170929221519-1c3aa3e4dfc5
#12 2.620 go: finding github.com/opencontainers/runc v0.1.1
#12 2.789 go: finding golang.org/x/net v0.0.0-20190328230028-74de082e2cca
#12 3.499 go: finding github.com/ory/dockertest v3.3.4+incompatible
#12 4.391 go: finding github.com/DATA-DOG/go-sqlmock v1.3.2
#12 4.636 go: finding github.com/opencontainers/go-digest v1.0.0-rc1
#12 4.738 go: finding github.com/d4l3k/messagediff v1.2.1
#12 4.919 go: finding github.com/opencontainers/image-spec v1.0.1
#12 5.108 go: finding golang.org/x/crypto v0.0.0-20180904163835-0709b304e793
#12 5.381 go: finding github.com/davecgh/go-spew v1.1.0
#12 6.047 go: finding github.com/lib/pq v1.0.0
#12 6.544 go: finding github.com/twpayne/go-kml v1.0.0
#12 6.596 go: finding github.com/jacobsa/ogletest v0.0.0-20170503003838-80d50a735a11
#12 6.774 go: finding github.com/docker/go-connections v0.4.0
#12 6.778 go: finding github.com/jacobsa/oglematchers v0.0.0-20150720000706-141901ea67cd
#12 7.481 go: finding github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5
#12 7.650 go: finding github.com/pmezard/go-difflib v1.0.0
#12 8.153 go: finding github.com/stretchr/objx v0.1.1
#12 8.449 go: finding golang.org/x/tools v0.0.0-20190328211700-ab21143f2384
#12 8.470 go: finding github.com/jacobsa/reqtrace v0.0.0-20150505043853-245c9e0234cb
#12 8.485 go: finding github.com/stretchr/testify v1.2.2
#12 8.614 go: finding github.com/jtolds/gls v4.20.0+incompatible
#12 8.906 go: finding github.com/twpayne/go-polyline v1.0.0
#12 8.909 go: finding github.com/gopherjs/gopherjs v0.0.0-20190328170749-bb2674552d8f
#12 8.988 go: finding golang.org/x/crypto v0.0.0-20190325154230-a5d413f7728c
#12 9.007 go: finding golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3
#12 9.034 go: finding github.com/davecgh/go-spew v1.1.1
#12 9.305 go: finding golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2
#12 9.523 go: finding golang.org/x/sys v0.0.0-20190402054613-e4093980e83e
#12 9.534 go: finding golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a
#12 10.02 go: finding github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1
#12 10.37 go: finding github.com/containerd/continuity v0.0.0-20181203112020-004b46473808
#12 10.38 go: finding github.com/sirupsen/logrus v1.4.1
#12 10.60 go: finding github.com/stretchr/objx v0.1.0
#12 10.60 go: finding github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d
#12 10.96 go: finding github.com/cenkalti/backoff v2.1.1+incompatible
#12 12.38 go: finding github.com/docker/go-units v0.3.3
#12 12.55 go: finding golang.org/x/text v0.3.0
#12 12.83 go: finding golang.org/x/sys v0.0.0-20190405154228-4b34438f7a67
#12 13.51 go: finding golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33
#12 13.52 go: finding golang.org/x/net v0.0.0-20190311183353-d8887717615a
#12 13.52 go: finding github.com/konsorten/go-windows-terminal-sequences v1.0.1
#12 18.22 go: downloading github.com/twpayne/go-geom v1.0.6-0.20190712172859-6e5079ee5888
#12 18.22 go: downloading github.com/brocaar/lorawan v0.0.0-20190725071148-7d77cf375455
#12 18.22 go: downloading github.com/pkg/errors v0.8.1
#12 18.25 go: downloading gopkg.in/alecthomas/kingpin.v2 v2.2.6
#12 18.30 go: extracting github.com/pkg/errors v0.8.1
#12 18.36 go: extracting gopkg.in/alecthomas/kingpin.v2 v2.2.6
#12 18.37 go: downloading github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4
#12 18.37 go: downloading github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751
#12 18.39 go: extracting github.com/brocaar/lorawan v0.0.0-20190725071148-7d77cf375455
#12 18.39 go: extracting github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4
#12 18.40 go: downloading github.com/jacobsa/crypto v0.0.0-20180924003735-d95898ceee07
#12 18.42 go: extracting github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751
#12 18.44 go: extracting github.com/twpayne/go-geom v1.0.6-0.20190712172859-6e5079ee5888
#12 18.89 go: extracting github.com/jacobsa/crypto v0.0.0-20180924003735-d95898ceee07
#12 DONE 22.0s

#13 [linux/amd64 stage-1 2/2] COPY --from=build main /usr/local/bin/
#13 DONE 0.1s

#14 exporting to image
#14 exporting layers
#14 exporting layers 0.9s done
#14 exporting manifest sha256:b1df6565ea0301148310cbcba589664316e6467da060d44ebc83035f23ee976c 0.0s done
#14 exporting config sha256:edf051c77aa488f30f30a7891394af6d3a3a0459f87562f397b10c54b8ad5b96 0.0s done
#14 exporting manifest list sha256:5e95279884f4819a5bea408ef9b577b056c7d4aba397024052a664c2ffa1ca03 0.0s done
#14 pushing layers
#14 pushing layers 8.0s done
#14 pushing manifest for docker.io/arribada/gateway
#14 pushing manifest for docker.io/arribada/gateway 1.0s done
#14 DONE 10.0s

--build-arg behavior differs from docker build

With the docker CLI, --build-arg FOO will pass the value of $FOO to the build arg. If FOO is unset then no value is passed.
With buildx an empty string is always set.
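
The set-vs-unset distinction can be sketched in shell: the docker CLI checks whether the variable exists at all, not whether it is empty. This is a sketch of the described behavior, not the actual CLI code; build_arg_for is a hypothetical helper name.

```shell
# Sketch of the docker CLI behavior described above: emit a build arg only
# when the named environment variable is actually set (empty still counts
# as set). build_arg_for is a hypothetical illustration, not buildx code.
build_arg_for() {
  name="$1"
  if eval "[ \"\${$name+x}\" = x ]"; then
    eval "echo \"--build-arg $name=\$$name\""
  else
    echo "(arg not passed)"
  fi
}

FOO=bar
build_arg_for FOO              # set   -> --build-arg FOO=bar
build_arg_for UNSET_VAR_XYZ42  # unset -> (arg not passed)
```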

Support for starting buildkit with entitlements

As the title suggests, I’d like to run a build with docker buildx build --network=host, but I’m seeing:

failed to solve: rpc error: code = Unknown desc = network.host is not allowed

It looks like we need to set the proper entitlement, but docker buildx create does not expose that option.
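
For reference, BuildKit gates this behind an entitlement that must be enabled when the daemon starts. A sketch of a buildkitd config enabling it (the file would be passed to the builder, and the build itself would still need to request the entitlement with --allow network.host):

```toml
# buildkitd.toml sketch: let builds request the network.host entitlement
insecure-entitlements = [ "network.host" ]
```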

bake doesn’t pass env vars to replace values in docker-compose.yaml

An environment variable set in the shell doesn’t seem to be passed to buildx, as it was with docker-compose.

Besides #16, I don’t see an option to pass args.

export FOO=bar
/usr/libexec/docker/cli-plugins/docker-buildx bake --progress plain -f docker-compose.yml
  image:
    image: image/test:${FOO}
    build:
      context: .
      dockerfile: buildx
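
The substitution the reporter expects can be sketched with plain shell expansion; this mimics what docker-compose does with ${FOO} in compose files and is not buildx code:

```shell
# Sketch: the ${FOO} substitution docker-compose performs on compose files.
FOO=bar
template='image/test:${FOO}'            # literal, unexpanded template
eval "expanded=\"$template\""           # expand ${FOO} from the environment
echo "$expanded"
```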

github.com/docker/buildx v0.2.2-6-g2b03339-tp-docker 2b03339

Client: Docker Engine - Community
Version: 19.03.0-rc3
API version: 1.40
Go version: go1.12.5
Git commit: 27fcb77
Built: Thu Jun 20 02:02:44 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.0-rc3
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: 27fcb77
Built: Thu Jun 20 02:01:20 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683

`buildx use default` doesn’t switch to local

When a remote node is added, one expects buildx use default to switch back to local, but buildx ls still shows the remote builder as current.

Build cannot export to registry on localhost

While setting up a local demo, I found that buildx is unable to access a registry server running on localhost. I can see with a curl command that my registry server is listening on port 5000. This is with the docker-container driver, so I suspect that the container is not using the host network namespace and therefore cannot see any services running on localhost.

$ docker buildx build -f Dockerfile.buildx --target debug --platform linux/amd64,linux/arm64 -t localhost:5000/bmitch-public/golang-hello:buildx1 --output type=registry .
[+] Building 3.1s (24/24) FINISHED
 => [internal] load build definition from Dockerfile.buildx                                                                       0.4s
 => => transferring dockerfile: 39B                                                                                               0.0s
 => [internal] load .dockerignore                                                                                                 0.5s
 => => transferring context: 34B                                                                                                  0.0s
 => [linux/amd64 internal] load metadata for docker.io/library/debian:latest                                                      0.7s
 => [linux/amd64 internal] load metadata for docker.io/tonistiigi/xx:golang                                                       0.7s
 => [linux/amd64 internal] load metadata for docker.io/library/golang:1.12-alpine                                                 0.8s
 => [internal] load build context                                                                                                 0.2s
 => => transferring context: 105B                                                                                                 0.0s
 => [linux/amd64 debug 1/2] FROM docker.io/library/debian@sha256:118cf8f3557e1ea766c02f36f05f6ac3e63628427ea8965fb861be904ec35a6  0.0s
 => => resolve docker.io/library/debian@sha256:118cf8f3557e1ea766c02f36f05f6ac3e63628427ea8965fb861be904ec35a6f                   0.0s
 => [linux/amd64 xgo 1/1] FROM docker.io/tonistiigi/xx:golang@sha256:4703827f56e3964eda6ca07be85046d1dd533eb0ed464e549266c10a4cd  0.0s
 => => resolve docker.io/tonistiigi/xx:golang@sha256:4703827f56e3964eda6ca07be85046d1dd533eb0ed464e549266c10a4cd8a29f             0.0s
 => [linux/amd64 dev 1/6] FROM docker.io/library/golang:1.12-alpine@sha256:cee6f4b901543e8e3f20da3a4f7caac6ea643fd5a46201c3c2387  0.0s
 => => resolve docker.io/library/golang:1.12-alpine@sha256:cee6f4b901543e8e3f20da3a4f7caac6ea643fd5a46201c3c2387183a332d989       0.0s
 => CACHED [linux/amd64 dev 2/6] COPY --from=xgo / /                                                                              0.0s
 => CACHED [linux/amd64 dev 3/6] RUN apk add --no-cache git ca-certificates                                                       0.0s
 => CACHED [linux/amd64 dev 4/6] RUN adduser -D appuser                                                                           0.0s
 => CACHED [linux/amd64 dev 5/6] WORKDIR /src                                                                                     0.0s
 => CACHED [linux/amd64 dev 6/6] COPY . /src/                                                                                     0.0s
 => CACHED [linux/amd64 build 1/1] RUN CGO_ENABLED=0 go build -ldflags '-w -extldflags -static' -o app .                          0.0s
 => CACHED [linux/amd64 debug 2/2] COPY --from=build /src/app /app                                                                0.0s
 => CACHED [linux/amd64 dev 2/6] COPY --from=xgo / /                                                                              0.0s
 => CACHED [linux/amd64 dev 3/6] RUN apk add --no-cache git ca-certificates                                                       0.0s
 => CACHED [linux/amd64 dev 4/6] RUN adduser -D appuser                                                                           0.0s
 => CACHED [linux/amd64 dev 5/6] WORKDIR /src                                                                                     0.0s
 => CACHED [linux/amd64 dev 6/6] COPY . /src/                                                                                     0.0s
 => CACHED [linux/amd64 build 1/1] RUN CGO_ENABLED=0 go build -ldflags '-w -extldflags -static' -o app .                          0.0s
 => CACHED [linux/amd64 debug 2/2] COPY --from=build /src/app /app                                                                0.0s
 => ERROR exporting to image                                                                                                      1.4s
 => => exporting layers                                                                                                           0.1s
 => => exporting manifest sha256:fb7fb1aacd96dcd6c9a6d2654fb2a9cf7692c3ebfd4d15bd1dd397d38713a589                                 0.2s
 => => exporting config sha256:8c443cd193baf5e58914a1ad50d8311e25f7d9ac86772a6ab2df99ed7f4ef6f3                                   0.2s
 => => exporting manifest sha256:d63ec5c6531662c1185b1cc90755573a1bbc1b4754998181847598433fe30e5e                                 0.2s
 => => exporting config sha256:3838e43619611f78eedbc6604fedc3ab134f2beb4225d45d10bb37698603189e                                   0.2s
 => => exporting manifest list sha256:4c8694f90dda751d32ccbd9e48bdeba1042467f07bd0193378e254141e7464ec                            0.2s
 => => pushing layers                                                                                                             0.0s
------
 > exporting to image:
------
failed to solve: rpc error: code = Unknown desc = failed to do request: Head http://localhost:5000/v2/bmitch-public/golang-hello/blobs/sha256:8c443cd193baf5e58914a1ad50d8311e25f7d9ac86772a6ab2df99ed7f4ef6f3: dial tcp 127.0.0.1:5000: connect: connection refused

$ curl -sSLk https://localhost:5000/v2/
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}

$ docker buildx ls
NAME/NODE     DRIVER/ENDPOINT             STATUS  PLATFORMS
new *         docker-container
  new0        unix:///var/run/docker.sock running linux/amd64
default       docker
  default     default                     running linux/amd64

Note this is a low priority issue for me, I’d much rather see #80 solved.

Custom registry, push error on self-signed cert

buildx errors, while docker build succeeds:

cat <<'EOD' > Dockerfile
FROM alpine
RUN touch /test
EOD
docker buildx build \
  -t img.service.consul/alpine:test \
  --platform=linux/amd64,linux/arm64,linux/arm \
  --push \
  .

[+] Building 2.1s (12/12) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                                                       0.0s
 => => transferring dockerfile: 65B                                                                                                                                                                        0.0s
 => [internal] load .dockerignore                                                                                                                                                                          0.1s
 => => transferring context: 2B                                                                                                                                                                            0.0s
 => [linux/arm/v7 internal] load metadata for docker.io/library/alpine:latest                                                                                                                              1.5s
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:latest                                                                                                                               1.3s
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:latest                                                                                                                               1.3s
 => CACHED [linux/arm64 1/2] FROM docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6   0.0s
 => => resolve docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6                      0.0s
 => [linux/arm64 2/2] RUN touch /test                                                                                                0.1s
 => CACHED [linux/amd64 1/2] FROM docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6   0.0s
 => => resolve docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6                      0.0s
 => [linux/amd64 2/2] RUN touch /test                                                                                                0.1s
 => CACHED [linux/arm/v7 1/2] FROM docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6  0.0s
 => => resolve docker.io/library/alpine@sha256:769fddc7cc2f0a1c35abb2f91432e8beecf83916c421420e6a6da9f8975464b6                      0.0s
 => [linux/arm/v7 2/2] RUN touch /test                                                                                                                                                                     0.2s
 => ERROR exporting to image                                                                                                                                                                               0.4s
 => => exporting layers                                                                                                                                                                                    0.2s
 => => exporting manifest sha256:879ac4adf9493121ff9bb12f8566ed993fa7079c59ae02b516a9287d6de7daea                                                                                                          0.0s
 => => exporting config sha256:42c2e158eb64d21ea6832e4842a3c11c9fd70c89afbf2fd0fbe2f16dd2698453                                                                                                            0.0s
 => => exporting manifest sha256:64b17d691e5c5ab257ad37b622c9ed219e50ea637bf5f7aa25e2f65f0bd0c26d                                                                                                          0.0s
 => => exporting config sha256:14af510e076d60389743c6fc7c99e2777ba56bdad11cbbffacb438e7c68f6321                                                                                                            0.0s
 => => exporting manifest sha256:2e87dbf064ba1c829a9d18525fa38e77baa26b654c04659b0fa3e75d6ea34ea5                                                                                                          0.0s
 => => exporting config sha256:f1b89d61d625bff13e65a679d2bdb1c513289999789ec2e13fe7acefca39adfd                                                                                                            0.0s
 => => exporting manifest list sha256:53f07aa12e20079138de3650629277928313e7bfdc59c3f22c93834fe11ba9f3                                                                                                     0.0s
 => => pushing layers                                                                                                                                                                                      0.0s
------
 > exporting to image:
------
failed to solve: rpc error: code = Unknown desc = failed to do request: Head https://img.service.consul/v2/alpine/blobs/sha256:8dc302c06141b7124ea05ccf2fdde10013ce594c28e5fe980047b0740891e398: x509: certificate signed by unknown authority

x509: certificate signed by unknown authority, but the certificate chain is ok.

test:

$ openssl s_client -connect img.service.consul:443 […]
Verify return code: 0 (ok)
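
A likely cause: the BuildKit container created by the docker-container driver does not inherit the host's trusted CAs, so buildx pushes fail even though the host trusts the cert. A sketch of a buildkitd.toml that would point BuildKit at the registry's CA (the CA path is an example; the file could be passed via docker buildx create --config):

```toml
# buildkitd.toml sketch: trust a custom CA for this registry
# (the ca path is an example placeholder)
[registry."img.service.consul"]
  ca = [ "/etc/ssl/certs/my-root-ca.pem" ]
```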

docker build + push also works:

docker build \
 -t img.service.consul/x86_64/alpine:test \
 .

Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM alpine
 ---> 055936d39205
Step 2/2 : RUN touch /test
 ---> Using cache
 ---> 7bd5dcd02d4c
Successfully built 7bd5dcd02d4c
Successfully tagged img.service.consul/x86_64/alpine:test

docker push img.service.consul/x86_64/alpine:test

The push refers to repository [img.service.consul/x86_64/alpine]
b13e8440598c: Pushed
f1b5933fe4b5: Pushed
test: digest: sha256:e9c2e8f188d0bedc6d3c26b39a6a75c36be5b4cbeedb9defc4f3b48953b4ef45 size: 734

buildx imagetools inspect also works:

docker buildx imagetools inspect img.service.consul/x86_64/alpine:test

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "size": 1645,
      "digest": "sha256:7bd5dcd02d4c340892fe431a40a39badf5695af58b669a33bd21b61159f4ffe5"
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 2757034,
         "digest": "sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10"
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 97,
         "digest": "sha256:6e0d312d9ebb1961db61366af2a5a323ad84155db2018457d2c5168c4f86e410"
      }
   ]
}

perhaps related to #57 (comment)

tested:

  • docker beta-4 + 20190517171028-f4f33ba16d nightly,
  • buildx release + current master.
uname -mrov
4.19.44 #1-NixOS SMP Thu May 16 17:41:32 UTC 2019 x86_64 GNU/Linux

Trying to use buildx with .NET Core not working, .NET Core issue?

Is this a .NET issue or a buildx issue?

Building for linux/amd64 works fine:

d:Devdockertesting>docker buildx build --platform linux/amd64 -t ramblinggeekuk/dockertesting --push .
[+] Building 211.7s (15/15) FINISHED
 => [internal] load build definition from Dockerfile  0.2s
 => => transferring dockerfile: 32B  0.0s
 => [internal] load .dockerignore  0.3s
 => => transferring context: 2B  0.0s
 => [internal] load metadata for mcr.microsoft.com/dotnet/core/aspnet:2.2  1.1s
 => [internal] load metadata for mcr.microsoft.com/dotnet/core/sdk:2.2  1.2s
 => [build-env 1/6] FROM mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe1677  110.2s
 => => resolve mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c79ef  0.0s
 => => sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c79effa76df768006b5c345b8 2.19kB / 2.19kB  0.0s
 => => sha256:36add2b62b779b538e3062839e1978d78e12e26ec2214940e9043924a29890c0 1.80kB / 1.80kB  0.0s
 => => sha256:ade3808af4d74b6181b7412b73314b5806fa39d140def18c6ee1cdbcb3ed41b1 300.69MB / 300.69MB  40.3s
 => => sha256:52c7fe5918815504427b3168845267e876464f8b010ccc09d0f61eb67dd6a17e 4.41kB / 4.41kB  0.0s
 => => sha256:dbdc36973392a980d56b8fab63383ae44582f6502001d8bbdd543aa3bf1d746e 10.79MB / 10.79MB  9.3s
 => => sha256:aaef3e0262580b9032fc6741fb099c7313834c7cf332500901e87ceeb38ac153 50.07MB / 50.07MB  58.7s
 => => sha256:a4d8138d0f6b5a441aaa533faf5fe0c3996a6ca42643c46f4402c7e8bda53742 45.34MB / 45.34MB  53.0s
 => => sha256:f59d6d019dd5b8398eb8d794e3fafe31f9411cc99a71dabfa587bf732b4a7385 4.34MB / 4.34MB  62.4s
 => => sha256:f62345fbba0dbbb77ba8aca5b81a4f0d8ec16c5d540def66c0b8e8d6492fa444 13.25MB / 13.25MB  59.3s
 => => sha256:373065ab5fafec0e8bcfd74485dcd728f40b80800867c553e80c7cd92cd5d504 173.83MB / 173.83MB  79.8s
 => => unpacking mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c7  29.0s
 => [stage-1 1/3] FROM mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234  54.5s
 => => resolve mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234692ce424c  0.0s
 => => sha256:b18d512d00aff0937699014a9ba44234692ce424c70248bedaa5a60972d77327 2.19kB / 2.19kB  0.0s
 => => sha256:8bd16a07ec8b72f4131a1747ff479048db65b48f54ae2ced1ffb1b42798c952e 1.16kB / 1.16kB  0.0s
 => => sha256:318149b63beb70e442e84e530f4472f9354e3906874c35be2ba5045b5f7a8c7a 4.06kB / 4.06kB  0.0s
 => => sha256:fc7181108d403205fda45b28dbddfa1cf07e772fa41244e44f53a341b8b1893d 22.49MB / 22.49MB  27.7s
 => => sha256:2c86df27317feb8a2806928aa12f27e6c580894e0cb844cb25aaed1420964e3d 17.69MB / 17.69MB  40.7s
 => => sha256:66dd687a6ad17486c0e3bc4e3c3690cefb7de9ad55f654e65cf657016ed4194c 2.98MB / 2.98MB  41.5s
 => => sha256:a7638d93f1fe40e3393bfb685305ce5022179c288f5b2a717978ccae465b4d7a 62.13MB / 62.13MB  48.1s
 => => unpacking mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234692ce42  5.2s
 => [internal] load build context  0.5s
 => => transferring context: 6.63kB  0.0s
 => [stage-1 2/3] WORKDIR /app  0.4s
 => [build-env 2/6] WORKDIR /app  0.2s
 => [build-env 3/6] COPY *.csproj ./  0.7s
 => [build-env 4/6] RUN dotnet restore  10.7s
 => [build-env 5/6] COPY . ./  0.4s
 => [build-env 6/6] RUN dotnet publish -c Release -o out  3.6s
 => [stage-1 3/3] COPY --from=build-env /app/out .  0.3s
 => exporting to image  83.5s
 => => exporting layers  1.0s
 => => exporting manifest sha256:6ac874b02ae2ce6c86c5d79290a04694778b2f86ff787285650c11dce4b2a37e  0.2s
 => => exporting config sha256:e9625bb3b3e783bcb6f6b7dd8b3ad4a1f090a1156be3bf237d5d4b7c8f97ebcc  0.2s
 => => pushing layers  81.3s
 => => pushing manifest for docker.io/ramblinggeekuk/dockertesting:latest  0.6s

Building for linux/arm/v7 fails:

d:\Dev\dockertesting>docker buildx build --platform linux/arm/v7 -t ramblinggeekuk/dockertesting --push .
[+] Building 112.6s (11/14)
=> [internal] load .dockerignore 0.2s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.3s
=> => transferring dockerfile: 443B 0.0s
=> [internal] load metadata for mcr.microsoft.com/dotnet/core/aspnet:2.2 1.5s
=> [internal] load metadata for mcr.microsoft.com/dotnet/core/sdk:2.2 1.6s
=> [build-env 1/6] FROM mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe1677 102.6s
=> => resolve mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c79ef 0.0s
=> => sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c79effa76df768006b5c345b8 2.19kB / 2.19kB 0.0s
=> => sha256:27c2d2f8b92b964c1e3f4de6c8025b1f0362a1f3436118d77b3dbfa921cfd9c9 1.80kB / 1.80kB 0.0s
=> => sha256:493f51ba80c0d5fd46ea25516eb221089190b416d5a8cc2c898517dea68519a4 4.91kB / 4.91kB 0.0s
=> => sha256:e06d849c15a63e2cf30d5c5af0d9aa87b2f7c6cbfe0e8c3e351fa4c5d4666d11 300.71MB / 300.71MB 44.8s
=> => sha256:41835060b113803e2ca628a32805c2e1178fe441b81d3e77427749fec4de06e9 9.49MB / 9.49MB 45.9s
=> => sha256:da770cd5eae6caeefe9468e318964be31036c06e729c2d983756906ede859b17 46.39MB / 46.39MB 51.2s
=> => sha256:582caf5d2e7bf5e75a96afc2254a97f6e86ad72c8815429ada61280467cc6d6f 3.92MB / 3.92MB 45.0s
=> => sha256:dd04b2ffc5474ba8df46350a273baaf841243fda01cfe05d3e5429e4ecc9bb19 144.38MB / 144.38MB 73.9s
=> => sha256:fa48f739865746afb4020d2d370105be51d23dd6ad6faa8663e1365b607d46c2 13.04MB / 13.04MB 52.3s
=> => sha256:dcb61f1d45657be196f648f75a07805b856fb8f4aebb61138c03c12e2919ee9e 42.08MB / 42.08MB 57.5s
=> => unpacking mcr.microsoft.com/dotnet/core/sdk:2.2@sha256:b4c25c26dc73f498073fcdb4aefe167793eb3a8c7 27.0s
=> [stage-1 1/3] FROM mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234 18.5s
=> => resolve mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234692ce424c 0.0s
=> => sha256:b18d512d00aff0937699014a9ba44234692ce424c70248bedaa5a60972d77327 2.19kB / 2.19kB 0.0s
=> => sha256:9ad51bcfeeb6e58218f23fb1f4c5229b39008cc245c9df1fcf8c9330c18a2acb 1.16kB / 1.16kB 0.0s
=> => sha256:8b7eead4e00d6228dbbf945848d78b43580687575eb8cba1d7a2b11129186f77 4.07kB / 4.07kB 0.0s
=> => sha256:a51e654c7ec5bf1fd3f38645d4bc8aa40f86ca7803d70031a9828ae65e3b67ae 63.47MB / 63.47MB 8.9s
=> => sha256:2eead4197fac409644fd8aaf115559d6383b0d56f1ad04d7116aaabbcbea8bed 19.28MB / 19.28MB 10.3s
=> => sha256:9358a462710e1891aec7076e8674e6f522f08a9f3624dc1f55554c2fc7cb99ea 16.30MB / 16.30MB 12.0s
=> => sha256:14144450932b5358107e71ebcd25ec878cb799ccc75ec39386e374d0dad903b3 2.88MB / 2.88MB 12.2s
=> => unpacking mcr.microsoft.com/dotnet/core/aspnet:2.2@sha256:b18d512d00aff0937699014a9ba44234692ce42 4.5s
=> [internal] load build context 0.2s
=> => transferring context: 7.04kB 0.0s
=> [stage-1 2/3] WORKDIR /app 0.2s
=> [build-env 2/6] WORKDIR /app 0.3s
=> [build-env 3/6] COPY *.csproj ./ 0.4s
=> ERROR [build-env 4/6] RUN dotnet restore 7.0s

using --no-cache still uses CACHED layers and host cache

$ ~/.docker/cli-plugins/docker-buildx bake --no-cache -f kimi.yml
[+] Building 2.1s (13/21)
 => [internal] load build definition from buildx-builder                                                                                                                                                                                 0.0s
 => => transferring dockerfile: 36B                                                                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                                                                        0.0s
 => => transferring context: 35B                                                                                                                                                                                                         0.0s
 => resolve image config for docker.io/docker/dockerfile:experimental                                                                                                                                                                    0.4s
 => CACHED docker-image://docker.io/docker/dockerfile:experimental@sha256:9022e911101f01b2854c7a4b2c77f524b998891941da55208e71c0335e6e82c3                                                                                               0.0s
 => [internal] load metadata for docker.io/library/alpine:latest                                                                                                                                                                         0.0s
 => [internal] load metadata for docker.io/library/node:8.11-alpine                                                                                                                                                                      0.0s
 => CANCELED [internal] load build context                                                                                                                                                                                               1.0s
 => => transferring context: 14.31MB                                                                                                                                                                                                     1.0s
 => [storage 1/6] FROM docker.io/library/alpine                                                                                                                                                                                  0.0s
 => CACHED [storage 2/6] WORKDIR /var/www                                                                                                                                                                                        0.0s
 => CACHED [storage 3/6] RUN adduser -DHSu 100 nginx -s /sbin/nologin                                                                                                                                                            0.0s
 => [builder 1/8] FROM docker.io/library/node:8.11-alpine                                                                                                                                                                        0.0s
 => CACHED [builder 2/8] WORKDIR /src                                                                                                                                                                                            0.0s
 => ERROR [builder 3/8] RUN --mount=type=cache,target=/var/cache     apk add --update     git                                                                                                                                    1.0s
------
 > [builder 3/8] RUN --mount=type=cache,target=/var/cache     apk add --update     git:
#12 0.775 fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/main/x86_64/APKINDEX.tar.gz
#12 0.786 fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/community/x86_64/APKINDEX.tar.gz
#12 0.788 ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.6/main: Bad file descriptor
#12 0.788 WARNING: Ignoring APKINDEX.84815163.tar.gz: Bad file descriptor
#12 0.805   git (missing):
#12 0.805     required by: world[git]
#12 0.805 ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.6/community: Bad file descriptor
#12 0.805 WARNING: Ignoring APKINDEX.24d64ab1.tar.gz: Bad file descriptor
#12 0.805 ERROR: unsatisfiable constraints:
------
Error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c apk add --update     git]: exit code: 1

github.com/docker/buildx v0.2.2-10-g3f18b65-tp-docker 3f18b65

Client: Docker Engine - Community
Version: 19.03.0-rc2
API version: 1.40
Go version: go1.12.5
Git commit: f97efcc
Built: Wed Jun 5 01:37:53 2019
OS/Arch: darwin/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.0-rc2
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: f97efcc
Built: Wed Jun 5 01:42:10 2019
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: v1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
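For what it's worth, the apk "Bad file descriptor" errors come from the step that mounts a cache at /var/cache, a directory apk does not use for its package cache. A commonly suggested variant (an assumption about a workaround, not a verified fix for this report) mounts the cache at apk's own cache path and links it so apk picks it up:

```dockerfile
# syntax = docker/dockerfile:experimental
FROM node:8.11-alpine AS builder
WORKDIR /src
# Mount the BuildKit cache where apk keeps its package cache, and create
# the /etc/apk/cache symlink apk checks before caching downloads.
# Paths here are a guess at a workaround, not a verified fix.
RUN --mount=type=cache,target=/var/cache/apk \
    ln -s /var/cache/apk /etc/apk/cache && \
    apk add --update git
```

Note that --no-cache skipping layer cache and this mount behaving oddly may be separate problems.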

Migrate buildx to the public Jenkins

Description: We recently created a new Jenkins at ci.docker.com/public. It may be worth migrating buildx from Travis to Jenkins by creating a new Jenkinsfile using the declarative syntax.
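A declarative Jenkinsfile for this could start as small as the sketch below; the agent label and make targets are assumptions, not something this issue specifies.

```groovy
// Minimal declarative pipeline sketch for buildx CI.
// 'linux && amd64' and the make targets are illustrative assumptions.
pipeline {
    agent { label 'linux && amd64' }
    stages {
        stage('binaries') {
            steps { sh 'make binaries' }
        }
        stage('test') {
            steps { sh 'make test' }
        }
    }
}
```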

image and standalone target

This binary should also be usable without being invoked by docker. The image target should be runnable with docker run.

E.g.:
docker run docker/buildx build -o type=oci,dest=- git://... > oci.tgz

For auth/local sources mounts would need to be used.

Depends on adding a new driver.

Tag multiple stages and/or build multiple targets with one invocation of buildkit

Wanted to take another look at moby/buildkit#609 now that we have docker buildx bake.

I am trying to determine whether bake was ultimately built to solve this issue, or whether its goals are slightly different.

If I use a bake file like this:

group "default" {
    targets = ["prj1", "prj2"]
}

target "prj1" {
    dockerfile = "Dockerfile"
    target = "prj1"
    tags = ["prj1"]
}

target "prj2" {
    inherits = ["prj1"]
    target = "prj2"
    tags = ["prj2"]
}

When I then execute docker buildx bake -f ./docker-bake.hcl, only the prj1 image gets exported. prj2 still builds; it just doesn't get exported.

Also, could a bake file be used to effectively stitch multiple Dockerfiles together?

Our monorepo has 20+ projects, all with their own Dockerfile "segments". I say this because if you tried to build one of these files by itself it would probably fail, as it depends on parent build stages defined in other files.

I have written a script that concatenates all the Dockerfiles into a single Dockerfile in the correct order, so that build stages refer to each other correctly, and then I execute a buildkit build on this giant Dockerfile, using the approach outlined in moby/buildkit#609.

It would be great if I could define normal standalone Dockerfiles and then use a bake file to do the stitching and build in the correct order. Consider the following contrived example:

node-modules.dockerfile

FROM node
RUN npm install

prj1.dockerfile

FROM node-modules
RUN npm build

prj2.dockerfile

FROM node-modules
RUN npm build

docker-bake.hcl

group "default" {
    targets = ["prj1", "prj2"]
}

target "node-modules" {
    dockerfile = "node-modules.dockerfile"
    tags = ["node-modules"]
}

target "prj1" {
    depends = ["node-modules"]
    dockerfile = "prj1.dockerfile"
    tags = ["prj1"]
}

target "prj2" {
    depends = ["node-modules"]
    dockerfile = "prj2.dockerfile"
    tags = ["prj2"]
}

Perhaps bake can already do this, but I haven't figured it out yet?
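The concatenation script described above could look roughly like the sketch below. File names follow the contrived example; note that for concatenation to work, the shared segment has to name its stage with AS so downstream segments can reference it.

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"

# The contrived segments from above; node-modules must name its stage.
cat > node-modules.dockerfile <<'EOF'
FROM node AS node-modules
RUN npm install
EOF
cat > prj1.dockerfile <<'EOF'
FROM node-modules
RUN npm build
EOF
cat > prj2.dockerfile <<'EOF'
FROM node-modules
RUN npm build
EOF

# Stitch the segments in dependency order into one Dockerfile,
# so every FROM <stage> appears after the stage it references.
out=Dockerfile.generated
: > "$out"
for segment in node-modules.dockerfile prj1.dockerfile prj2.dockerfile; do
  { printf '# ---- %s ----\n' "$segment"; cat "$segment"; echo; } >> "$out"
done
echo "wrote $workdir/$out"
```

The generated file can then be fed to a single buildkit build, tagging each stage per moby/buildkit#609.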

`make binaries` no bueno

after cloning 3f18b65 into a local working dir:

$ make binaries
...
=> exporting to image                                                                                             0.0s
 => => exporting layers                                                                                            0.0s
 => => writing image sha256:590d78d23c7678533fa8f3669bc85e6b2bb559d04083a4fbc48793d6c91aaa3a                       0.0s
++ cat /tmp/docker-iidfile.Z3KsTw3vbo
+ iid=sha256:590d78d23c7678533fa8f3669bc85e6b2bb559d04083a4fbc48793d6c91aaa3a
++ docker create sha256:590d78d23c7678533fa8f3669bc85e6b2bb559d04083a4fbc48793d6c91aaa3a copy
+ containerID=f945249a3a4532fb2304b8b79d32f3963230f0fdb16371e45b40b7668f495bf4
+ docker cp f945249a3a4532fb2304b8b79d32f3963230f0fdb16371e45b40b7668f495bf4:/ bin/tmp
+ mv 'bin/tmp/build*' bin/
mv: cannot stat 'bin/tmp/build*': No such file or directory
Makefile:5: recipe for target 'binaries' failed
make: *** [binaries] Error 1

Docker enterprise

Currently, buildx comes bundled as a plugin with the latest community edition.
Are there plans for it to be bundled with the Enterprise edition too?

buildx ls can show the same platform twice

I saw linux/amd64 listed twice after adding an AWS node.

add kubepod driver

Similarly to the docker-container driver, there could be support for a kubepod driver that bootstraps itself inside a k8s cluster. Kube endpoints in the docker context could be used to provide the initial configuration. Switching between drivers was implemented in #20. It would be nice if all supported platforms of the k8s workers could also be detected automatically.

@AkihiroSuda

Building for ARM causes error often

When trying to compile a Golang project I am seeing the following:

 > [linux/arm64 bldr 4/4] RUN go build -o /bldr .:
#28 0.608 go: finding github.com/spf13/cobra v0.0.4
#28 0.620 go: finding github.com/hashicorp/go-multierror v1.0.0
#28 1.579 go: finding golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed
#28 1.586 go: finding golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522
#28 1.631 go: finding gopkg.in/yaml.v2 v2.2.2
#28 2.797 go: finding github.com/hashicorp/errwrap v1.0.0
#28 3.392 go: finding github.com/cpuguy83/go-md2man v1.0.10
#28 3.421 go: finding github.com/inconshreveable/mousetrap v1.0.0
#28 3.455 go: finding github.com/spf13/pflag v1.0.3
#28 3.466 go: finding github.com/mitchellh/go-homedir v1.1.0
#28 3.513 go: finding github.com/BurntSushi/toml v0.3.1
#28 3.552 go: finding github.com/spf13/viper v1.3.2
#28 3.679 fatal error: schedule: holding locks
#28 3.700 SIGSEGV: segmentation violation
#28 3.700 PC=0x4000283138 m=0 sigcode=1
#28 3.704 
#28 3.707 goroutine 82 [syscall]:
#28 3.709 runtime.notetsleepg(0x4000bb3f60, 0x246c4acc8, 0x14000022400)
#28 3.709       /usr/lib/go/src/runtime/lock_futex.go:227 +0x28 fp=0x140003bc750 sp=0x140003bc720 pc=0x400023dd70
#28 3.711 runtime.timerproc(0x4000bb3f40)
#28 3.711       /usr/lib/go/src/runtime/time.go:288 +0x2a4 fp=0x140003bc7d0 sp=0x140003bc750 pc=0x400027817c
#28 3.712 runtime.goexit()
#28 3.712       /usr/lib/go/src/runtime/asm_arm64.s:1114 +0x4 fp=0x140003bc7d0 sp=0x140003bc7d0 pc=0x400028588c
#28 3.715 created by runtime.(*timersBucket).addtimerLocked
#28 3.715       /usr/lib/go/src/runtime/time.go:170 +0xf4

on another run:

 > [linux/arm64 bldr 4/4] RUN go build -o /bldr .:
#20 0.728 go: finding github.com/hashicorp/go-multierror v1.0.0
#20 0.744 go: finding github.com/spf13/cobra v0.0.4
#20 1.509 go: finding golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed
#20 1.523 go: finding golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522
#20 2.127 go: finding gopkg.in/yaml.v2 v2.2.2
#20 5.053 go: finding github.com/hashicorp/errwrap v1.0.0
#20 5.334 go: finding github.com/cpuguy83/go-md2man v1.0.10
#20 5.368 go: finding github.com/inconshreveable/mousetrap v1.0.0
#20 5.380 go: finding github.com/mitchellh/go-homedir v1.1.0
#20 5.386 go: finding github.com/spf13/viper v1.3.2
#20 5.393 go: finding github.com/BurntSushi/toml v0.3.1
#20 5.427 go: finding github.com/spf13/pflag v1.0.3
#20 7.801 go: finding gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405
#20 8.046 go: finding github.com/russross/blackfriday v1.5.2
#20 8.250 go: finding golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a
#20 8.443 go: finding github.com/spf13/cast v1.3.0
#20 8.477 go: finding github.com/hashicorp/hcl v1.0.0
#20 8.490 go: finding github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77
#20 8.493 go: finding github.com/magiconair/properties v1.8.0
#20 8.508 go: finding github.com/pelletier/go-toml v1.2.0
#20 8.556 go: finding github.com/mitchellh/mapstructure v1.1.2
#20 8.679 go: finding golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9
#20 13.12 go: finding golang.org/x/text v0.3.0
#20 13.37 go: finding github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8
#20 13.40 go: finding github.com/stretchr/testify v1.2.2
#20 13.81 go: finding github.com/fsnotify/fsnotify v1.4.7
#20 14.03 go: finding github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6
#20 14.28 go: finding github.com/spf13/afero v1.1.2
#20 14.43 go: finding github.com/pmezard/go-difflib v1.0.0
#20 14.48 fatal error: exitsyscall: syscall frame is no longer valid
#20 14.48 
#20 14.48 goroutine 6 [syscall]:
#20 14.49 syscall.Syscall(0x3f, 0x2a, 0x1400030e8f0, 0x8, 0x0, 0x0, 0x0)
#20 14.49       /usr/lib/go/src/syscall/asm_linux_arm64.s:9 +0x8 fp=0x1400030e810 sp=0x1400030e800 pc=0x40002a3ce0
#20 14.49 syscall.readlen(0x2a, 0x1400030e8f0, 0x8, 0x7, 0x140001e1180, 0x13)
#20 14.49       /usr/lib/go/src/syscall/zsyscall_linux_arm64.go:1026 +0x40 fp=0x1400030e860 sp=0x1400030e810 pc=0x40002a1fb8
#20 14.49 syscall.forkExec(0x140000a6390, 0xc, 0x1400013a480, 0x6, 0x6, 0x1400030ea48, 0x20, 0x0, 0x140000a84a0)
#20 14.49       /usr/lib/go/src/syscall/exec_unix.go:203 +0x284 fp=0x1400030e970 sp=0x1400030e860 pc=0x400029cdec
#20 14.49 runtime: unexpected return pc for syscall.StartProcess called from 0x2d
#20 14.49 stack: frame={sp:0x1400030e970, fp:0x1400030e9c0} stack=[0x1400030e000,0x14000310000)
#20 14.49 000001400030e870:  000001400030e8f0  0000000000000008 
#20 14.49 000001400030e880:  0000000000000007  00000140001e1180 

and another run:

 > [linux/arm/v7 bldr 4/4] RUN go build -o /bldr .:
#12 0.730 fatal error: fatal error: schedule: holding locks
#12 0.735 
#12 0.735 fatal error: unexpected signal during runtime execution
#12 0.735 panic during panic
#12 0.736 [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x0]
#12 0.738 
#12 0.738 runtime stack:
#12 0.740 runtime: unexpected return pc for runtime/internal/atomic.Xchg called from 0x1
#12 0.740 stack: frame={sp:0xeea5ec60, fp:0xeea5ec78} stack=[0xeea4b10c,0xeea5ed0c)
#12 0.742 eea5ebe0:  00000000  00000001  ffaf9b52  ff6f97c0 <runtime.fatalthrow+72> 
#12 0.746 eea5ebf0:  00472700  ff6f9608 <runtime.throw+96>  eea5ec24  fffebea0 
#12 0.748 eea5ec00:  eea5ec24  ff6f9608 <runtime.throw+96>  00472700  ff6f9608 <runtime.throw+96> 
#12 0.750 eea5ec10:  eea5ec14  ff724158 <runtime.fatalthrow.func1+0>  00472700  ff6f9608 <runtime.throw+96> 
#12 0.752 eea5ec20:  eea5ec24  ff710558 <runtime.sigpanic+588>  eea5ec2c  ff7240e8 <runtime.throw.func1+0> 
#12 0.754 eea5ec30:  ffb00a98  0000002a  ff6cc9a4 <runtime/internal/atomic.Xchg+36>  ffb00a98 
#12 0.756 eea5ec40:  0000002a  ff724140 <runtime.throw.func1+88>  fffebea0  00000000 
#12 0.757 eea5ec50:  00472700  ff6f9944 <runtime.startpanic_m+216>  ff6f9944 <runtime.startpanic_m+216>  ff6cc9a4 <runtime/internal/atomic.Xchg+36> 
#12 0.758 eea5ec60: <00000001  00000000  ff724184 <runtime.fatalthrow.func1+44>  fffebee8 
#12 0.760 eea5ec70:  00000001  00000001 >00472700  ff6f97c0 <runtime.fatalthrow+72> 
#12 0.762 eea5ec80:  ffae9b65  00000001  ffae9b65  00000001 
#12 0.763 eea5ec90:  eea5ecb4  ff6f9608 <runtime.throw+96>  00472700  ff6f9608 <runtime.throw+96> 
#12 0.766 eea5eca0:  eea5eca4  ff724158 <runtime.fatalthrow.func1+0>  00472700  ff6f9608 <runtime.throw+96> 
#12 0.769 eea5ecb0:  eea5ecb4  ff70199c <runtime.schedule+796>  00000001  00000002 
#12 0.772 eea5ecc0:  ffaf54b3  00000017  ff701aa8 <runtime.park_m+144>  ffaf54b3 
#12 0.773 eea5ecd0:  00000017  005122f0  00000000  00000001 
#12 0.774 eea5ece0:  ff701ad0 <runtime.park_m+184>  005122f0  ff7256a8 <runtime.mcall+96>  0045f6c0 
#12 0.776 eea5ecf0:  005122f0  00000001 
#12 0.777 runtime/internal/atomic.Xchg(0xff6f97c0, 0xffae9b65, 0x1)
#12 0.778       /usr/lib/go/src/runtime/internal/atomic/atomic_arm.go:60 +0x24

and another run:

 => [linux/arm64 bldr 4/4] RUN go build -o /bldr .                                                                                                                                                                                                                                                             347.1s
 => => # go: finding gopkg.in/yaml.v2 v2.2.2                                                                                                                                                                                                                                                                         
 => => # go: finding github.com/hashicorp/errwrap v1.0.0                                                                                                                                                                                                                                                             
 => => # go: finding github.com/spf13/viper v1.3.2                                                                                                                                                                                                                                                                   
 => => # go: finding github.com/cpuguy83/go-md2man v1.0.10                                                                                                                                                                                                                                                           
 => => # go: finding github.com/mitchellh/go-homedir v1.1.0                                                                                                                                                                                                                                                          
 => => # qemu: uncaught target signal 11 (Segmentation fault) - core dumped                                                                                                                                                                                                                                          
 => CACHED [linux/arm/v7 stage-1 3/5] RUN [ "ln", "-svf", "/bin/bash", "/bin/sh" ]                                                                                                                                                                                                                               0.0s
 => [linux/arm/v7 bldr 4/4] RUN go build -o /bldr .                                                                                                                                                                                                                                                            346.9s
 => => # go: finding github.com/hashicorp/go-multierror v1.0.0                                                                                                                                                                                                                                                       
 => => # go: finding golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed                                                                                                                                                                                                                                             
 => => # go: finding golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522                                                                                                                                                                                                                                         
 => => # go: finding gopkg.in/yaml.v2 v2.2.2                                                                                                                                                                                                                                                                         
 => => # go: finding github.com/hashicorp/errwrap v1.0.0                                                                                                                                                                                                                                                             
 => => # qemu: uncaught target signal 4 (Illegal instruction) - core dumped  

The Dockerfile for reference:

FROM --platform=$TARGETPLATFORM alpine:3.9 as bldr
RUN apk add build-base git go
COPY . .
ENV CGO_ENABLED 0
RUN go build -o /bldr .

FROM --platform=$TARGETPLATFORM alpine:3.9
RUN apk add --no-cache bash
RUN [ "ln", "-svf", "/bin/bash", "/bin/sh" ]
COPY --from=bldr /bldr /bin/bldr
WORKDIR /pkg
ENTRYPOINT ["/bin/bldr"]
CMD ["build"]
ONBUILD COPY . .
ONBUILD RUN bldr build
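All of the crashes above happen while the Go toolchain itself runs under qemu emulation. One commonly used way to sidestep this is to run the compiler natively on the build platform and let Go cross-compile for the target. A hedged sketch of the first stage rewritten that way (TARGETOS/TARGETARCH are build args buildx provides; the base image choice is illustrative):

```dockerfile
# Compiler runs natively on $BUILDPLATFORM; only the output is for the target.
FROM --platform=$BUILDPLATFORM golang:1.12-alpine AS bldr
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY . .
ENV CGO_ENABLED=0
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /bldr .

FROM --platform=$TARGETPLATFORM alpine:3.9
COPY --from=bldr /bldr /bin/bldr
ENTRYPOINT ["/bin/bldr"]
```

This avoids the emulated-runtime segfaults entirely for CGO-free builds, at the cost of restructuring the Dockerfile.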

Rpmdb checksum is invalid: dCDPT(pkg checksums): wget.aarch64 0:1.14-18.el7_6.1 - u

When trying to install wget in a Dockerfile, I hit the issue below:

Rpmdb checksum is invalid: dCDPT(pkg checksums): wget.aarch64 0:1.14-18.el7_6.1 - u

The command is as below:

docker buildx build --platform linux/arm64 -t xxx:0.1 .

The Dockerfile is as below:

FROM xxx/centos7-aarch64-xxx:xxx
RUN yum install -y wget

Since I have specified --platform linux/arm64 in the command, the issue of rebuilding the Rpmdb should not occur. Thanks.

"sending tarball" takes a long time even when the image already exists

When I build an image which already exists (because of a previous build on the same engine with 100% cache hit), the builder still spends a lot of time in "sending tarball". This causes a noticeable delay in the build. Perhaps this delay could be optimized away in the case of a 100% cache hit?

For example, when building a 1.84GB image with 51 layers, the entire build is 9s, of which 8s is in «sending tarball» (see output below).

It would be awesome if fully cached builds returned at near-interactive speed!

 => [internal] load build definition from Dockerfile          0.0s
 => => transferring dockerfile: 2.53kB                        0.0s
 => [internal] load .dockerignore                             0.0s
 => => transferring context: 2B                               0.0s
 => [internal] load metadata for docker.io/library/alpine:la  1.0s
 => [1/51] FROM docker.io/library/[email protected]:6a92cd1fcdc8  0.0s
 => => resolve docker.io/library/[email protected]:6a92cd1fcdc8d  0.0s
 => CACHED [2/51] RUN apk update                              0.0s
 => CACHED [3/51] RUN apk add openssh                         0.0s
 => CACHED [4/51] RUN apk add bash                            0.0s
 => CACHED [5/51] RUN apk add bind-tools                      0.0s
 => CACHED [6/51] RUN apk add curl                            0.0s
 => CACHED [7/51] RUN apk add docker                          0.0s
 => CACHED [8/51] RUN apk add g++                             0.0s
 => CACHED [9/51] RUN apk add gcc                             0.0s
 => CACHED [10/51] RUN apk add git                            0.0s
 => CACHED [11/51] RUN apk add git-perl                       0.0s
 => CACHED [12/51] RUN apk add make                           0.0s
 => CACHED [13/51] RUN apk add python                         0.0s
 => CACHED [14/51] RUN apk add openssl-dev                    0.0s
 => CACHED [15/51] RUN apk add vim                            0.0s
 => CACHED [16/51] RUN apk add py-pip                         0.0s
 => CACHED [17/51] RUN apk add file                           0.0s
 => CACHED [18/51] RUN apk add groff                          0.0s
 => CACHED [19/51] RUN apk add jq                             0.0s
 => CACHED [20/51] RUN apk add man                            0.0s
 => CACHED [21/51] RUN cd /tmp && git clone https://github.c  0.0s
 => CACHED [22/51] RUN apk add go                             0.0s
 => CACHED [23/51] RUN apk add coreutils                      0.0s
 => CACHED [24/51] RUN apk add python2-dev                    0.0s
 => CACHED [25/51] RUN apk add python3-dev                    0.0s
 => CACHED [26/51] RUN apk add tar                            0.0s
 => CACHED [27/51] RUN apk add vim                            0.0s
 => CACHED [28/51] RUN apk add rsync                          0.0s
 => CACHED [29/51] RUN apk add less                           0.0s
 => CACHED [30/51] RUN pip install awscli                     0.0s
 => CACHED [31/51] RUN curl --silent --location "https://git  0.0s
 => CACHED [32/51] RUN curl https://dl.google.com/dl/cloudsd  0.0s
 => CACHED [33/51] RUN curl -L -o /usr/local/bin/kubectl htt  0.0s
 => CACHED [34/51] RUN curl -L -o /usr/local/bin/kustomize    0.0s
 => CACHED [35/51] RUN apk add ruby                           0.0s
 => CACHED [36/51] RUN apk add ruby-dev                       0.0s
 => CACHED [37/51] RUN gem install bigdecimal --no-ri --no-r  0.0s
 => CACHED [38/51] RUN gem install kubernetes-deploy --no-ri  0.0s
 => CACHED [39/51] RUN apk add npm                            0.0s
 => CACHED [40/51] RUN npm config set unsafe-perm true        0.0s
 => CACHED [41/51] RUN npm install -g yarn                    0.0s
 => CACHED [42/51] RUN npm install -g netlify-cli             0.0s
 => CACHED [43/51] RUN apk add libffi-dev                     0.0s
 => CACHED [44/51] RUN pip install docker-compose             0.0s
 => CACHED [45/51] RUN apk add mysql-client                   0.0s
 => CACHED [46/51] RUN (cd /tmp && curl -L -O https://releas  0.0s
 => CACHED [47/51] RUN apk add shadow sudo                    0.0s
 => CACHED [48/51] RUN echo '%wheel ALL=(ALL) NOPASSWD: ALL'  0.0s
 => CACHED [49/51] RUN useradd -G docker,wheel -m -s /bin/ba  0.0s
 => CACHED [50/51] RUN groupmod -o -g 999 docker              0.0s
 => CACHED [51/51] WORKDIR /home/sh                           0.0s
 => exporting to oci image format                             8.8s
 => => exporting layers                                       0.3s
 => => exporting manifest sha256:69088589c4e63094e51ae0e34e6  0.0s
 => => exporting config sha256:65db1e1d42a26452307b43bc5c683  0.0s
 => => sending tarball                                        8.3s
 => importing to docker                                       0.1s

improve on name

The current name is not great. Let's find something better (or explain why you like the current one).

@tiborvass @AkihiroSuda

merging compose files

[Feature Request] Remove duplicate context transfers

As an example use case you may have a single intermediary target in your Dockerfile that acts like a cache for all downstream images. You build all your packages in your cache target and then assemble into different binaries for the different services that make up your app.

Today, if you have n targets that are all downstream from a single target, the context will be transferred n times in parallel, even if the upstream target is the only one that actually uses the context.

Example

  1. Make a large context with
dd if=/dev/zero of=largefile count=262400 bs=1024
  2. Make a Dockerfile with one upstream image that actually uses the context, and three downstream images:
FROM scratch AS upstream

COPY largefile /largefile


FROM upstream AS downstream0

FROM upstream AS downstream1

FROM upstream AS downstream2
  3. Make a bake hcl with the targets:
group "default" {
  targets = ["downstream0", "downstream1", "downstream2"]
}

target "downstream0" {
  target = "downstream0"
  tags = ["docker.io/rabrams/downstream0"]
}

target "downstream1" {
  target = "downstream0"
  tags = ["docker.io/rabrams/downstream1"]
}

target "downstream2" {
  target = "downstream0"
  tags = ["docker.io/rabrams/downstream2"]
}
  4. Run docker buildx bake and observe the duplicate context transfers:
$ docker buildx bake
[+] Building 1.1s (6/12)
 => [downstream1 internal] load .dockerignore                              0.2s
 => => transferring context: 2B                                            0.1s
 => [downstream2 internal] load build context                              0.9s
 => => transferring context: 327.77kB                                      0.9s
 => [downstream0 internal] load build context                              0.9s
 => => transferring context: 360.55kB                                      0.9s
 => [downstream1 internal] load build context                              0.9s
 => => transferring context: 229.45kB                                      0.9s

For large contexts and many downstream images this can be a problem because your uplink is divided between all the context transfers that are doing the same thing.
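A possible mitigation, assuming a buildx version whose bake supports named contexts (the `contexts` attribute with a `target:` prefix is the assumption here), is to give only the upstream target the real context and wire the downstream targets to its result, so the context is transferred once:

```hcl
# Sketch: only "upstream" loads the real build context; downstream
# targets consume upstream's result instead of re-transferring it.
target "upstream" {
  target = "upstream"
}

target "downstream0" {
  contexts = {
    # "target:" wires in another bake target's output (assumed feature)
    upstream = "target:upstream"
  }
  target = "downstream0"
  tags   = ["docker.io/rabrams/downstream0"]
}
```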

buildx is not a docker command on linux/amd64?

Hi,
I have docker 19.03 on ubuntu amd64 and I tried the steps in your readme, but I can't get the buildx command working. It complains that the command is not found. What do I need to do to get it working? I tried the beta release as well, with the same issue. I am looking to compile images for arm64.

Appreciate the help a lot,
Puja
$ cat ~/.docker/config.json
{
"experimental": "enabled"
}
$ ls -l ~/.docker/cli-plugins/docker-buildx
total 55936
drwxr-xr-x 2 pujag newhiredefaultgrp 4096 Aug 12 14:05 .
drwxr-xr-x 3 pujag newhiredefaultgrp 4096 Aug 12 14:01 ..
-rwxr-xr-x 1 pujag newhiredefaultgrp 57036919 Aug 12 14:04 buildx-v0.2.0.linux-amd64
$ docker version
Client: Docker Engine - Community
Version: 19.03.1
API version: 1.40
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:21:05 2019
OS/Arch: linux/amd64
Experimental: true

Server: Docker Engine - Community
Engine:
Version: 19.03.1
API version: 1.40 (minimum version 1.12)
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:19:41 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.6
GitCommit: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc:
Version: 1.0.0-rc8
GitCommit: 425e105d5a03fabd737a126ad93d62a9eeede87f
docker-init:
Version: 0.18.0
GitCommit: fec3683
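Judging from the ls output above, the plugin binary still carries its release name. The Docker CLI only discovers plugins whose filename is exactly docker-buildx, so a hedged sketch of the fix (paths assumed from the output above):

```shell
# The CLI looks for docker-buildx under ~/.docker/cli-plugins
# (or $DOCKER_CONFIG/cli-plugins), so the release binary must be renamed.
config="${DOCKER_CONFIG:-$HOME/.docker}"
mkdir -p "$config/cli-plugins"
for f in "$config"/cli-plugins/buildx-v*; do
  if [ -e "$f" ]; then mv "$f" "$config/cli-plugins/docker-buildx"; fi
done
if [ -e "$config/cli-plugins/docker-buildx" ]; then
  chmod +x "$config/cli-plugins/docker-buildx"
fi
```

After the rename, `docker buildx version` should resolve the plugin.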

No declared license.

I just wrote up a patch, but there's no license anywhere. The contribution guidelines are also unclear beyond running the test suite; do I need to sign off commits with the DCO (-s) flag?

Supporting target platform: "ppc64le" and "s390x"

Hi,
I got the error messages below when trying to build "ppc64le" and "s390x" images on my x86_64 Fedora machine. Is there a way to build "ppc64le" and "s390x" images?

$ docker buildx build --rm -t my-fedora:aarch64 --platform linux/arm64 .

=> OK

Run the aarch64 image with QEMU and binfmt_misc.

$ uname -m
x86_64

$ docker run --rm -t my-fedora:aarch64 uname -m
aarch64

But the builds below for targets "ppc64le" and "s390x" fail.

$ docker buildx build --rm -t my-fedora:ppc64le --platform linux/ppc64le .
...
failed to solve: rpc error: code = Unknown desc = runtime execution on platform linux/ppc64le not supported

$ docker buildx build --rm -t my-fedora:s390x --platform linux/s390x .
...
failed to solve: rpc error: code = Unknown desc = runtime execution on platform linux/s390x not supported

I used the Dockerfile below.

$ cat Dockerfile 
# https://hub.docker.com/_/fedora
FROM fedora
RUN uname -m

My environment is like this.

$ uname -m
x86_64

$ cat /etc/fedora-release 
Fedora release 30 (Thirty)

$ docker --version
Docker version 19.03.0, build aeac9490dc

$ docker buildx version
github.com/docker/buildx v0.2.2-10-g3f18b65 3f18b659a09804c738226dbf6bacbcae54afd7c6
$ docker buildx inspect
Name:   default
Driver: docker

Nodes:
Name:      default
Endpoint:  default
Status:    running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/arm/v7, linux/arm/v6
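The Platforms line above lists only the architectures with registered emulation handlers, and ppc64le/s390x are missing. The "runtime execution on platform ... not supported" error usually means no QEMU binfmt_misc handler is registered for that architecture. A hedged sketch of how to check and register handlers (the multiarch/qemu-user-static image is an assumption, not something from this report):

```shell
# One common way to register QEMU user-mode emulators (assumed image):
#   docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# Afterwards, check which handlers the kernel knows about:
ls /proc/sys/fs/binfmt_misc 2>/dev/null | grep -E 'ppc64|s390x' \
  || echo "no ppc64le/s390x binfmt handlers registered"
```

Once handlers are registered, `docker buildx inspect` should list the extra platforms.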

Thank you.

unable to build private git repository with ssh access

Problem:

If you have a private git repository that you access using an ssh key, you are not able to build it directly using buildx.

docker buildx build git@github.com:xxxx/yyyy.git fails with permission denied, Could not read from remote repository…

Of course the normal build command works: docker build git@github.com:xxxx/yyyy.git

Expected behavior:

docker buildx build git@github.com:xxxx/yyyy.git should be able to use the ssh key or the ssh-agent to download the private project and then build it.

Docker buildx version:

github.com/docker/buildx v0.2.0 91a2774376863c097ca936bf5e39aa3db0c72d0f

Docker version:

Client: Docker Engine - Community
 Version:           19.03.0-beta4
 API version:       1.40
 Go version:        go1.12.4
 Git commit:        e4666ebe81
 Built:             Tue May 14 12:47:08 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.0-beta4
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.4
  Git commit:       e4666ebe81
  Built:            Tue May 14 12:45:43 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

‘buildx’ is not a docker command on Windows 10

I am trying to build multi-arch images following this article: https://engineering.docker.com/2019/04/multi-arch-images/.
I updated Docker Desktop to the edge release, but the docker buildx ls command is not working. It throws this error:

docker: 'buildx' is not a docker command.
See 'docker --help'

I am on docker desktop for windows.
Docker Desktop version: 2.0.5.0 (35318)
Engine: 19.03.0-rc2

OS: Windows 10 Pro, Version 1803

[Bug] `buildx` cannot build what `docker build` can build

Taking the example of https://github.com/jupyter/docker-stacks/tree/master/all-spark-notebook.

all-spark-notebook can be built with a normal docker build:

git clone https://github.com/jupyter/docker-stacks.git
cd docker-stacks/all-spark-notebook
docker build -t all .

However, it fails when using buildx:

git clone https://github.com/jupyter/docker-stacks.git
cd docker-stacks/all-spark-notebook
docker buildx build -t allx .

Here is the error message:

$ docker buildx build -t allx .                                                    
[+] Building 276.2s (8/9)                                                                                                             
 => [internal] load build definition from Dockerfile                                                                              0.2s
 => => transferring dockerfile: 1.42kB                                                                                            0.0s
 => [internal] load .dockerignore                                                                                                 0.3s
 => => transferring context: 66B                                                                                                  0.0s
 => [internal] load metadata for docker.io/jupyter/pyspark-notebook:latest                                                        0.0s
 => [1/6] FROM docker.io/jupyter/pyspark-notebook                                                                                 1.5s
 => => resolve docker.io/jupyter/pyspark-notebook:latest                                                                          0.0s
 => [2/6] RUN fix-permissions /usr/local/spark/R/lib                                                                              1.8s
 => [3/6] RUN apt-get update &&     apt-get install -y --no-install-recommends     fonts-dejavu     gfortran     gcc &&     rm   22.9s
 => [4/6] RUN conda install --quiet --yes     'r-base=3.5.1'     'r-irkernel=0.8*'     'r-ggplot2=3.1*'     'r-sparklyr=0.9*'   177.2s
 => ERROR [5/6] RUN pip install --no-cache-dir     https://dist.apache.org/repos/dist/release/incubator/toree/0.3.0-incubating/  72.1s
------
 > [5/6] RUN pip install --no-cache-dir     https://dist.apache.org/repos/dist/release/incubator/toree/0.3.0-incubating/toree-pip/toree
-0.3.0.tar.gz     &&     jupyter toree install --sys-prefix &&     rm -rf /home/jovyan/.local &&     fix-permissions /opt/conda &&    
fix-permissions /home/jovyan:
#8 1.456 Collecting https://dist.apache.org/repos/dist/release/incubator/toree/0.3.0-incubating/toree-pip/toree-0.3.0.tar.gz   
#8 2.195   Downloading https://dist.apache.org/repos/dist/release/incubator/toree/0.3.0-incubating/toree-pip/toree-0.3.0.tar.gz (25.9MB
)
#8 66.81 Requirement already satisfied: jupyter_core>=4.0 in /opt/conda/lib/python3.7/site-packages (from toree==0.3.0) (4.4.0)
#8 66.82 Requirement already satisfied: jupyter_client>=4.0 in /opt/conda/lib/python3.7/site-packages (from toree==0.3.0) (5.2.4)
#8 66.83 Requirement already satisfied: traitlets<5.0,>=4.0 in /opt/conda/lib/python3.7/site-packages (from toree==0.3.0) (4.3.2)
#8 66.84 Requirement already satisfied: tornado>=4.1 in /opt/conda/lib/python3.7/site-packages (from jupyter_client>=4.0->toree==0.3.0)
 (6.0.2)
#8 66.84 Requirement already satisfied: python-dateutil>=2.1 in /opt/conda/lib/python3.7/site-packages (from jupyter_client>=4.0->toree
==0.3.0) (2.8.0) 
#8 66.85 Requirement already satisfied: pyzmq>=13 in /opt/conda/lib/python3.7/site-packages (from jupyter_client>=4.0->toree==0.3.0) (1
8.0.1)  
#8 66.85 Requirement already satisfied: ipython_genutils in /opt/conda/lib/python3.7/site-packages (from traitlets<5.0,>=4.0->toree==0.
3.0) (0.2.0)   
#8 66.85 Requirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from traitlets<5.0,>=4.0->toree==0.3.0) (1.12.0)
#8 66.85 Requirement already satisfied: decorator in /opt/conda/lib/python3.7/site-packages (from traitlets<5.0,>=4.0->toree==0.3.0) (4
.4.0)   
#8 66.85 Building wheels for collected packages: toree   
#8 66.86   Building wheel for toree (setup.py): started  
#8 68.55   Building wheel for toree (setup.py): finished with status 'done'   
#8 68.55   Stored in directory: /tmp/pip-ephem-wheel-cache-b297gkpf/wheels/1c/fe/6a/1a8d5d7d0274ccd5c160f3e2ef9477f3b071b7f9bb0ce6c96a
#8 68.82 Successfully built toree   
#8 69.19 Installing collected packages: toree
#8 69.29 Successfully installed toree-0.3.0
#8 69.69 [ToreeInstall] Installing Apache Toree version 0.3.0   
#8 69.69 [ToreeInstall] 
#8 69.69 Apache Toree is an effort undergoing incubation at the Apache Software 
#8 69.69 Foundation (ASF), sponsored by the Apache Incubator PMC. 
#8 69.69
#8 69.69 Incubation is required of all newly accepted projects until a further review
#8 69.69 indicates that the infrastructure, communications, and decision making process
#8 69.69 have stabilized in a manner consistent with other successful ASF projects.  
#8 69.69
#8 69.69 While incubation status is not necessarily a reflection of the completeness 
#8 69.69 or stability of the code, it does indicate that the project has yet to be   
#8 69.69 fully endorsed by the ASF. 
#8 69.69 [ToreeInstall] Creating kernel Scala
#8 69.69 Traceback (most recent call last):
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 528, in get  
#8 69.69     value = obj._trait_values[self.name] 
#8 69.69 KeyError: 'kernel_spec_manager'   
#8 69.69
#8 69.69 During handling of the above exception, another exception occurred:  
#8 69.69
#8 69.69 Traceback (most recent call last):
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 528, in get  
#8 69.69     value = obj._trait_values[self.name] 
#8 69.69 KeyError: 'data_dir'
#8 69.69
#8 69.69 During handling of the above exception, another exception occurred:  
#8 69.69
#8 69.69 Traceback (most recent call last):
#8 69.69   File "/opt/conda/bin/jupyter-toree", line 10, in <module>   
#8 69.69     sys.exit(main())
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/toree/toreeapp.py", line 165, in main      
#8 69.69     ToreeApp.launch_instance()    
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/config/application.py", line 658, in launch_instance  
#8 69.69     app.start()     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/toree/toreeapp.py", line 162, in start     
#8 69.69     return self.subapp.start()    
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/toree/toreeapp.py", line 125, in start     
#8 69.69     install_dir = self.kernel_spec_manager.install_kernel_spec(self.sourcedir,     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 556, in __get__     
#8 69.69     return self.get(obj, cls)     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 535, in get  
#8 69.69     value = self._validate(obj, dynamic_default())     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/jupyter_client/kernelspecapp.py", line 87, in _kernel_spec_manager_default    
#8 69.69     return KernelSpecManager(data_dir=self.data_dir)   
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 556, in __get__     
#8 69.69     return self.get(obj, cls)     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py", line 535, in get  
#8 69.69     value = self._validate(obj, dynamic_default())     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/jupyter_core/application.py", line 93, in _data_dir_default     
#8 69.69     ensure_dir_exists(d, mode=0o700)     
#8 69.69   File "/opt/conda/lib/python3.7/site-packages/jupyter_core/utils/__init__.py", line 13, in ensure_dir_exists  
#8 69.69     os.makedirs(path, mode=mode)  
#8 69.69   File "/opt/conda/lib/python3.7/os.py", line 211, in makedirs
#8 69.69     makedirs(head, exist_ok=exist_ok)    
#8 69.69   File "/opt/conda/lib/python3.7/os.py", line 211, in makedirs
#8 69.69     makedirs(head, exist_ok=exist_ok)    
#8 69.69   File "/opt/conda/lib/python3.7/os.py", line 221, in makedirs
#8 69.69     mkdir(name, mode)      
#8 69.69 PermissionError: [Errno 13] Permission denied: '/home/jovyan/.local' 
------  
failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c pip install --no-cache-dir     https://dist.apach
e.org/repos/dist/release/incubator/toree/0.3.0-incubating/toree-pip/toree-0.3.0.tar.gz     &&     jupyter toree install --sys-prefix &&
     rm -rf /home/$NB_USER/.local &&     fix-permissions $CONDA_DIR &&     fix-permissions /home/$NB_USER]: exit code: 1

Here is the docker version

$ docker version
Client:
 Version:           19.03.0-beta3
 API version:       1.40
 Go version:        go1.12.4
 Git commit:        c55e026
 Built:             Thu Apr 25 02:58:59 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.0-beta3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.4
  Git commit:       c55e026
  Built:            Thu Apr 25 02:57:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.5
  GitCommit:        bb71b10fd8f58240ca47fbb579b9d1028eea7c84
 runc:
  Version:          1.0.0-rc6+dev
  GitCommit:        2b18fe1d885ee5083ef9f0838fee39b62d653e30
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
$ docker buildx version
github.com/docker/buildx  .m

add "env" driver that connects to $BUILDKIT_HOST

This driver allows creating a buildx instance that points directly at an existing buildkit endpoint.

docker buildx create --driver=env unix:///var/lib/buildkitd.sock
With no endpoint argument, docker buildx create --driver=env uses the BUILDKIT_HOST env value, defaulting to a UNIX socket.

I guess TLS info is just passed with a custom driver-opt:

docker buildx create --driver=env --driver-opt ca=mycafile.pem https://foobar

rm/stop commands are no-ops in this driver.

A better name could be "remote".

Wrong Accept header set when doing a multi-arch build

Steps to Reproduce

  1. Clone the Docker doodle and cd into a doodle dir (e.g. birthday2019)

  2. Do a multi-arch build and push to a registry

docker buildx build -f Dockerfile.cross --platform linux/amd64,linux/arm64,linux/arm/v8,linux/s390x,linux/ppc64le,windows/amd64 -t rabrams-dev.dtr.caas.docker.io/admin/alpine:latest --push .
  3. Observe that buildx will try to push individual image manifests before unifying them into a manifest list. You will get a PUT request to /v2/admin/alpine/manifests/latest with this payload:
{
   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
   "schemaVersion": 2,
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "digest": "sha256:517227285745b426e5eb4d9d63dedc1b6759c6ac640fd432c0c5f011b709aa74",
      "size": 801
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:58fd16eaae0bf5c343b8f43832206c1e7f3ff712bee5257a90cf2b51703b58e9",
         "size": 100457713
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "digest": "sha256:3ac6604e4ee640afc57931337dc8691e323d4f52280b2986b4e54b83b179e932",
         "size": 1403916
      }
   ]
}
  4. Buildx will then do a HEAD request for the manifest it just created, but sets a manifest-list Accept header:
Accept:         application/vnd.docker.distribution.manifest.list.v2+json, *

As a consequence, the manifest is considered invalid and the client gets a 400.
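For comparison, a client that can handle either response typically advertises both media types. An illustrative request header (not buildx's actual one) that would let the registry return the manifest it just pushed:

```
Accept: application/vnd.docker.distribution.manifest.v2+json,
        application/vnd.docker.distribution.manifest.list.v2+json
```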

add process driver

Add a driver that can detect buildkitd in the $PATH and run it directly. For example, this driver could be used with #12. A pidfile should be used for keeping track of the daemon and monitoring its state.

Sharing a build stage between architectures

Hi,

We have a lot of images which have an npm stage purely to build/compile JS/CSS assets. Doing a multi-arch build runs the stage for each architecture, which can get very time-consuming even though the output is the same. For instance, the project I have open just now took 7 minutes to run the npm scripts for the linux/arm/v7 stage, compared to about 1 minute for linux/amd64.

I suspect that kind of asset step is quite common, so I just wanted to raise a feature request for some way of marking a stage as 'shared' during multi-arch builds.
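A workaround sketch in the meantime, assuming a Dockerfile frontend that exposes the BUILDPLATFORM build argument: pin the asset stage to the build host's own platform, so the stage is identical for every target platform and BuildKit can share its result instead of re-running it under emulation (image and stage names here are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
# Run the asset stage natively on the build host; because the stage is
# then identical for every target platform, its result can be shared.
FROM --platform=$BUILDPLATFORM node:12-alpine AS assets
WORKDIR /src
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build

# The final stage is still built per target platform as usual.
FROM nginx:alpine
COPY --from=assets /src/dist /usr/share/nginx/html
```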

And I just wanted to also say that buildx is really nice; thank you! :-)

Documentation missing some points

Hi all,

I've been using buildx on macOS Docker Desktop (edge version) to build some multiarch images. I'm now trying to replicate the buildx experience on Linux, in this case Debian stretch with QEMU installed and docker 19.03. I've had limited success, but I'm really looking forward to getting this working.

First, if starting from scratch, the docs need a list of required installs on the host to make this work (if there are any). Secondly, in order to make the build work on 19.03+ I had to enable experimental mode in Docker. It would be nice if this could be added to the docs.

Now that it builds, I can get this working:

$ docker buildx build .
[+] Building 8.4s (23/32)
 => ...

My docker buildx ls, however, shows support only for linux/amd64, despite my having QEMU installed. I'm guessing I need to link this somehow. I tried:

docker buildx create --platform linux/amd64,linux/arm64,linux/arm/v7 --name mybuilder

This seems to work:

NAME/NODE    DRIVER/ENDPOINT             STATUS   PLATFORMS
mybuilder *  docker-container
  mybuilder0 unix:///var/run/docker.sock inactive linux/amd64, linux/arm64, linux/arm/v7

However I can't build on those target platforms :(

root@host:/etc/docker# docker buildx build --platform linux/arm64 -t richarvey/nginx-demo --push .
[+] Building 2.0s (3/3) FINISHED
 => [internal] booting buildkit                                                                                                                                                                                    1.8s
 => => pulling image moby/buildkit:master                                                                                                                                                                          1.3s
 => => creating container buildx_buildkit_mybuilder0                                                                                                                                                               0.5s
 => [internal] load .dockerignore                                                                                                                                                                                  0.0s
 => => transferring context: 2B                                                                                                                                                                                    0.0s
 => [internal] load build definition from Dockerfile                                                                                                                                                               0.1s
 => => transferring dockerfile: 2B                                                                                                                                                                                 0.0s
failed to solve: rpc error: code = Unknown desc = failed to read dockerfile: open /tmp/buildkit-mount550903397/Dockerfile: no such file or directory

I'm probably missing a really simple step but couldn't find it in the guide. Any help is really appreciated.
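Two hedged guesses based on the output above: the new builder may not be selected and booted yet, and the "transferring dockerfile: 2B" line suggests the directory being built from (here /etc/docker) contains no Dockerfile at all:

```shell
# Select and boot the builder created earlier:
#   docker buildx use mybuilder
#   docker buildx inspect --bootstrap
# And make sure the build context actually contains a Dockerfile:
[ -f Dockerfile ] && echo "Dockerfile present in $(pwd)" \
  || echo "no Dockerfile in $(pwd)"
```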

[bug] Error: failed to solve: rpc error: code = Unknown desc = content digest sha256:…: not found

Summary

This error occurs randomly, and causes a build to fail when otherwise it would have succeeded. On re-running the exact same build, the error goes away.

Environment

docker-buildx version: standalone binary downloaded from https://github.com/docker/buildx/releases/download/v0.2.2/buildx-v0.2.2.linux-amd64 (sha256 5c31e171440213bb7fffd023611b1aaa7e0498162c54cb708c2a8abe3679717e)

Docker engine version:

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 01:59:36 2019
  OS/Arch:          linux/amd64
  Experimental:     false

How to reproduce

Unfortunately, I have not found a way to reproduce it reliably. However it does not seem to be related to the contents of the Dockerfile, since I have reproduced it (accidentally) with two completely unrelated Dockerfiles.

Based on my understanding of the error message, it appears that --load is required to reproduce.

Here is the output of the most recent occurrence:

+ /home/sh/bin/docker-buildx build --load --build-arg user=sh --build-arg uid=1002 --build-arg docker_guid=999 -t dev -
[+] Building 11.1s (54/54) FINISHED
 => [internal] load .dockerignore                                                               0.0s
 => => transferring context: 2B                                                                 0.0s
 => [internal] load build definition from Dockerfile                                            0.1s
 => => transferring dockerfile: 2.30kB                                                          0.0s
 => [internal] load metadata for docker.io/library/alpine:latest                                0.4s
 => [1/49] FROM docker.io/library/[email protected]:ca1c944a4f8486a153024d9965aafbe24f5723c1d5c02f  0.0s
 => => resolve docker.io/library/[email protected]:ca1c944a4f8486a153024d9965aafbe24f5723c1d5c02f4  0.0s
 => CACHED [2/49] RUN apk update                                                                0.0s
 => CACHED [3/49] RUN apk add openssh                                                           0.0s
 => CACHED [4/49] RUN apk add bash                                                              0.0s
 => CACHED [5/49] RUN apk add bind-tools                                                        0.0s
 => CACHED [6/49] RUN apk add curl                                                              0.0s
 => CACHED [7/49] RUN apk add docker                                                            0.0s
 => CACHED [8/49] RUN apk add g++                                                               0.0s
 => CACHED [9/49] RUN apk add gcc                                                               0.0s
 => CACHED [10/49] RUN apk add git                                                              0.0s
 => CACHED [11/49] RUN apk add git-perl                                                         0.0s
 => CACHED [12/49] RUN apk add make                                                             0.0s
 => CACHED [13/49] RUN apk add python                                                           0.0s
 => CACHED [14/49] RUN apk add openssl-dev                                                      0.0s
 => CACHED [15/49] RUN apk add vim                                                              0.0s
 => CACHED [16/49] RUN apk add py-pip                                                           0.0s
 => CACHED [17/49] RUN apk add file                                                             0.0s
 => CACHED [18/49] RUN apk add groff                                                            0.0s
 => CACHED [19/49] RUN apk add jq                                                               0.0s
 => CACHED [20/49] RUN apk add man                                                              0.0s
 => CACHED [21/49] RUN cd /tmp && git clone https://github.com/AGWA/git-crypt && cd git-crypt   0.0s
 => CACHED [22/49] RUN apk add go                                                               0.0s
 => CACHED [23/49] RUN apk add coreutils                                                        0.0s
 => CACHED [24/49] RUN apk add python2-dev                                                      0.0s
 => CACHED [25/49] RUN apk add python3-dev                                                      0.0s
 => CACHED [26/49] RUN apk add tar                                                              0.0s
 => CACHED [27/49] RUN apk add vim                                                              0.0s
 => CACHED [28/49] RUN apk add rsync                                                            0.0s
 => CACHED [29/49] RUN apk add less                                                             0.0s
 => CACHED [30/49] RUN pip install awscli                                                       0.0s
 => CACHED [31/49] RUN curl --silent --location "https://github.com/weaveworks/eksctl/releases  0.0s
 => CACHED [32/49] RUN curl https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-  0.0s
 => CACHED [33/49] RUN curl -L -o /usr/local/bin/kubectl https://storage.googleapis.com/kubern  0.0s
 => CACHED [34/49] RUN curl -L -o /usr/local/bin/kustomize  https://github.com/kubernetes-sigs  0.0s
 => CACHED [35/49] RUN apk add ruby                                                             0.0s
 => CACHED [36/49] RUN apk add ruby-dev                                                         0.0s
 => CACHED [37/49] RUN gem install bigdecimal --no-ri --no-rdoc                                 0.0s
 => CACHED [38/49] RUN gem install kubernetes-deploy --no-ri --no-rdoc                          0.0s
 => CACHED [39/49] RUN apk add npm                                                              0.0s
 => CACHED [40/49] RUN npm config set unsafe-perm true                                          0.0s
 => CACHED [41/49] RUN npm install -g yarn                                                      0.0s
 => CACHED [42/49] RUN npm install -g netlify-cli                                               0.0s
 => CACHED [43/49] RUN apk add libffi-dev                                                       0.0s
 => CACHED [44/49] RUN pip install docker-compose                                               0.0s
 => CACHED [45/49] RUN apk add shadow sudo                                                      0.0s
 => CACHED [46/49] RUN echo '%wheel ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers                     0.0s
 => CACHED [47/49] RUN useradd -G docker,wheel -m -s /bin/bash -u 1002 sh                       0.0s
 => CACHED [48/49] RUN groupmod -o -g 999 docker                                                0.0s
 => CACHED [49/49] WORKDIR /home/sh                                                             0.0s
 => ERROR exporting to oci image format                                                        10.5s
 => => exporting layers                                                                         0.6s
 => => exporting manifest sha256:97c8dedb84183d0d87b6c930b12e1b9396bb81fd9c6587fe2fbb9ae092d30  0.0s
 => => exporting config sha256:ef15b3f6a374fd494a0a586f7b33ac9435953460776e268b20f2eee5f14def6  0.0s
 => => sending tarball                                                                          9.6s
 => importing to docker                                                                         0.0s
------
 > exporting to oci image format:
------
Error: failed to solve: rpc error: code = Unknown desc = content digest sha256:ef15b3f6a374fd494a0a586f7b33ac9435953460776e268b20f2eee5f14def65: not found
Usage:
  /home/sh/bin/docker-buildx build [OPTIONS] PATH | URL | - [flags]

Aliases:
  build, b

Flags:
      --add-host strings         Add a custom host-to-IP mapping (host:ip)
      --build-arg stringArray    Set build-time variables
      --cache-from stringArray   External cache sources (eg. user/app:cache, type=local,src=path/to/dir)
      --cache-to stringArray     Cache export destinations (eg. user/app:cache, type=local,dest=path/to/dir)
  -f, --file string              Name of the Dockerfile (Default is 'PATH/Dockerfile')
  -h, --help                     help for build
      --iidfile string           Write the image ID to the file
      --label stringArray        Set metadata for an image
      --load                     Shorthand for --output=type=docker
      --network string           Set the networking mode for the RUN instructions during build (default "default")
      --no-cache                 Do not use cache when building the image
  -o, --output stringArray       Output destination (format: type=local,dest=path)
      --platform stringArray     Set target platform for build
      --progress string          Set type of progress output (auto, plain, tty). Use plain to show container output (default "auto")
      --pull                     Always attempt to pull a newer version of the image
      --push                     Shorthand for --output=type=registry
      --secret stringArray       Secret file to expose to the build: id=mysecret,src=/local/secret
      --ssh stringArray          SSH agent socket or keys to expose to the build (format: default|<id>[=<socket>|<key>[,<key>]])
  -t, --tag stringArray          Name and optionally a tag in the 'name:tag' format
      --target string            Set the target build stage to build.

Support Jsonnet in `buildx bake` command

Is there interest on adding Jsonnet (http://jsonnet.org/) support to buildx bake template files?

Jsonnet allows lots of flexibility on generating Json objects with variables, functions and much more.
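Even without native support, this can work today as a preprocessing step, assuming bake accepts HCL's JSON form via -f: generate the file with Jsonnet and feed it in (file names and target fields below are illustrative):

```jsonnet
// bake.jsonnet -- compile and use with (commands assumed):
//   jsonnet bake.jsonnet > bake.json && docker buildx bake -f bake.json
local tag(name) = ["docker.io/user/" + name];  // hypothetical registry path
{
  group: { default: { targets: ["app"] } },
  target: {
    app: {
      dockerfile: "Dockerfile",
      tags: tag("app"),
    },
  },
}
```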

Platform darwin, what the? Since when? Is this for real?

In the README there is a mention of building for multiple platforms:

When invoking a build, the --platform flag can be used to specify the target platform for the build output, (e.g. linux/amd64, linux/arm64, darwin/amd64).

Is this suggesting that Docker can now run natively on macOS, similar to building and running a native Windows container instead of running a Linux container inside a virtual machine?

I did a quick Google search but couldn’t find anything obvious. I would have assumed something this major would have been widely advertised, so I’m guessing it’s not the holy grail I thought it was. What does darwin/amd64 actually do in the context of docker buildx build --platform?
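As far as I can tell, darwin/amd64 is only meaningful for cross-compiling build artifacts (e.g. a Go binary built with GOOS=darwin inside the Dockerfile), not for running containers; the result has to go to an exporter other than the Docker daemon. A hedged sketch (the output path is a placeholder; this only prints the command):

```shell
# Cross-compile for darwin/amd64 and export the resulting files to a local
# directory instead of loading an image into the daemon. "./out" is a
# placeholder destination; this sketch only prints the command.
cmd="docker buildx build --platform darwin/amd64 -o type=local,dest=./out ."
echo "$cmd"
```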

bring back prune command

Identical to docker builder prune

Add ability to pipe Dockerfile

This is a feature request. I’d like to pipe a Dockerfile into the buildx build command. Right now I get:

cat Dockerfile | docker buildx build -f - .
WARN[0000] No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 0.1s (2/2) FINISHED
 => [internal] load .dockerignore                                                                                                                                                                                                                                                                                                                                      0.0s
 => => transferring context: 2B                                                                                                                                                                                                                                                                                                                                        0.0s
 => [internal] load build definition from -                                                                                                                                                                                                                                                                                                                            0.0s
 => => transferring dockerfile: 2B                                                                                                                                                                                                                                                                                                                                     0.0s
failed to solve: rpc error: code = Unknown desc = failed to read dockerfile: open /tmp/buildkit-mount406044302/-: no such file or directory
[I] $
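For reference, the classic builder accepts a Dockerfile on stdin while keeping `.` as the build context, so a hedged workaround until buildx supports this might look like the commented command below (the Dockerfile content is a placeholder; the sketch only captures what would be piped):

```shell
# The classic builder supports "docker build -f - ." with the Dockerfile on
# stdin, e.g.:
#   cat Dockerfile | docker build -f - .
# This sketch just captures the Dockerfile that would be piped.
dockerfile=$(cat <<'EOF'
FROM alpine
RUN echo hello
EOF
)
first_line=$(printf '%s\n' "$dockerfile" | head -n 1)
echo "$first_line"
```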

ls output is weird

$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker
  default default         running linux/amd64

Reuse Go build cache to speed up incremental builds of Go applications

Does docker-buildx have a way to reuse intermediary outputs such as Go build cache, to speed up incremental builds? I believe that the regular docker build does not offer a way to do this, so I’m trying my luck here on the bleeding edge :)

The goal would be to make a docker-buildx build of my Go app just as fast as a direct go build. At the moment that’s not the case, because all my dependencies need to be rebuilt every time. I imagine this would be useful in many cases, not just for Go applications.
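BuildKit’s cache mounts can persist the Go build cache between builds. A hedged sketch of a Dockerfile using them (the paths and module layout are assumptions, and the experimental syntax line is required for `RUN --mount`):

```dockerfile
# syntax = docker/dockerfile:experimental
FROM golang:1.12-alpine AS builder
WORKDIR /src
COPY . .
# Persist the Go build and module caches across builds so unchanged
# dependencies are not recompiled. The cache contents live in BuildKit's
# cache, not in any image layer.
RUN --mount=type=cache,target=/root/.cache/go-build \
    --mount=type=cache,target=/go/pkg/mod \
    CGO_ENABLED=0 go build -o /out/main .
```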

Apk add not working on building buildx

Cloning git

# git clone git://github.com/docker/buildx && cd buildx
Cloning into 'buildx'...
remote: Enumerating objects: 15, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 5065 (delta 0), reused 4 (delta 0), pack-reused 5050
Receiving objects: 100% (5065/5065), 5.70 MiB | 9.00 MiB/s, done.
Resolving deltas: 100% (1618/1618), done.

Installing

# make install
./hack/binaries
+ progressFlag=
+ '[' '' == true ']'
+ case $buildmode in
+ binariesDocker
+ mkdir -p bin/tmp
+ export DOCKER_BUILDKIT=1
+ DOCKER_BUILDKIT=1
++ mktemp -t docker-iidfile.XXXXXXXXXX
+ iidfile=/tmp/docker-iidfile.DBjShhbJPS
+ platformFlag=
+ '[' -n '' ']'
+ docker build --target=binaries --iidfile /tmp/docker-iidfile.DBjShhbJPS --force-rm .
[+] Building 12.9s (14/17)
 => [internal] load build definition from Dockerfile                                                                         0.0s
 => => transferring dockerfile: 3.01kB                                                                                       0.0s
 => [internal] load .dockerignore                                                                                            0.0s
 => => transferring context: 56B                                                                                             0.0s
 => resolve image config for docker.io/docker/dockerfile:1.1-experimental                                                    1.2s
 => CACHED docker-image://docker.io/docker/dockerfile:[email protected]:9022e911101f01b2854c7a4b2c77f524b998891941da5  0.0s
 => [internal] load build definition from Dockerfile                                                                         0.0s
 => => transferring dockerfile: 3.01kB                                                                                       0.0s
 => [internal] load .dockerignore                                                                                            0.0s
 => [internal] load metadata for docker.io/tonistiigi/xx:[email protected]:6f7d999551dd471b58f70716754290495690efa8421e0a1fcf18  0.0s
 => [internal] load metadata for docker.io/library/golang:1.12-alpine                                                        0.4s
 => CACHED [xgo 1/1] FROM docker.io/tonistiigi/xx:[email protected]:6f7d999551dd471b58f70716754290495690efa8421e0a1fcf18eb11d0c  0.0s
 => CACHED [internal] helper image for file operations                                                                       0.0s
 => [gobase 1/3] FROM docker.io/library/golang:[email protected]:87e527712342efdb8ec5ddf2d57e87de7bd4d2fedf9f6f3547ee5768b  0.0s
 => [internal] load build context                                                                                            0.7s
 => => transferring context: 29.64MB                                                                                         0.7s
 => CACHED [gobase 2/3] COPY --from=xgo / /                                                                                  0.0s
 => ERROR [gobase 3/3] RUN apk add --no-cache file git                                                                      10.7s
------
 > [gobase 3/3] RUN apk add --no-cache file git:
#13 0.526 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
#13 5.534 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
#13 5.534 WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz: temporary error (try again later)
#13 10.54 WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz: temporary error (try again later)
#13 10.54 ERROR: unsatisfiable constraints:
#13 10.54   file (missing):
#13 10.54     required by: world[file]
#13 10.54   git (missing):
#13 10.54     required by: world[git]
------
rpc error: code = Unknown desc = executor failed running [/bin/sh -c apk add --no-cache file git]: exit code: 2
Makefile:5: recipe for target 'binaries' failed
make: *** [binaries] Error 1

Docker version

# docker version
Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.1
 Git commit:        2d0083d
 Built:             Wed Jul  3 12:13:59 2019
 OS/Arch:           linux/amd64
 Experimental:      true

Server:
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.1
  Git commit:       2d0083d
  Built:            Mon Jul  1 19:31:12 2019
  OS/Arch:          linux/amd64
  Experimental:     true

The apk command also fails when DOCKER_CLI_EXPERIMENTAL and DOCKER_BUILDKIT are enabled in the environment while building images.

Building without the experimental features enabled works fine.

buildfile command

Add another command that produces a higher-level build operation based on config from a file. The input file could be a compose file or another format that stacks the CLI options into separate targets.

On invoking the command, all builds run concurrently, appearing to the user as a single build.

buildx buildfile -f compose.yml [target] [target]..

The file allows stacking all CLI options, including things like exporter and remote cache settings (but probably excluding entitlements). When using multiple files, you can override some properties with a file that is not committed to git. The CLI flags can be used as the highest-priority override. The non-compose format can support sharing code so that groups of options can be reused across multiple targets.

The name needs improvement.
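A hedged sketch of what such a file could look like in a compose-style format (every field name here is an assumption about the proposed format, not an existing schema):

```yaml
# Hypothetical buildfile: each target stacks CLI options such as
# platforms, tags, and cache settings.
targets:
  app:
    context: .
    dockerfile: Dockerfile
    platforms: [linux/amd64, linux/arm64]
    tags: [example/app:latest]
    cache-from: [type=registry,ref=example/app:cache]
    output: [type=registry]
```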

Support for linux/arm/v8?

First of all, the new multi-arch support in docker is amazing and the QEMU integration into Docker Desktop is a real life-saver!

I work with a huge variety of ARM machines all the way from armv5l to aarch64. We have a number of devices running armv8l. Is there any chance this could be added as a supported target (i.e. linux/arm/v8)?

Building images for multi-arch with --load parameter fails

While trying to build images for multi-architecture (AMD64 and ARM64), I tried to load them into the Docker daemon with the --load parameter but I got an error:

➜ docker buildx build --platform linux/arm64,linux/amd64 --load  -t carlosedp/test:v1  .
[+] Building 1.3s (24/24) FINISHED
 => [internal] load .dockerignore                                                                                                                                                        0.0s
 => => transferring context: 2B                                                                                                                                                          0.0s
 => [internal] load build definition from Dockerfile                                                                                                                                     0.0s
 => => transferring dockerfile: 115B                                                                                                                                                     0.0s
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:latest                                                                                                             0.8s
 => [linux/amd64 internal] load metadata for docker.io/library/golang:1.12-alpine                                                                                                        1.0s
 => [linux/arm64 internal] load metadata for docker.io/library/golang:1.12-alpine                                                                                                        1.2s
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:latest                                                                                                             1.2s
 => [linux/amd64 builder 1/5] FROM docker.io/library/golang:[email protected]:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                          0.0s
 => => resolve docker.io/library/golang:[email protected]:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                                              0.0s
 => [linux/amd64 stage-1 1/4] FROM docker.io/library/[email protected]:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                      0.0s
 => => resolve docker.io/library/[email protected]:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                                          0.0s
 => [internal] load build context                                                                                                                                                        0.0s
 => => transferring context: 232B                                                                                                                                                        0.0s
 => CACHED [linux/amd64 stage-1 2/4] RUN apk add --no-cache file &&     rm -rf /var/cache/apk/*                                                                                          0.0s
 => CACHED [linux/amd64 builder 2/5] WORKDIR /go/src/app                                                                                                                                 0.0s
 => CACHED [linux/amd64 builder 3/5] ADD . /go/src/app/                                                                                                                                  0.0s
 => CACHED [linux/amd64 builder 4/5] RUN CGO_ENABLED=0 go build -o main .                                                                                                                0.0s
 => CACHED [linux/amd64 builder 5/5] RUN mv /go/src/app/main /                                                                                                                           0.0s
 => CACHED [linux/amd64 stage-1 3/4] COPY --from=builder /main /main                                                                                                                     0.0s
 => [linux/arm64 builder 1/5] FROM docker.io/library/golang:[email protected]:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                          0.0s
 => => resolve docker.io/library/golang:[email protected]:1a5f8b6db670a7776ce5beeb69054a7cf7047a5d83176d39b94665a54cfb9756                                                              0.0s
 => [linux/arm64 stage-1 1/4] FROM docker.io/library/[email protected]:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                      0.0s
 => => resolve docker.io/library/[email protected]:28ef97b8686a0b5399129e9b763d5b7e5ff03576aa5580d6f4182a49c5fe1913                                                                          0.0s
 => CACHED [linux/arm64 stage-1 2/4] RUN apk add --no-cache file &&     rm -rf /var/cache/apk/*                                                                                          0.0s
 => CACHED [linux/arm64 builder 2/5] WORKDIR /go/src/app                                                                                                                                 0.0s
 => CACHED [linux/arm64 builder 3/5] ADD . /go/src/app/                                                                                                                                  0.0s
 => CACHED [linux/arm64 builder 4/5] RUN CGO_ENABLED=0 go build -o main .                                                                                                                0.0s
 => CACHED [linux/arm64 builder 5/5] RUN mv /go/src/app/main /                                                                                                                           0.0s
 => CACHED [linux/arm64 stage-1 3/4] COPY --from=builder /main /main                                                                                                                     0.0s
 => ERROR exporting to oci image format                                                                                                                                                  0.0s
------
 > exporting to oci image format:
------
failed to solve: rpc error: code = Unknown desc = docker exporter does not currently support exporting manifest lists

I understand that the daemon can’t see the manifest lists but I believe there should be a way to tag the images with some variable, like:

docker buildx build --platform linux/arm64,linux/amd64 --load -t carlosedp/test:v1-$ARCH .

That would load both images into the daemon, ignoring the manifest list in this case.
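Until --load understands manifest lists, a hedged workaround is to run one single-platform build per architecture; the sketch below only prints the commands it would run:

```shell
# Build and load one platform at a time, suffixing the tag with the
# architecture. Replace "echo" with the real command to execute it.
last=""
for arch in amd64 arm64; do
  last="docker buildx build --platform linux/$arch --load -t carlosedp/test:v1-$arch ."
  echo "$last"
done
```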

simplify controlling plain output

Some CI systems report a terminal even when they really shouldn’t, and setting the --progress=plain flag in every script is cumbersome. We should also provide an environment variable that makes the auto behavior much easier to configure. Maybe we could just detect CONTINUOUS_INTEGRATION=1 directly, but some users may prefer tty output even in CI.

BUILDKIT_PROGRESS_PLAIN=1
BUILDKIT_PROGRESS=plain
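The proposed behavior could resolve the progress mode roughly like this (the precedence order is my assumption, not a decided design):

```shell
# Pick a progress mode: an explicit BUILDKIT_PROGRESS wins, then a CI
# environment forces plain, otherwise fall back to auto detection.
progress_mode() {
  if [ -n "${BUILDKIT_PROGRESS:-}" ]; then
    echo "$BUILDKIT_PROGRESS"
  elif [ "${CONTINUOUS_INTEGRATION:-}" = "1" ]; then
    echo "plain"
  else
    echo "auto"
  fi
}

BUILDKIT_PROGRESS=plain
mode=$(progress_mode)
echo "$mode"
```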

@tiborvass
