switch to go.podman.io/image/v5

CrazyMax
2025-09-13 19:39:11 +02:00
parent a70601f4b0
commit ed706b60ad
233 changed files with 299 additions and 289 deletions

189 vendor/go.podman.io/image/v5/LICENSE generated vendored Normal file

@@ -0,0 +1,189 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

253 vendor/go.podman.io/image/v5/docker/body_reader.go generated vendored Normal file

@@ -0,0 +1,253 @@
package docker
import (
"context"
"errors"
"fmt"
"io"
"math"
"math/rand/v2"
"net/http"
"net/url"
"strconv"
"strings"
"syscall"
"time"
"github.com/sirupsen/logrus"
)
const (
// bodyReaderMinimumProgress is the minimum progress (in bytes) we consider a good reason to retry
bodyReaderMinimumProgress = 1 * 1024 * 1024
// bodyReaderMSSinceLastRetry is the minimum time (in milliseconds) since the last retry we consider a good reason to retry
bodyReaderMSSinceLastRetry = 60 * 1_000
)
// bodyReader is an io.ReadCloser returned by dockerImageSource.GetBlob,
// which can transparently resume some (very limited) kinds of aborted connections.
type bodyReader struct {
ctx context.Context
c *dockerClient
path string // path to pass to makeRequest to retry
logURL *url.URL // the URL to use in error messages
firstConnectionTime time.Time
body io.ReadCloser // The currently open connection we use to read data, or nil if there is nothing to read from / close.
lastRetryOffset int64 // -1 if N/A
lastRetryTime time.Time // IsZero() if N/A
offset int64 // Current offset within the blob
lastSuccessTime time.Time // IsZero() if N/A
}
// newBodyReader creates a bodyReader for request path in c.
// firstBody is an already correctly opened body for the blob, returning the full blob from the start.
// If reading from firstBody fails, bodyReader may heuristically decide to resume.
func newBodyReader(ctx context.Context, c *dockerClient, path string, firstBody io.ReadCloser) (io.ReadCloser, error) {
logURL, err := c.resolveRequestURL(path)
if err != nil {
return nil, err
}
res := &bodyReader{
ctx: ctx,
c: c,
path: path,
logURL: logURL,
firstConnectionTime: time.Now(),
body: firstBody,
lastRetryOffset: -1,
lastRetryTime: time.Time{},
offset: 0,
lastSuccessTime: time.Time{},
}
return res, nil
}
// parseDecimalInString ensures that s[start:] starts with a non-negative decimal number, and returns that number and the offset after the number.
func parseDecimalInString(s string, start int) (int64, int, error) {
i := start
for i < len(s) && s[i] >= '0' && s[i] <= '9' {
i++
}
if i == start {
return -1, -1, errors.New("missing decimal number")
}
v, err := strconv.ParseInt(s[start:i], 10, 64)
if err != nil {
return -1, -1, fmt.Errorf("parsing number: %w", err)
}
return v, i, nil
}
// parseExpectedChar ensures that s[pos] is the expected byte, and returns the offset after it.
func parseExpectedChar(s string, pos int, expected byte) (int, error) {
if pos == len(s) || s[pos] != expected {
return -1, fmt.Errorf("missing expected %q", expected)
}
return pos + 1, nil
}
// parseContentRange ensures that res contains a Content-Range header with a byte range, and returns (first, last, completeLength) on success. completeLength can be -1.
func parseContentRange(res *http.Response) (int64, int64, int64, error) {
hdrs := res.Header.Values("Content-Range")
switch len(hdrs) {
case 0:
return -1, -1, -1, errors.New("missing Content-Range: header")
case 1:
break
default:
return -1, -1, -1, fmt.Errorf("ambiguous Content-Range:, %d header values", len(hdrs))
}
hdr := hdrs[0]
expectedPrefix := "bytes "
if !strings.HasPrefix(hdr, expectedPrefix) {
return -1, -1, -1, fmt.Errorf("invalid Content-Range: %q, missing prefix %q", hdr, expectedPrefix)
}
first, pos, err := parseDecimalInString(hdr, len(expectedPrefix))
if err != nil {
return -1, -1, -1, fmt.Errorf("invalid Content-Range: %q, parsing first-pos: %w", hdr, err)
}
pos, err = parseExpectedChar(hdr, pos, '-')
if err != nil {
return -1, -1, -1, fmt.Errorf("invalid Content-Range: %q: %w", hdr, err)
}
last, pos, err := parseDecimalInString(hdr, pos)
if err != nil {
return -1, -1, -1, fmt.Errorf("invalid Content-Range: %q, parsing last-pos: %w", hdr, err)
}
pos, err = parseExpectedChar(hdr, pos, '/')
if err != nil {
return -1, -1, -1, fmt.Errorf("invalid Content-Range: %q: %w", hdr, err)
}
completeLength := int64(-1)
if pos < len(hdr) && hdr[pos] == '*' {
pos++
} else {
completeLength, pos, err = parseDecimalInString(hdr, pos)
if err != nil {
return -1, -1, -1, fmt.Errorf("invalid Content-Range: %q, parsing complete-length: %w", hdr, err)
}
}
if pos < len(hdr) {
return -1, -1, -1, fmt.Errorf("invalid Content-Range: %q, unexpected trailing content", hdr)
}
return first, last, completeLength, nil
}
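For illustration, a minimal same-package sketch of parseContentRange on a typical resumed-download response; the header values are hypothetical, and the fmt and net/http imports already present in this file are assumed:
func exampleParseContentRange() {
	// Hypothetical 206 response carrying a byte range with a known total size.
	res := &http.Response{Header: http.Header{"Content-Range": []string{"bytes 100-999/1000"}}}
	first, last, completeLength, err := parseContentRange(res)
	if err != nil {
		panic(err) // illustration only
	}
	fmt.Println(first, last, completeLength) // 100 999 1000
	// An unknown total size ("*") is reported as completeLength == -1.
	res.Header.Set("Content-Range", "bytes 100-999/*")
	first, last, completeLength, _ = parseContentRange(res)
	fmt.Println(first, last, completeLength) // 100 999 -1
}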
// Read implements io.ReadCloser
func (br *bodyReader) Read(p []byte) (int, error) {
if br.body == nil {
return 0, fmt.Errorf("internal error: bodyReader.Read called on a closed object for %s", br.logURL.Redacted())
}
n, err := br.body.Read(p)
br.offset += int64(n)
switch {
case err == nil || err == io.EOF:
br.lastSuccessTime = time.Now()
return n, err // Unlike the default: case, don't log anything.
case errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, syscall.ECONNRESET):
originalErr := err
redactedURL := br.logURL.Redacted()
if err := br.errorIfNotReconnecting(originalErr, redactedURL); err != nil {
return n, err
}
if err := br.body.Close(); err != nil {
logrus.Debugf("Error closing blob body: %v", err) // … and ignore err otherwise
}
br.body = nil
time.Sleep(1*time.Second + rand.N(100_000*time.Microsecond)) // Some jitter so that a failure blip doesn't cause a deterministic stampede
headers := map[string][]string{
"Range": {fmt.Sprintf("bytes=%d-", br.offset)},
}
res, err := br.c.makeRequest(br.ctx, http.MethodGet, br.path, headers, nil, v2Auth, nil)
if err != nil {
return n, fmt.Errorf("%w (while reconnecting: %v)", originalErr, err)
}
consumedBody := false
defer func() {
if !consumedBody {
res.Body.Close()
}
}()
switch res.StatusCode {
case http.StatusPartialContent: // OK
// A client MUST inspect a 206 response's Content-Type and Content-Range field(s) to determine what parts are enclosed and whether additional requests are needed.
// The recipient of an invalid Content-Range MUST NOT attempt to recombine the received content with a stored representation.
first, last, completeLength, err := parseContentRange(res)
if err != nil {
return n, fmt.Errorf("%w (after reconnecting, invalid Content-Range header: %v)", originalErr, err)
}
// We don't handle responses that start at an unrequested offset, nor responses that terminate before the end of the full blob.
if first != br.offset || (completeLength != -1 && last+1 != completeLength) {
return n, fmt.Errorf("%w (after reconnecting at offset %d, got unexpected Content-Range %d-%d/%d)", originalErr, br.offset, first, last, completeLength)
}
// Continue below
case http.StatusOK:
return n, fmt.Errorf("%w (after reconnecting, server did not process a Range: header, status %d)", originalErr, http.StatusOK)
default:
err := registryHTTPResponseToError(res)
return n, fmt.Errorf("%w (after reconnecting, fetching blob: %v)", originalErr, err)
}
logrus.Debugf("Successfully reconnected to %s", redactedURL)
consumedBody = true
br.body = res.Body
br.lastRetryOffset = br.offset
br.lastRetryTime = time.Now()
return n, nil
default:
logrus.Debugf("Error reading blob body from %s: %#v", br.logURL.Redacted(), err)
return n, err
}
}
// millisecondsSinceOptional is like currentTime.Sub(tm).Milliseconds, but it returns a floating-point value.
// If tm.IsZero(), it returns math.NaN()
func millisecondsSinceOptional(currentTime time.Time, tm time.Time) float64 {
if tm.IsZero() {
return math.NaN()
}
return float64(currentTime.Sub(tm).Nanoseconds()) / 1_000_000.0
}
// errorIfNotReconnecting makes a heuristic decision whether we should reconnect after err at redactedURL; if so, it returns nil,
// otherwise it returns an appropriate error to return to the caller (possibly augmented with data about the heuristic)
func (br *bodyReader) errorIfNotReconnecting(originalErr error, redactedURL string) error {
currentTime := time.Now()
msSinceFirstConnection := millisecondsSinceOptional(currentTime, br.firstConnectionTime)
msSinceLastRetry := millisecondsSinceOptional(currentTime, br.lastRetryTime)
msSinceLastSuccess := millisecondsSinceOptional(currentTime, br.lastSuccessTime)
logrus.Debugf("Reading blob body from %s failed (%#v), decision inputs: total %d @%.3f ms, last retry %d @%.3f ms, last progress @%.3f ms",
redactedURL, originalErr, br.offset, msSinceFirstConnection, br.lastRetryOffset, msSinceLastRetry, msSinceLastSuccess)
progress := br.offset - br.lastRetryOffset
if progress >= bodyReaderMinimumProgress {
logrus.Infof("Reading blob body from %s failed (%v), reconnecting after %d bytes…", redactedURL, originalErr, progress)
return nil
}
if br.lastRetryTime.IsZero() {
logrus.Infof("Reading blob body from %s failed (%v), reconnecting (first reconnection)…", redactedURL, originalErr)
return nil
}
if msSinceLastRetry >= bodyReaderMSSinceLastRetry {
logrus.Infof("Reading blob body from %s failed (%v), reconnecting after %.3f ms…", redactedURL, originalErr, msSinceLastRetry)
return nil
}
logrus.Debugf("Not reconnecting to %s: insufficient progress %d / time since last retry %.3f ms", redactedURL, progress, msSinceLastRetry)
return fmt.Errorf("(heuristic tuning data: total %d @%.3f ms, last retry %d @%.3f ms, last progress @ %.3f ms): %w",
br.offset, msSinceFirstConnection, br.lastRetryOffset, msSinceLastRetry, msSinceLastSuccess, originalErr)
}
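Condensed, the heuristic above reconnects when any of three conditions holds; a standalone sketch (the function name is hypothetical, the constants are the ones defined at the top of this file):
// shouldReconnect restates the decision made by errorIfNotReconnecting:
// retry if we made at least 1 MiB of progress since the last retry, if this
// is the first failure, or if the previous retry was at least 60 s ago.
func shouldReconnect(progressBytes int64, firstFailure bool, msSinceLastRetry float64) bool {
	return progressBytes >= bodyReaderMinimumProgress ||
		firstFailure ||
		msSinceLastRetry >= bodyReaderMSSinceLastRetry
}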
// Close implements io.ReadCloser
func (br *bodyReader) Close() error {
if br.body == nil {
return nil
}
err := br.body.Close()
br.body = nil
return err
}

23 vendor/go.podman.io/image/v5/docker/cache.go generated vendored Normal file

@@ -0,0 +1,23 @@
package docker
import (
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/types"
)
// bicTransportScope returns a BICTransportScope appropriate for ref.
func bicTransportScope(ref dockerReference) types.BICTransportScope {
// Blobs can be reused across the whole registry.
return types.BICTransportScope{Opaque: reference.Domain(ref.ref)}
}
// newBICLocationReference returns a BICLocationReference appropriate for ref.
func newBICLocationReference(ref dockerReference) types.BICLocationReference {
// Blobs are scoped to repositories (the tag/digest are not necessary to reuse a blob).
return types.BICLocationReference{Opaque: ref.ref.Name()}
}
// parseBICLocationReference returns a repository for encoded lr.
func parseBICLocationReference(lr types.BICLocationReference) (reference.Named, error) {
return reference.ParseNormalizedNamed(lr.Opaque)
}
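To make the scoping concrete, a hedged sketch using the reference package directly (the repository name is an example; the fmt import is assumed):
func exampleCacheScopes() {
	named, err := reference.ParseNormalizedNamed("quay.io/podman/hello")
	if err != nil {
		panic(err) // illustration only
	}
	fmt.Println(reference.Domain(named)) // "quay.io" — what bicTransportScope stores in Opaque
	fmt.Println(named.Name())            // "quay.io/podman/hello" — what newBICLocationReference stores
	// parseBICLocationReference simply reverses the second step:
	repo, err := reference.ParseNormalizedNamed(named.Name())
	_, _ = repo, err
}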


@@ -0,0 +1,161 @@
// Code below is taken from https://github.com/distribution/distribution/blob/a4d9db5a884b70be0c96dd6a7a9dbef4f2798c51/registry/client/errors.go
// Copyright 2022 github.com/distribution/distribution authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package docker
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"slices"
"github.com/docker/distribution/registry/api/errcode"
)
// errNoErrorsInBody is returned when an HTTP response body parses to an empty
// errcode.Errors slice.
var errNoErrorsInBody = errors.New("no error details found in HTTP response body")
// UnexpectedHTTPStatusError is returned when an unexpected HTTP status is
// returned by a registry API call.
type UnexpectedHTTPStatusError struct {
// StatusCode code as returned from the server, so callers can
// match the exact code to make certain decisions if needed.
StatusCode int
// status is the text shown in the error message; not exported, as callers should match on StatusCode instead.
status string
}
func (e UnexpectedHTTPStatusError) Error() string {
return fmt.Sprintf("received unexpected HTTP status: %s", e.status)
}
func newUnexpectedHTTPStatusError(resp *http.Response) UnexpectedHTTPStatusError {
return UnexpectedHTTPStatusError{
StatusCode: resp.StatusCode,
status: resp.Status,
}
}
// unexpectedHTTPResponseError is returned when an expected HTTP status code
// is returned, but the content was unexpected and failed to be parsed.
type unexpectedHTTPResponseError struct {
ParseErr error
StatusCode int
Response []byte
}
func (e *unexpectedHTTPResponseError) Error() string {
return fmt.Sprintf("error parsing HTTP %d response body: %s: %q", e.StatusCode, e.ParseErr.Error(), string(e.Response))
}
func parseHTTPErrorResponse(statusCode int, r io.Reader) error {
var errors errcode.Errors
body, err := io.ReadAll(r)
if err != nil {
return err
}
// For backward compatibility, handle irregularly formatted
// messages that contain a "details" field.
var detailsErr struct {
Details string `json:"details"`
}
err = json.Unmarshal(body, &detailsErr)
if err == nil && detailsErr.Details != "" {
switch statusCode {
case http.StatusUnauthorized:
return errcode.ErrorCodeUnauthorized.WithMessage(detailsErr.Details)
case http.StatusTooManyRequests:
return errcode.ErrorCodeTooManyRequests.WithMessage(detailsErr.Details)
default:
return errcode.ErrorCodeUnknown.WithMessage(detailsErr.Details)
}
}
if err := json.Unmarshal(body, &errors); err != nil {
return &unexpectedHTTPResponseError{
ParseErr: err,
StatusCode: statusCode,
Response: body,
}
}
if len(errors) == 0 {
// If there was no error specified in the body, return
// UnexpectedHTTPResponseError.
return &unexpectedHTTPResponseError{
ParseErr: errNoErrorsInBody,
StatusCode: statusCode,
Response: body,
}
}
return errors
}
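A same-package sketch of the two body shapes parseHTTPErrorResponse handles; both payloads are hypothetical, and the strings import is assumed (fmt and net/http are already imported here):
func exampleParseHTTPErrorResponse() {
	// A spec-compliant body decodes into an errcode.Errors value.
	body := `{"errors":[{"code":"MANIFEST_UNKNOWN","message":"manifest unknown"}]}`
	err := parseHTTPErrorResponse(http.StatusNotFound, strings.NewReader(body))
	fmt.Println(err)
	// A legacy body with only a "details" field is mapped to an error code
	// chosen from the HTTP status (here, errcode.ErrorCodeUnauthorized).
	err = parseHTTPErrorResponse(http.StatusUnauthorized, strings.NewReader(`{"details":"token expired"}`))
	fmt.Println(err)
}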
func makeErrorList(err error) []error {
if errL, ok := err.(errcode.Errors); ok {
return []error(errL)
}
return []error{err}
}
func mergeErrors(err1, err2 error) error {
return errcode.Errors(append(slices.Clone(makeErrorList(err1)), makeErrorList(err2)...))
}
// handleErrorResponse returns an error parsed from the HTTP response for an
// unsuccessful HTTP response code (in the range 400 - 499 inclusive). An
// UnexpectedHTTPStatusError is returned for response codes outside of that
// range.
func handleErrorResponse(resp *http.Response) error {
switch {
case resp.StatusCode == http.StatusUnauthorized:
// Check for OAuth errors within the `WWW-Authenticate` header first
// See https://tools.ietf.org/html/rfc6750#section-3
for c := range iterateAuthHeader(resp.Header) {
if c.Scheme == "bearer" {
var err errcode.Error
// codes defined at https://tools.ietf.org/html/rfc6750#section-3.1
switch c.Parameters["error"] {
case "invalid_token":
err.Code = errcode.ErrorCodeUnauthorized
case "insufficient_scope":
err.Code = errcode.ErrorCodeDenied
default:
continue
}
if description := c.Parameters["error_description"]; description != "" {
err.Message = description
} else {
err.Message = err.Code.Message()
}
return mergeErrors(err, parseHTTPErrorResponse(resp.StatusCode, resp.Body))
}
}
fallthrough
case resp.StatusCode >= 400 && resp.StatusCode < 500:
err := parseHTTPErrorResponse(resp.StatusCode, resp.Body)
if uErr, ok := err.(*unexpectedHTTPResponseError); ok && resp.StatusCode == 401 {
return errcode.ErrorCodeUnauthorized.WithDetail(uErr.Response)
}
return err
}
return newUnexpectedHTTPStatusError(resp)
}

1214 vendor/go.podman.io/image/v5/docker/docker_client.go generated vendored Normal file

File diff suppressed because it is too large

186 vendor/go.podman.io/image/v5/docker/docker_image.go generated vendored Normal file

@@ -0,0 +1,186 @@
package docker
import (
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"strings"
"github.com/opencontainers/go-digest"
"github.com/sirupsen/logrus"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/image"
"go.podman.io/image/v5/manifest"
"go.podman.io/image/v5/types"
)
// Image is a Docker-specific implementation of types.ImageCloser with a few extra methods
// which are specific to Docker.
type Image struct {
types.ImageCloser
src *dockerImageSource
}
// newImage returns a new Image interface type after setting up
// a client to the registry hosting the given image.
// The caller must call .Close() on the returned Image.
func newImage(ctx context.Context, sys *types.SystemContext, ref dockerReference) (types.ImageCloser, error) {
s, err := newImageSource(ctx, sys, ref)
if err != nil {
return nil, err
}
img, err := image.FromSource(ctx, sys, s)
if err != nil {
return nil, err
}
return &Image{ImageCloser: img, src: s}, nil
}
// SourceRefFullName returns a fully expanded name for the repository this image is in.
func (i *Image) SourceRefFullName() string {
return i.src.logicalRef.ref.Name()
}
// GetRepositoryTags lists all tags available in the repository. The tag
// provided inside the ImageReference will be ignored. (This is a
// backward-compatible shim method which calls the module-level
// GetRepositoryTags)
func (i *Image) GetRepositoryTags(ctx context.Context) ([]string, error) {
return GetRepositoryTags(ctx, i.src.c.sys, i.src.logicalRef)
}
// GetRepositoryTags lists all tags available in the repository. The tag
// provided inside the ImageReference will be ignored.
func GetRepositoryTags(ctx context.Context, sys *types.SystemContext, ref types.ImageReference) ([]string, error) {
dr, ok := ref.(dockerReference)
if !ok {
return nil, errors.New("ref must be a dockerReference")
}
registryConfig, err := loadRegistryConfiguration(sys)
if err != nil {
return nil, err
}
path := fmt.Sprintf(tagsPath, reference.Path(dr.ref))
client, err := newDockerClientFromRef(sys, dr, registryConfig, false, "pull")
if err != nil {
return nil, fmt.Errorf("failed to create client: %w", err)
}
defer client.Close()
tags := make([]string, 0)
for {
res, err := client.makeRequest(ctx, http.MethodGet, path, nil, nil, v2Auth, nil)
if err != nil {
return nil, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return nil, fmt.Errorf("fetching tags list: %w", registryHTTPResponseToError(res))
}
var tagsHolder struct {
Tags []string
}
if err = json.NewDecoder(res.Body).Decode(&tagsHolder); err != nil {
return nil, err
}
for _, tag := range tagsHolder.Tags {
if _, err := reference.WithTag(dr.ref, tag); err != nil { // Ensure the tag does not contain unexpected values
// Per https://github.com/containers/skopeo/issues/2409 , Sonatype Nexus 3.58, contrary
// to the spec, may include JSON null values in the list; and Go silently parses them as "".
if tag == "" {
logrus.Debugf("Ignoring invalid empty tag")
continue
}
// Per https://github.com/containers/skopeo/issues/2346 , unknown versions of JFrog Artifactory,
// contrary to the tag format specified in
// https://github.com/opencontainers/distribution-spec/blob/8a871c8234977df058f1a14e299fe0a673853da2/spec.md?plain=1#L160 ,
// include digests in the list.
if _, err := digest.Parse(tag); err == nil {
logrus.Debugf("Ignoring invalid tag %q matching a digest format", tag)
continue
}
return nil, fmt.Errorf("registry returned invalid tag %q: %w", tag, err)
}
tags = append(tags, tag)
}
link := res.Header.Get("Link")
if link == "" {
break
}
linkURLPart, _, _ := strings.Cut(link, ";")
linkURL, err := url.Parse(strings.Trim(linkURLPart, "<>"))
if err != nil {
return tags, err
}
// can be relative or absolute, but we only want the path (and I
// guess we're in trouble if it forwards to a new place...)
path = linkURL.Path
if linkURL.RawQuery != "" {
path += "?"
path += linkURL.RawQuery
}
}
return tags, nil
}
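GetRepositoryTags is exported, so it can be called directly; a minimal usage sketch (the repository is an example, and the context, fmt, docker, and types imports are assumed on the caller's side):
func exampleListTags() {
	ref, err := docker.ParseReference("//quay.io/podman/hello") // docker:// transport syntax
	if err != nil {
		panic(err) // illustration only
	}
	tags, err := docker.GetRepositoryTags(context.Background(), &types.SystemContext{}, ref)
	if err != nil {
		panic(err)
	}
	fmt.Println(tags)
}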
// GetDigest returns the image's digest
// Use this to optimize and avoid use of an ImageSource based on the returned digest;
// if you are going to use an ImageSource anyway, it's more efficient to create it first
// and compute the digest from the value returned by GetManifest.
// NOTE: Implemented to avoid Docker Hub API limits, and mirror configuration may be
// ignored (but may be implemented in the future)
func GetDigest(ctx context.Context, sys *types.SystemContext, ref types.ImageReference) (digest.Digest, error) {
dr, ok := ref.(dockerReference)
if !ok {
return "", errors.New("ref must be a dockerReference")
}
if dr.isUnknownDigest {
return "", fmt.Errorf("docker: reference %q is for unknown digest case; cannot get digest", dr.StringWithinTransport())
}
tagOrDigest, err := dr.tagOrDigest()
if err != nil {
return "", err
}
registryConfig, err := loadRegistryConfiguration(sys)
if err != nil {
return "", err
}
client, err := newDockerClientFromRef(sys, dr, registryConfig, false, "pull")
if err != nil {
return "", fmt.Errorf("failed to create client: %w", err)
}
defer client.Close()
path := fmt.Sprintf(manifestPath, reference.Path(dr.ref), tagOrDigest)
headers := map[string][]string{
"Accept": manifest.DefaultRequestedManifestMIMETypes,
}
res, err := client.makeRequest(ctx, http.MethodHead, path, headers, nil, v2Auth, nil)
if err != nil {
return "", err
}
defer res.Body.Close()
if res.StatusCode != http.StatusOK {
return "", fmt.Errorf("reading digest %s in %s: %w", tagOrDigest, dr.ref.Name(), registryHTTPResponseToError(res))
}
dig, err := digest.Parse(res.Header.Get("Docker-Content-Digest"))
if err != nil {
return "", err
}
return dig, nil
}
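GetDigest is similarly callable on its own, e.g. to check a remote digest cheaply via a HEAD request; a hedged sketch using the same hypothetical repository:
func exampleGetDigest() {
	ref, _ := docker.ParseReference("//quay.io/podman/hello")
	dig, err := docker.GetDigest(context.Background(), nil, ref) // a nil SystemContext uses defaults
	if err == nil {
		fmt.Println(dig) // e.g. "sha256:…"
	}
}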


@@ -0,0 +1,937 @@
package docker
import (
"bytes"
"context"
"crypto/rand"
"encoding/json"
"errors"
"fmt"
"io"
"maps"
"net/http"
"net/url"
"os"
"path/filepath"
"slices"
"strings"
"github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/sirupsen/logrus"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/blobinfocache"
"go.podman.io/image/v5/internal/imagedestination/impl"
"go.podman.io/image/v5/internal/imagedestination/stubs"
"go.podman.io/image/v5/internal/iolimits"
"go.podman.io/image/v5/internal/private"
"go.podman.io/image/v5/internal/putblobdigest"
"go.podman.io/image/v5/internal/set"
"go.podman.io/image/v5/internal/signature"
"go.podman.io/image/v5/internal/streamdigest"
"go.podman.io/image/v5/internal/uploadreader"
"go.podman.io/image/v5/manifest"
"go.podman.io/image/v5/pkg/blobinfocache/none"
compressiontypes "go.podman.io/image/v5/pkg/compression/types"
"go.podman.io/image/v5/types"
)
type dockerImageDestination struct {
impl.Compat
impl.PropertyMethodsInitialize
stubs.IgnoresOriginalOCIConfig
stubs.NoPutBlobPartialInitialize
ref dockerReference
c *dockerClient
// State
manifestDigest digest.Digest // or "" if not yet known.
}
// newImageDestination creates a new ImageDestination for the specified image reference.
func newImageDestination(sys *types.SystemContext, ref dockerReference) (private.ImageDestination, error) {
registryConfig, err := loadRegistryConfiguration(sys)
if err != nil {
return nil, err
}
c, err := newDockerClientFromRef(sys, ref, registryConfig, true, "pull,push")
if err != nil {
return nil, err
}
mimeTypes := []string{
imgspecv1.MediaTypeImageManifest,
manifest.DockerV2Schema2MediaType,
imgspecv1.MediaTypeImageIndex,
manifest.DockerV2ListMediaType,
}
if c.sys == nil || !c.sys.DockerDisableDestSchema1MIMETypes {
mimeTypes = append(mimeTypes, manifest.DockerV2Schema1SignedMediaType, manifest.DockerV2Schema1MediaType)
}
dest := &dockerImageDestination{
PropertyMethodsInitialize: impl.PropertyMethods(impl.Properties{
SupportedManifestMIMETypes: mimeTypes,
DesiredLayerCompression: types.Compress,
MustMatchRuntimeOS: false,
IgnoresEmbeddedDockerReference: false, // We do want the manifest updated; older registry versions refuse manifests if the embedded reference does not match.
HasThreadSafePutBlob: true,
}),
NoPutBlobPartialInitialize: stubs.NoPutBlobPartial(ref),
ref: ref,
c: c,
}
dest.Compat = impl.AddCompat(dest)
return dest, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to the user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
func (d *dockerImageDestination) Reference() types.ImageReference {
return d.ref
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *dockerImageDestination) Close() error {
return d.c.Close()
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (d *dockerImageDestination) SupportsSignatures(ctx context.Context) error {
if err := d.c.detectProperties(ctx); err != nil {
return err
}
switch {
case d.c.supportsSignatures:
return nil
case d.c.signatureBase != nil:
return nil
default:
return errors.New("Internal error: X-Registry-Supports-Signatures extension not supported, and lookaside should not be empty configuration")
}
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
func (d *dockerImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }
func (c *sizeCounter) Write(p []byte) (n int, err error) {
c.size += int64(len(p))
return len(p), nil
}
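The counter is used below by pairing it with io.TeeReader, so the byte count is recorded as the upload consumes the stream; a generic illustration (the io, strings, and fmt imports are assumed):
var sc sizeCounter
r := io.TeeReader(strings.NewReader("some blob data"), &sc)
_, _ = io.Copy(io.Discard, r) // consume the stream, as the registry upload would
fmt.Println(sc.size)          // 14 — every byte read from r was also counted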
// PutBlobWithOptions writes contents of stream and returns data representing the result.
// inputInfo.Digest can be optionally provided if known; if provided, and stream is read to the end without error, the digest MUST match the stream contents.
// inputInfo.Size is the expected length of stream, if known.
// inputInfo.MediaType describes the blob format, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlobWithOptions MUST 1) fail, and 2) delete any data stored so far.
func (d *dockerImageDestination) PutBlobWithOptions(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, options private.PutBlobOptions) (private.UploadedBlob, error) {
// If requested, precompute the blob digest to prevent uploading layers that already exist on the registry.
// This functionality is particularly useful when BlobInfoCache has not been populated with compressed digests,
// the source blob is uncompressed, and the destination blob is being compressed "on the fly".
if inputInfo.Digest == "" && d.c.sys != nil && d.c.sys.DockerRegistryPushPrecomputeDigests {
logrus.Debugf("Precomputing digest layer for %s", reference.Path(d.ref.ref))
streamCopy, cleanup, err := streamdigest.ComputeBlobInfo(d.c.sys, stream, &inputInfo)
if err != nil {
return private.UploadedBlob{}, err
}
defer cleanup()
stream = streamCopy
}
if inputInfo.Digest != "" {
// This should not really be necessary, at least the copy code calls TryReusingBlob automatically.
// Still, we need to check, if only because the "initiate upload" endpoint does not have a documented "blob already exists" return value.
haveBlob, reusedInfo, err := d.tryReusingExactBlob(ctx, inputInfo, options.Cache)
if err != nil {
return private.UploadedBlob{}, err
}
if haveBlob {
return private.UploadedBlob{Digest: reusedInfo.Digest, Size: reusedInfo.Size}, nil
}
}
// FIXME? Chunked upload, progress reporting, etc.
uploadPath := fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref))
logrus.Debugf("Uploading %s", uploadPath)
res, err := d.c.makeRequest(ctx, http.MethodPost, uploadPath, nil, nil, v2Auth, nil)
if err != nil {
return private.UploadedBlob{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusAccepted {
logrus.Debugf("Error initiating layer upload, response %#v", *res)
return private.UploadedBlob{}, fmt.Errorf("initiating layer upload to %s in %s: %w", uploadPath, d.c.registry, registryHTTPResponseToError(res))
}
uploadLocation, err := res.Location()
if err != nil {
return private.UploadedBlob{}, fmt.Errorf("determining upload URL: %w", err)
}
digester, stream := putblobdigest.DigestIfCanonicalUnknown(stream, inputInfo)
sizeCounter := &sizeCounter{}
stream = io.TeeReader(stream, sizeCounter)
uploadLocation, err = func() (*url.URL, error) { // A scope for defer
uploadReader := uploadreader.NewUploadReader(stream)
// This error text should never be user-visible; we terminate only after makeRequestToResolvedURL
// returns, so there isn't a way for the error text to be provided to any of our callers.
defer uploadReader.Terminate(errors.New("Reading data from an already terminated upload"))
res, err = d.c.makeRequestToResolvedURL(ctx, http.MethodPatch, uploadLocation, map[string][]string{"Content-Type": {"application/octet-stream"}}, uploadReader, inputInfo.Size, v2Auth, nil)
if err != nil {
logrus.Debugf("Error uploading layer chunked %v", err)
return nil, err
}
defer res.Body.Close()
if !successStatus(res.StatusCode) {
return nil, fmt.Errorf("uploading layer chunked: %w", registryHTTPResponseToError(res))
}
uploadLocation, err := res.Location()
if err != nil {
return nil, fmt.Errorf("determining upload URL: %w", err)
}
return uploadLocation, nil
}()
if err != nil {
return private.UploadedBlob{}, err
}
blobDigest := digester.Digest()
// FIXME: DELETE uploadLocation on failure (does not really work in docker/distribution servers, which incorrectly require the "delete" action in the token's scope)
locationQuery := uploadLocation.Query()
locationQuery.Set("digest", blobDigest.String())
uploadLocation.RawQuery = locationQuery.Encode()
res, err = d.c.makeRequestToResolvedURL(ctx, http.MethodPut, uploadLocation, map[string][]string{"Content-Type": {"application/octet-stream"}}, nil, -1, v2Auth, nil)
if err != nil {
return private.UploadedBlob{}, err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
logrus.Debugf("Error uploading layer, response %#v", *res)
return private.UploadedBlob{}, fmt.Errorf("uploading layer to %s: %w", uploadLocation, registryHTTPResponseToError(res))
}
logrus.Debugf("Upload of layer %s complete", blobDigest)
options.Cache.RecordKnownLocation(d.ref.Transport(), bicTransportScope(d.ref), blobDigest, newBICLocationReference(d.ref))
return private.UploadedBlob{Digest: blobDigest, Size: sizeCounter.size}, nil
}
// blobExists returns true iff repo contains a blob with digest, and if so, also its size.
// If the destination does not contain the blob, or it is unknown, blobExists ordinarily returns (false, -1, nil);
// it returns a non-nil error only on an unexpected failure.
func (d *dockerImageDestination) blobExists(ctx context.Context, repo reference.Named, digest digest.Digest, extraScope *authScope) (bool, int64, error) {
if err := digest.Validate(); err != nil { // Make sure digest.String() does not contain any unexpected characters
return false, -1, err
}
checkPath := fmt.Sprintf(blobsPath, reference.Path(repo), digest.String())
logrus.Debugf("Checking %s", checkPath)
res, err := d.c.makeRequest(ctx, http.MethodHead, checkPath, nil, nil, v2Auth, extraScope)
if err != nil {
return false, -1, err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusOK:
size, err := getBlobSize(res)
if err != nil {
return false, -1, fmt.Errorf("determining size of blob %s in %s: %w", digest, repo.Name(), err)
}
logrus.Debugf("... already exists")
return true, size, nil
case http.StatusUnauthorized:
logrus.Debugf("... not authorized")
return false, -1, fmt.Errorf("checking whether a blob %s exists in %s: %w", digest, repo.Name(), registryHTTPResponseToError(res))
case http.StatusNotFound:
logrus.Debugf("... not present")
return false, -1, nil
default:
return false, -1, fmt.Errorf("checking whether a blob %s exists in %s: %w", digest, repo.Name(), registryHTTPResponseToError(res))
}
}
// mountBlob tries to mount blob srcDigest from srcRepo to the current destination.
func (d *dockerImageDestination) mountBlob(ctx context.Context, srcRepo reference.Named, srcDigest digest.Digest, extraScope *authScope) error {
u := url.URL{
Path: fmt.Sprintf(blobUploadPath, reference.Path(d.ref.ref)),
RawQuery: url.Values{
"mount": {srcDigest.String()},
"from": {reference.Path(srcRepo)},
}.Encode(),
}
logrus.Debugf("Trying to mount %s", u.Redacted())
res, err := d.c.makeRequest(ctx, http.MethodPost, u.String(), nil, nil, v2Auth, extraScope)
if err != nil {
return err
}
defer res.Body.Close()
switch res.StatusCode {
case http.StatusCreated:
logrus.Debugf("... mount OK")
return nil
case http.StatusAccepted:
// Oops, the mount was ignored - either the registry does not support that yet, or the blob does not exist; the registry has started an ordinary upload process.
// Abort, and let the ultimate caller do an upload when it's ready, instead.
// NOTE: This does not really work in docker/distribution servers, which incorrectly require the "delete" action in the token's scope, and is thus entirely untested.
uploadLocation, err := res.Location()
if err != nil {
return fmt.Errorf("determining upload URL after a mount attempt: %w", err)
}
logrus.Debugf("... started an upload instead of mounting, trying to cancel at %s", uploadLocation.Redacted())
res2, err := d.c.makeRequestToResolvedURL(ctx, http.MethodDelete, uploadLocation, nil, nil, -1, v2Auth, extraScope)
if err != nil {
logrus.Debugf("Error trying to cancel an inadvertent upload: %s", err)
} else {
defer res2.Body.Close()
if res2.StatusCode != http.StatusNoContent {
logrus.Debugf("Error trying to cancel an inadvertent upload, status %s", http.StatusText(res.StatusCode))
}
}
// Anyway, if canceling the upload fails, ignore it and return the more important error:
return fmt.Errorf("Mounting %s from %s to %s started an upload instead", srcDigest, srcRepo.Name(), d.ref.ref.Name())
default:
logrus.Debugf("Error mounting, response %#v", *res)
return fmt.Errorf("mounting %s from %s to %s: %w", srcDigest, srcRepo.Name(), d.ref.ref.Name(), registryHTTPResponseToError(res))
}
}
// tryReusingExactBlob is a subset of TryReusingBlob which _only_ looks for exactly the specified
// blob in the current repository, with no cross-repo reuse or mounting; cache may be updated, it is not read.
// The caller must ensure info.Digest is set.
func (d *dockerImageDestination) tryReusingExactBlob(ctx context.Context, info types.BlobInfo, cache blobinfocache.BlobInfoCache2) (bool, private.ReusedBlob, error) {
exists, size, err := d.blobExists(ctx, d.ref.ref, info.Digest, nil)
if err != nil {
return false, private.ReusedBlob{}, err
}
if exists {
cache.RecordKnownLocation(d.ref.Transport(), bicTransportScope(d.ref), info.Digest, newBICLocationReference(d.ref))
return true, private.ReusedBlob{Digest: info.Digest, Size: size}, nil
}
return false, private.ReusedBlob{}, nil
}
func optionalCompressionName(algo *compressiontypes.Algorithm) string {
if algo != nil {
return algo.Name()
}
return "nil"
}
// TryReusingBlobWithOptions checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
// If the blob has been successfully reused, returns (true, info, nil).
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
func (d *dockerImageDestination) TryReusingBlobWithOptions(ctx context.Context, info types.BlobInfo, options private.TryReusingBlobOptions) (bool, private.ReusedBlob, error) {
if info.Digest == "" {
return false, private.ReusedBlob{}, errors.New("Can not check for a blob with unknown digest")
}
originalCandidateKnownToBeMissing := false
if impl.OriginalCandidateMatchesTryReusingBlobOptions(options) {
// First, check whether the blob happens to already exist at the destination.
haveBlob, reusedInfo, err := d.tryReusingExactBlob(ctx, info, options.Cache)
if err != nil {
return false, private.ReusedBlob{}, err
}
if haveBlob {
return true, reusedInfo, nil
}
originalCandidateKnownToBeMissing = true
} else {
logrus.Debugf("Ignoring exact blob match, compression %s does not match required %s or MIME types %#v",
optionalCompressionName(options.OriginalCompression), optionalCompressionName(options.RequiredCompression), options.PossibleManifestFormats)
// We can get here with a blob detected to be zstd when the user wants a zstd:chunked.
// In that case we keep originalCandidateKnownToBeMissing = false, so that if we find
// a BIC entry for this blob, we do use that entry and return a zstd:chunked entry
// with the BIC's annotations.
// This is not quite correct, it only works if the BIC also contains an acceptable _location_.
// Ideally, we could look up just the compression algorithm/annotations for info.digest,
// and use it even if no location candidate exists and the original candidate is present.
}
// Then try reusing blobs from other locations.
candidates := options.Cache.CandidateLocations2(d.ref.Transport(), bicTransportScope(d.ref), info.Digest, blobinfocache.CandidateLocations2Options{
CanSubstitute: options.CanSubstitute,
PossibleManifestFormats: options.PossibleManifestFormats,
RequiredCompression: options.RequiredCompression,
})
for _, candidate := range candidates {
var candidateRepo reference.Named
if !candidate.UnknownLocation {
var err error
candidateRepo, err = parseBICLocationReference(candidate.Location)
if err != nil {
logrus.Debugf("Error parsing BlobInfoCache location reference: %s", err)
continue
}
if candidate.CompressionAlgorithm != nil {
logrus.Debugf("Trying to reuse blob with cached digest %s compressed with %s in destination repo %s", candidate.Digest.String(), candidate.CompressionAlgorithm.Name(), candidateRepo.Name())
} else {
logrus.Debugf("Trying to reuse blob with cached digest %s in destination repo %s", candidate.Digest.String(), candidateRepo.Name())
}
// Sanity checks:
if reference.Domain(candidateRepo) != reference.Domain(d.ref.ref) {
// OCI distribution spec 1.1 allows mounting blobs without specifying the source repo
// (the "from" parameter); in that case we might try to use these candidates as well.
//
// OTOH that would mean we can't do the “blobExists” check, and if there is no match
// we could get an upload request that we would have to cancel.
logrus.Debugf("... Internal error: domain %s does not match destination %s", reference.Domain(candidateRepo), reference.Domain(d.ref.ref))
continue
}
} else {
if candidate.CompressionAlgorithm != nil {
logrus.Debugf("Trying to reuse blob with cached digest %s compressed with %s with no location match, checking current repo", candidate.Digest.String(), candidate.CompressionAlgorithm.Name())
} else {
logrus.Debugf("Trying to reuse blob with cached digest %s in destination repo with no location match, checking current repo", candidate.Digest.String())
}
// This digest is a known variant of this blob but we don't
// have a recorded location in this registry, let's try looking
// for it in the current repo.
candidateRepo = reference.TrimNamed(d.ref.ref)
}
if originalCandidateKnownToBeMissing &&
candidateRepo.Name() == d.ref.ref.Name() && candidate.Digest == info.Digest {
logrus.Debug("... Already tried the primary destination")
continue
}
// Whatever happens here, don't abort the entire operation. It's likely we just don't have permissions, and if it is a critical network error, we will find out soon enough anyway.
// Checking candidateRepo, and mounting from it, requires an
// expanded token scope.
extraScope := &authScope{
resourceType: "repository",
remoteName: reference.Path(candidateRepo),
actions: "pull",
}
// This existence check is not, strictly speaking, necessary: We only _really_ need it to get the blob size, and we could record that in the cache instead.
// But a "failed" d.mountBlob currently leaves around an unterminated server-side upload, which we would try to cancel.
// So, without this existence check, it would be 1 request on success, 2 requests on failure; with it, it is 2 requests on success, 1 request on failure.
// On success we avoid the actual costly upload; so, in a sense, the success case is "free", but failures are always costly.
// Even worse, docker/distribution does not actually reasonably implement canceling uploads
// (it would require a "delete" action in the token, and Quay does not give that to anyone, so we can't ask);
// so, be a nice client and don't create unnecessary upload sessions on the server.
exists, size, err := d.blobExists(ctx, candidateRepo, candidate.Digest, extraScope)
if err != nil {
logrus.Debugf("... Failed: %v", err)
continue
}
if !exists {
// FIXME? Should we drop the blob from cache here (and elsewhere?)?
continue // logrus.Debug() already happened in blobExists
}
if candidateRepo.Name() != d.ref.ref.Name() {
if err := d.mountBlob(ctx, candidateRepo, candidate.Digest, extraScope); err != nil {
logrus.Debugf("... Mount failed: %v", err)
continue
}
}
options.Cache.RecordKnownLocation(d.ref.Transport(), bicTransportScope(d.ref), candidate.Digest, newBICLocationReference(d.ref))
return true, private.ReusedBlob{
Digest: candidate.Digest,
Size: size,
CompressionOperation: candidate.CompressionOperation,
CompressionAlgorithm: candidate.CompressionAlgorithm,
CompressionAnnotations: candidate.CompressionAnnotations,
}, nil
}
return false, private.ReusedBlob{}, nil
}
// PutManifest writes manifest to the destination.
// When the primary manifest is a manifest list, if instanceDigest is nil, we're saving the list
// itself, else instanceDigest contains a digest of the specific manifest instance to overwrite the
// manifest for; when the primary manifest is not a manifest list, instanceDigest should always be nil.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
// If the destination is in principle available, but refuses this manifest type (e.g. it does not recognize the schema)
// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
func (d *dockerImageDestination) PutManifest(ctx context.Context, m []byte, instanceDigest *digest.Digest) error {
var refTail string
// If d.ref.isUnknownDigest=true, then we push without a tag, so get the
// digest that will be used
if d.ref.isUnknownDigest {
digest, err := manifest.Digest(m)
if err != nil {
return err
}
refTail = digest.String()
} else if instanceDigest != nil {
// If the instanceDigest is provided, then use it as the refTail, because the reference,
// whether it includes a tag or a digest, refers to the list as a whole, and not this
// particular instance.
refTail = instanceDigest.String()
// Double-check that the manifest we've been given matches the digest we've been given.
// This also validates the format of instanceDigest.
matches, err := manifest.MatchesDigest(m, *instanceDigest)
if err != nil {
return fmt.Errorf("digesting manifest in PutManifest: %w", err)
}
if !matches {
manifestDigest, merr := manifest.Digest(m)
if merr != nil {
return fmt.Errorf("Attempted to PutManifest using an explicitly specified digest (%q) that didn't match the manifest's digest: %w", instanceDigest.String(), merr)
}
return fmt.Errorf("Attempted to PutManifest using an explicitly specified digest (%q) that didn't match the manifest's digest (%q)", instanceDigest.String(), manifestDigest.String())
}
} else {
// Compute the digest of the main manifest, or the list if it's a list, so that we
// have a digest value to use if we're asked to save a signature for the manifest.
digest, err := manifest.Digest(m)
if err != nil {
return err
}
d.manifestDigest = digest
// The refTail should be either a digest (which we expect to match the value we just
// computed) or a tag name.
refTail, err = d.ref.tagOrDigest()
if err != nil {
return err
}
}
return d.uploadManifest(ctx, m, refTail)
}
// uploadManifest writes manifest to tagOrDigest.
func (d *dockerImageDestination) uploadManifest(ctx context.Context, m []byte, tagOrDigest string) error {
path := fmt.Sprintf(manifestPath, reference.Path(d.ref.ref), tagOrDigest)
headers := map[string][]string{}
mimeType := manifest.GuessMIMEType(m)
if mimeType != "" {
headers["Content-Type"] = []string{mimeType}
}
res, err := d.c.makeRequest(ctx, http.MethodPut, path, headers, bytes.NewReader(m), v2Auth, nil)
if err != nil {
return err
}
defer res.Body.Close()
if !successStatus(res.StatusCode) {
rawErr := registryHTTPResponseToError(res)
err := fmt.Errorf("uploading manifest %s to %s: %w", tagOrDigest, d.ref.ref.Name(), rawErr)
if isManifestInvalidError(rawErr) {
err = types.ManifestTypeRejectedError{Err: err}
}
return err
}
// An HTTP server may not be a registry at all, and just return 200 OK to everything
// (in particular that can fairly easily happen after tearing down a website and
// replacing it with a global 302 redirect to a new website, completely ignoring the
// path in the request); in that case we could “succeed” uploading a whole image.
// With docker/distribution we could rely on a Docker-Content-Digest header being present
// (because docker/distribution/registry/client has been failing uploads if it was missing),
// but that has been defined as explicitly optional by
// https://github.com/opencontainers/distribution-spec/blob/ec90a2af85fe4d612cf801e1815b95bfa40ae72b/spec.md#legacy-docker-support-http-headers
// So, just note the missing header in a debug log.
if v := res.Header.Values("Docker-Content-Digest"); len(v) == 0 {
logrus.Debugf("Manifest upload response didnt contain a Docker-Content-Digest header, it might not be a container registry")
}
return nil
}
// successStatus returns true if the argument is a successful HTTP response
// code (in the range 200 - 399 inclusive).
func successStatus(status int) bool {
return status >= 200 && status <= 399
}
// isManifestInvalidError returns true iff err from registryHTTPResponseToError is a “manifest invalid” error.
func isManifestInvalidError(err error) bool {
var ec errcode.ErrorCoder
if ok := errors.As(err, &ec); !ok {
return false
}
switch ec.ErrorCode() {
// ErrorCodeManifestInvalid is returned by OpenShift with acceptschema2=false.
case v2.ErrorCodeManifestInvalid:
return true
// ErrorCodeTagInvalid is returned by docker/distribution (at least as of commit ec87e9b6971d831f0eff752ddb54fb64693e51cd)
// when uploading to a tag (because it can't find a matching tag inside the manifest)
case v2.ErrorCodeTagInvalid:
return true
// ErrorCodeUnsupported with 'Invalid JSON syntax' is returned by AWS ECR when
// uploading an OCI manifest that is (correctly, according to the spec) missing
// a top-level media type. See libpod issue #1719
// FIXME: remove this case when ECR behavior is fixed
case errcode.ErrorCodeUnsupported:
return strings.Contains(err.Error(), "Invalid JSON syntax")
default:
return false
}
}
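A same-package sketch of how the errcode matching sees through error wrapping (the message is hypothetical; fmt and the v2 package are already imported in this file):
func exampleManifestInvalid() {
	raw := v2.ErrorCodeManifestInvalid.WithMessage("schema rejected")
	err := fmt.Errorf("uploading manifest: %w", raw)
	fmt.Println(isManifestInvalidError(err)) // true — errors.As finds the errcode.ErrorCoder
}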
// PutSignaturesWithFormat writes a set of signatures to the destination.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to write or overwrite the signatures for
// (when the primary manifest is a manifest list); this should always be nil if the primary manifest is not a manifest list.
// MUST be called after PutManifest (signatures may reference manifest contents).
func (d *dockerImageDestination) PutSignaturesWithFormat(ctx context.Context, signatures []signature.Signature, instanceDigest *digest.Digest) error {
if instanceDigest == nil {
if d.manifestDigest == "" {
// This shouldn't happen, ImageDestination users are required to call PutManifest before PutSignatures
return errors.New("Unknown manifest digest, can't add signatures")
}
instanceDigest = &d.manifestDigest
}
sigstoreSignatures := []signature.Sigstore{}
otherSignatures := []signature.Signature{}
for _, sig := range signatures {
if sigstoreSig, ok := sig.(signature.Sigstore); ok {
sigstoreSignatures = append(sigstoreSignatures, sigstoreSig)
} else {
otherSignatures = append(otherSignatures, sig)
}
}
// Only write sigstore signatures to sigstore attachments. We _could_ store them to lookaside
// instead, but that would probably be rather surprising.
// FIXME: So should we enable sigstore attachments in all cases? Or write in all cases, but opt-in to read?
if len(sigstoreSignatures) != 0 {
if err := d.putSignaturesToSigstoreAttachments(ctx, sigstoreSignatures, *instanceDigest); err != nil {
return err
}
}
if len(otherSignatures) != 0 {
if err := d.c.detectProperties(ctx); err != nil {
return err
}
switch {
case d.c.supportsSignatures:
if err := d.putSignaturesToAPIExtension(ctx, otherSignatures, *instanceDigest); err != nil {
return err
}
case d.c.signatureBase != nil:
if err := d.putSignaturesToLookaside(otherSignatures, *instanceDigest); err != nil {
return err
}
default:
return errors.New("Internal error: X-Registry-Supports-Signatures extension not supported, and the lookaside configuration is unexpectedly empty")
}
}
return nil
}
// putSignaturesToLookaside implements PutSignaturesWithFormat() from the lookaside location configured in s.c.signatureBase,
// which is not nil, for a manifest with manifestDigest.
func (d *dockerImageDestination) putSignaturesToLookaside(signatures []signature.Signature, manifestDigest digest.Digest) error {
// FIXME? This overwrites files one at a time, definitely not atomic.
// A failure when updating signatures with a reordered copy could lose some of them.
// Skip dealing with the manifest digest if not necessary.
if len(signatures) == 0 {
return nil
}
// NOTE: Keep this in sync with docs/signature-protocols.md!
for i, signature := range signatures {
sigURL, err := lookasideStorageURL(d.c.signatureBase, manifestDigest, i)
if err != nil {
return err
}
if err := d.putOneSignature(sigURL, signature); err != nil {
return err
}
}
// Remove any other signatures, if present.
// We stop at the first missing signature; if a previous deleting loop aborted
// prematurely, this may not clean up all of them, but one missing signature
// is enough for dockerImageSource to stop looking for other signatures, so that
// is sufficient.
for i := len(signatures); ; i++ {
sigURL, err := lookasideStorageURL(d.c.signatureBase, manifestDigest, i)
if err != nil {
return err
}
missing, err := d.c.deleteOneSignature(sigURL)
if err != nil {
return err
}
if missing {
break
}
}
return nil
}
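// exampleLookasideLayout is an illustrative sketch, not part of the upstream file:
// it spells out the dense per-signature naming that the write and delete loops above
// rely on, assuming the URL layout described in docs/signature-protocols.md
// (<base>/<algorithm>=<hex>/signature-<index+1>; the exact format is owned by
// lookasideStorageURL). Readers stop at the first missing index, which is why the
// delete loop above only needs to find one gap.
func exampleLookasideLayout(base string, manifestDigest digest.Digest, count int) []string {
	urls := make([]string, 0, count)
	for i := 0; i < count; i++ {
		// Indices are 0-based in the loops above, 1-based in the URL path.
		urls = append(urls, fmt.Sprintf("%s/%s=%s/signature-%d", base, manifestDigest.Algorithm(), manifestDigest.Encoded(), i+1))
	}
	return urls
}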
// putOneSignature stores sig to sigURL.
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (d *dockerImageDestination) putOneSignature(sigURL *url.URL, sig signature.Signature) error {
switch sigURL.Scheme {
case "file":
logrus.Debugf("Writing to %s", sigURL.Path)
err := os.MkdirAll(filepath.Dir(sigURL.Path), 0755)
if err != nil {
return err
}
blob, err := signature.Blob(sig)
if err != nil {
return err
}
err = os.WriteFile(sigURL.Path, blob, 0644)
if err != nil {
return err
}
return nil
case "http", "https":
return fmt.Errorf("Writing directly to a %s lookaside %s is not supported. Configure a lookaside-staging: location", sigURL.Scheme, sigURL.Redacted())
default:
return fmt.Errorf("Unsupported scheme when writing signature to %s", sigURL.Redacted())
}
}
func (d *dockerImageDestination) putSignaturesToSigstoreAttachments(ctx context.Context, signatures []signature.Sigstore, manifestDigest digest.Digest) error {
if !d.c.useSigstoreAttachments {
return errors.New("writing sigstore attachments is disabled by configuration")
}
ociManifest, err := d.c.getSigstoreAttachmentManifest(ctx, d.ref, manifestDigest)
if err != nil {
return err
}
var ociConfig imgspecv1.Image // Most fields empty by default
if ociManifest == nil {
ociManifest = manifest.OCI1FromComponents(imgspecv1.Descriptor{
MediaType: imgspecv1.MediaTypeImageConfig,
Digest: "", // We will fill this in later.
Size: 0,
}, nil)
ociConfig.RootFS.Type = "layers"
} else {
logrus.Debugf("Fetching sigstore attachment config %s", ociManifest.Config.Digest.String())
// We don't benefit from a real BlobInfoCache here because we never try to reuse/mount configs.
configBlob, err := d.c.getOCIDescriptorContents(ctx, d.ref, ociManifest.Config, iolimits.MaxConfigBodySize,
none.NoCache)
if err != nil {
return err
}
if err := json.Unmarshal(configBlob, &ociConfig); err != nil {
return fmt.Errorf("parsing sigstore attachment config %s in %s: %w", ociManifest.Config.Digest.String(),
d.ref.ref.Name(), err)
}
}
// Clone the slice so that we can safely append to ociManifest.Layers without adding a remote dependency on the code that created it.
ociManifest.Layers = slices.Clone(ociManifest.Layers)
// We don't need to do the same for ociConfig.RootFS.DiffIDs because we have created it empty ourselves, and json.Unmarshal is documented to append() to
// the slice in the original object (or in a newly allocated object).
for _, sig := range signatures {
mimeType := sig.UntrustedMIMEType()
payloadBlob := sig.UntrustedPayload()
annotations := sig.UntrustedAnnotations()
alreadyOnRegistry := false
for _, layer := range ociManifest.Layers {
if layerMatchesSigstoreSignature(layer, mimeType, payloadBlob, annotations) {
logrus.Debugf("Signature with digest %s already exists on the registry", layer.Digest.String())
alreadyOnRegistry = true
break
}
}
if alreadyOnRegistry {
continue
}
// We don't benefit from a real BlobInfoCache here because we never try to reuse/mount attachment payloads.
// That might eventually need to change if payloads grow to be not just signatures, but something
// significantly large.
sigDesc, err := d.putBlobBytesAsOCI(ctx, payloadBlob, mimeType, private.PutBlobOptions{
Cache: none.NoCache,
IsConfig: false,
EmptyLayer: false,
LayerIndex: nil,
})
if err != nil {
return err
}
sigDesc.Annotations = annotations
ociManifest.Layers = append(ociManifest.Layers, sigDesc)
ociConfig.RootFS.DiffIDs = append(ociConfig.RootFS.DiffIDs, sigDesc.Digest)
logrus.Debugf("Adding new signature, digest %s", sigDesc.Digest.String())
}
configBlob, err := json.Marshal(ociConfig)
if err != nil {
return err
}
logrus.Debugf("Uploading updated sigstore attachment config")
// We don't benefit from a real BlobInfoCache here because we never try to reuse/mount configs.
configDesc, err := d.putBlobBytesAsOCI(ctx, configBlob, imgspecv1.MediaTypeImageConfig, private.PutBlobOptions{
Cache: none.NoCache,
IsConfig: true,
EmptyLayer: false,
LayerIndex: nil,
})
if err != nil {
return err
}
ociManifest.Config = configDesc
manifestBlob, err := ociManifest.Serialize()
if err != nil {
return err
}
attachmentTag, err := sigstoreAttachmentTag(manifestDigest)
if err != nil {
return err
}
logrus.Debugf("Uploading sigstore attachment manifest")
return d.uploadManifest(ctx, manifestBlob, attachmentTag)
}
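// exampleAttachmentTag is an illustrative sketch, not part of the upstream file:
// sigstore attachments are addressed by tag rather than by digest, so the manifest
// uploaded above only becomes discoverable through a tag derived from the image's
// manifest digest (the cosign convention replaces ":" with "-" and appends ".sig";
// treating that as the format here is an assumption, sigstoreAttachmentTag is the
// authority).
func exampleAttachmentTag(manifestDigest digest.Digest) {
	tag, err := sigstoreAttachmentTag(manifestDigest)
	if err != nil {
		logrus.Errorf("invalid manifest digest %q: %v", manifestDigest.String(), err)
		return
	}
	logrus.Debugf("sigstore attachment for %s is stored at tag %q", manifestDigest.String(), tag)
}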
func layerMatchesSigstoreSignature(layer imgspecv1.Descriptor, mimeType string,
payloadBlob []byte, annotations map[string]string) bool {
if layer.MediaType != mimeType ||
layer.Size != int64(len(payloadBlob)) ||
// This is not quite correct, we should use the layer's digest algorithm.
// But right now we don't want to deal with corner cases like bad digest formats
// or unavailable algorithms; in the worst case we end up with duplicate signature
// entries.
layer.Digest.String() != digest.FromBytes(payloadBlob).String() ||
!maps.Equal(layer.Annotations, annotations) {
return false
}
return true
}
// putBlobBytesAsOCI uploads a blob with the specified contents, and returns an appropriate
// OCI descriptor.
func (d *dockerImageDestination) putBlobBytesAsOCI(ctx context.Context, contents []byte, mimeType string, options private.PutBlobOptions) (imgspecv1.Descriptor, error) {
blobDigest := digest.FromBytes(contents)
info, err := d.PutBlobWithOptions(ctx, bytes.NewReader(contents),
types.BlobInfo{
Digest: blobDigest,
Size: int64(len(contents)),
MediaType: mimeType,
}, options)
if err != nil {
return imgspecv1.Descriptor{}, fmt.Errorf("writing blob %s: %w", blobDigest.String(), err)
}
return imgspecv1.Descriptor{
MediaType: mimeType,
Digest: info.Digest,
Size: info.Size,
}, nil
}
// deleteOneSignature deletes a signature from sigURL, if it exists.
// If it successfully determines that the signature does not exist, returns (true, nil).
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (c *dockerClient) deleteOneSignature(sigURL *url.URL) (missing bool, err error) {
switch sigURL.Scheme {
case "file":
logrus.Debugf("Deleting %s", sigURL.Path)
err := os.Remove(sigURL.Path)
if err != nil && os.IsNotExist(err) {
return true, nil
}
return false, err
case "http", "https":
return false, fmt.Errorf("Writing directly to a %s lookaside %s is not supported. Configure a lookaside-staging: location", sigURL.Scheme, sigURL.Redacted())
default:
return false, fmt.Errorf("Unsupported scheme when deleting signature from %s", sigURL.Redacted())
}
}
// putSignaturesToAPIExtension implements PutSignaturesWithFormat() using the X-Registry-Supports-Signatures API extension,
// for a manifest with manifestDigest.
func (d *dockerImageDestination) putSignaturesToAPIExtension(ctx context.Context, signatures []signature.Signature, manifestDigest digest.Digest) error {
// Skip dealing with the manifest digest, or reading the old state, if not necessary.
if len(signatures) == 0 {
return nil
}
// Because image signatures are a shared resource in Atomic Registry, the default upload
// always adds signatures. Eventually we should also allow removing signatures,
// but the X-Registry-Supports-Signatures API extension does not support that yet.
existingSignatures, err := d.c.getExtensionsSignatures(ctx, d.ref, manifestDigest)
if err != nil {
return err
}
existingSigNames := set.New[string]()
for _, sig := range existingSignatures.Signatures {
existingSigNames.Add(sig.Name)
}
for _, newSigWithFormat := range signatures {
newSigSimple, ok := newSigWithFormat.(signature.SimpleSigning)
if !ok {
return signature.UnsupportedFormatError(newSigWithFormat)
}
newSig := newSigSimple.UntrustedSignature()
if slices.ContainsFunc(existingSignatures.Signatures, func(existingSig extensionSignature) bool {
return existingSig.Version == extensionSignatureSchemaVersion && existingSig.Type == extensionSignatureTypeAtomic && bytes.Equal(existingSig.Content, newSig)
}) {
continue
}
// The API expects us to invent a new unique name. This is racy, but hopefully good enough.
var signatureName string
for {
randBytes := make([]byte, 16)
n, err := rand.Read(randBytes)
if err != nil || n != 16 {
return fmt.Errorf("generating random signature name (read %d bytes): %w", n, err)
}
signatureName = fmt.Sprintf("%s@%032x", manifestDigest.String(), randBytes)
if !existingSigNames.Contains(signatureName) {
break
}
}
sig := extensionSignature{
Version: extensionSignatureSchemaVersion,
Name: signatureName,
Type: extensionSignatureTypeAtomic,
Content: newSig,
}
body, err := json.Marshal(sig)
if err != nil {
return err
}
// manifestDigest is known to be valid because it was not rejected by getExtensionsSignatures above.
path := fmt.Sprintf(extensionsSignaturePath, reference.Path(d.ref.ref), manifestDigest.String())
res, err := d.c.makeRequest(ctx, http.MethodPut, path, nil, bytes.NewReader(body), v2Auth, nil)
if err != nil {
return err
}
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
logrus.Debugf("Error uploading signature, status %d, %#v", res.StatusCode, res)
return fmt.Errorf("uploading signature to %s in %s: %w", path, d.c.registry, registryHTTPResponseToError(res))
}
}
return nil
}
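// Illustrative note, not part of the upstream file: the generated names above take the
// form "<manifest digest>@<32 hex characters>" (16 random bytes formatted with %032x),
// for example (hypothetical value)
//
//	sha256:1111…@000102030405060708090a0b0c0d0e0f
//
// and the inner loop simply retries until the name does not collide with an existing one.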
// CommitWithOptions marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before CommitWithOptions() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without CommitWithOptions() (i.e. rollback is allowed but not guaranteed)
func (d *dockerImageDestination) CommitWithOptions(ctx context.Context, options private.CommitOptions) error {
return nil
}

863
vendor/go.podman.io/image/v5/docker/docker_image_src.go generated vendored Normal file

@@ -0,0 +1,863 @@
package docker
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"math"
"mime"
"mime/multipart"
"net/http"
"net/url"
"os"
"os/exec"
"strings"
"sync"
digest "github.com/opencontainers/go-digest"
"github.com/sirupsen/logrus"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/imagesource/impl"
"go.podman.io/image/v5/internal/imagesource/stubs"
"go.podman.io/image/v5/internal/iolimits"
"go.podman.io/image/v5/internal/private"
"go.podman.io/image/v5/internal/signature"
"go.podman.io/image/v5/manifest"
"go.podman.io/image/v5/pkg/blobinfocache/none"
"go.podman.io/image/v5/pkg/sysregistriesv2"
"go.podman.io/image/v5/types"
"go.podman.io/storage/pkg/regexp"
)
// maxLookasideSignatures is an arbitrary limit for the total number of signatures we would try to read from a lookaside server,
// even if it were broken or malicious and it continued serving an enormous number of items.
const maxLookasideSignatures = 128
type dockerImageSource struct {
impl.Compat
impl.PropertyMethodsInitialize
impl.DoesNotAffectLayerInfosForCopy
stubs.ImplementsGetBlobAt
logicalRef dockerReference // The reference the user requested. This must satisfy !isUnknownDigest
physicalRef dockerReference // The actual reference we are accessing (possibly a mirror). This must satisfy !isUnknownDigest
c *dockerClient
// State
cachedManifest []byte // nil if not loaded yet
cachedManifestMIMEType string // Only valid if cachedManifest != nil
}
// newImageSource creates a new ImageSource for the specified image reference.
// The caller must call .Close() on the returned ImageSource.
// The caller must ensure !ref.isUnknownDigest.
func newImageSource(ctx context.Context, sys *types.SystemContext, ref dockerReference) (*dockerImageSource, error) {
if ref.isUnknownDigest {
return nil, fmt.Errorf("reading images from docker: reference %q without a tag or digest is not supported", ref.StringWithinTransport())
}
registryConfig, err := loadRegistryConfiguration(sys)
if err != nil {
return nil, err
}
registry, err := sysregistriesv2.FindRegistry(sys, ref.ref.Name())
if err != nil {
return nil, fmt.Errorf("loading registries configuration: %w", err)
}
if registry == nil {
// No configuration was found for the provided reference, so use the
// equivalent of a default configuration.
registry = &sysregistriesv2.Registry{
Endpoint: sysregistriesv2.Endpoint{
Location: ref.ref.String(),
},
Prefix: ref.ref.String(),
}
}
// Check all endpoints for the manifest availability. If we find one that does
// contain the image, it will be used for all future pull actions. Always try the
// non-mirror original location last; this both transparently handles the case
// of no mirrors configured, and ensures we return the error encountered when
// accessing the upstream location if all endpoints fail.
pullSources, err := registry.PullSourcesFromReference(ref.ref)
if err != nil {
return nil, err
}
type attempt struct {
ref reference.Named
err error
}
attempts := []attempt{}
for _, pullSource := range pullSources {
if sys != nil && sys.DockerLogMirrorChoice {
logrus.Infof("Trying to access %q", pullSource.Reference)
} else {
logrus.Debugf("Trying to access %q", pullSource.Reference)
}
s, err := newImageSourceAttempt(ctx, sys, ref, pullSource, registryConfig)
if err == nil {
return s, nil
}
logrus.Debugf("Accessing %q failed: %v", pullSource.Reference, err)
attempts = append(attempts, attempt{
ref: pullSource.Reference,
err: err,
})
}
switch len(attempts) {
case 0:
return nil, errors.New("Internal error: newImageSource returned without trying any endpoint")
case 1:
return nil, attempts[0].err // If no mirrors are used, perfectly preserve the error type and add no noise.
default:
// Don't just build a string, try to preserve the typed error.
primary := &attempts[len(attempts)-1]
extras := []string{}
for _, attempt := range attempts[:len(attempts)-1] {
// This is difficult to fit into a single-line string, when the error can contain arbitrary strings including any metacharacters we decide to use.
// The paired [] at least have some chance of being unambiguous.
extras = append(extras, fmt.Sprintf("[%s: %v]", attempt.ref.String(), attempt.err))
}
return nil, fmt.Errorf("(Mirrors also failed: %s): %s: %w", strings.Join(extras, "\n"), primary.ref.String(), primary.err)
}
}
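// Illustrative note, not part of the upstream file: the fallback above is driven by
// sysregistriesv2 configuration. With a registries.conf such as (hypothetical values)
//
//	[[registry]]
//	prefix = "docker.io"
//	location = "docker.io"
//
//	[[registry.mirror]]
//	location = "mirror.example.com"
//
// PullSourcesFromReference yields the mirror first and the original location last, so a
// working mirror short-circuits the loop, while a total failure reports the upstream
// error as primary and the mirror errors as extras.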
// newImageSourceAttempt is an internal helper for newImageSource. Everyone else must call newImageSource.
// Given a logicalReference and a pullSource, return a dockerImageSource if it is reachable.
// The caller must call .Close() on the returned ImageSource.
func newImageSourceAttempt(ctx context.Context, sys *types.SystemContext, logicalRef dockerReference, pullSource sysregistriesv2.PullSource,
registryConfig *registryConfiguration) (*dockerImageSource, error) {
physicalRef, err := newReference(pullSource.Reference, false)
if err != nil {
return nil, err
}
endpointSys := sys
// sys.DockerAuthConfig does not explicitly specify a registry; we must not blindly send the credentials intended for the primary endpoint to mirrors.
if endpointSys != nil && endpointSys.DockerAuthConfig != nil && reference.Domain(physicalRef.ref) != reference.Domain(logicalRef.ref) {
copy := *endpointSys
copy.DockerAuthConfig = nil
copy.DockerBearerRegistryToken = ""
endpointSys = &copy
}
client, err := newDockerClientFromRef(endpointSys, physicalRef, registryConfig, false, "pull")
if err != nil {
return nil, err
}
client.tlsClientConfig.InsecureSkipVerify = pullSource.Endpoint.Insecure
s := &dockerImageSource{
PropertyMethodsInitialize: impl.PropertyMethods(impl.Properties{
HasThreadSafeGetBlob: true,
}),
logicalRef: logicalRef,
physicalRef: physicalRef,
c: client,
}
s.Compat = impl.AddCompat(s)
if err := s.ensureManifestIsLoaded(ctx); err != nil {
client.Close()
return nil, err
}
if h, err := sysregistriesv2.AdditionalLayerStoreAuthHelper(endpointSys); err == nil && h != "" {
acf := map[string]struct {
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
IdentityToken string `json:"identityToken,omitempty"`
}{
physicalRef.ref.String(): {
Username: client.auth.Username,
Password: client.auth.Password,
IdentityToken: client.auth.IdentityToken,
},
}
acfD, err := json.Marshal(acf)
if err != nil {
logrus.Warnf("failed to marshal auth config: %v", err)
} else {
cmd := exec.Command(h)
cmd.Stdin = bytes.NewReader(acfD)
if err := cmd.Run(); err != nil {
var stderr string
if ee, ok := err.(*exec.ExitError); ok {
stderr = string(ee.Stderr)
}
logrus.Warnf("Failed to call additional-layer-store-auth-helper (stderr:%s): %v", stderr, err)
}
}
}
return s, nil
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (s *dockerImageSource) Reference() types.ImageReference {
return s.logicalRef
}
// Close removes resources associated with an initialized ImageSource, if any.
func (s *dockerImageSource) Close() error {
return s.c.Close()
}
// simplifyContentType drops parameters from an HTTP media type (see https://tools.ietf.org/html/rfc7231#section-3.1.1.1)
// Alternatively, an empty string is returned unchanged, and invalid values are "simplified" to an empty string.
func simplifyContentType(contentType string) string {
if contentType == "" {
return contentType
}
mimeType, _, err := mime.ParseMediaType(contentType)
if err != nil {
return ""
}
return mimeType
}
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
func (s *dockerImageSource) GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error) {
if instanceDigest != nil {
if err := instanceDigest.Validate(); err != nil { // Make sure instanceDigest.String() does not contain any unexpected characters
return nil, "", err
}
return s.fetchManifest(ctx, instanceDigest.String())
}
err := s.ensureManifestIsLoaded(ctx)
if err != nil {
return nil, "", err
}
return s.cachedManifest, s.cachedManifestMIMEType, nil
}
// fetchManifest fetches a manifest for tagOrDigest.
// The caller is responsible for ensuring tagOrDigest uses the expected format.
func (s *dockerImageSource) fetchManifest(ctx context.Context, tagOrDigest string) ([]byte, string, error) {
return s.c.fetchManifest(ctx, s.physicalRef, tagOrDigest)
}
// ensureManifestIsLoaded sets s.cachedManifest and s.cachedManifestMIMEType
//
// ImageSource implementations are not required or expected to do any caching,
// but because our signatures are “attached” to the manifest digest,
// we need to ensure that the digest of the manifest returned by GetManifest(ctx, nil)
// and used by GetSignatures(ctx, nil) are consistent, otherwise we would get spurious
// signature verification failures when pulling while a tag is being updated.
func (s *dockerImageSource) ensureManifestIsLoaded(ctx context.Context) error {
if s.cachedManifest != nil {
return nil
}
reference, err := s.physicalRef.tagOrDigest()
if err != nil {
return err
}
manblob, mt, err := s.fetchManifest(ctx, reference)
if err != nil {
return err
}
// We might validate manblob against the Docker-Content-Digest header here to protect against transport errors.
s.cachedManifest = manblob
s.cachedManifestMIMEType = mt
return nil
}
// splitHTTP200ResponseToPartial splits a 200 response into multiple streams, as specified by the chunks
func splitHTTP200ResponseToPartial(streams chan io.ReadCloser, errs chan error, body io.ReadCloser, chunks []private.ImageSourceChunk) {
defer close(streams)
defer close(errs)
currentOffset := uint64(0)
body = makeBufferedNetworkReader(body, 64, 16384)
defer body.Close()
for _, c := range chunks {
if c.Offset != currentOffset {
if c.Offset < currentOffset {
errs <- fmt.Errorf("invalid chunk offset specified %v (expected >= %v)", c.Offset, currentOffset)
break
}
toSkip := c.Offset - currentOffset
if _, err := io.Copy(io.Discard, io.LimitReader(body, int64(toSkip))); err != nil {
errs <- err
break
}
currentOffset += toSkip
}
var reader io.Reader
if c.Length == math.MaxUint64 {
reader = body
} else {
reader = io.LimitReader(body, int64(c.Length))
}
s := signalCloseReader{
closed: make(chan struct{}),
stream: io.NopCloser(reader),
consumeStream: true,
}
streams <- s
// Wait until the stream is closed before going to the next chunk
<-s.closed
currentOffset += c.Length
}
}
// handle206Response reads a 206 response and sends each part as a separate ReadCloser to the streams chan.
func handle206Response(streams chan io.ReadCloser, errs chan error, body io.ReadCloser, chunks []private.ImageSourceChunk, mediaType string, params map[string]string) {
defer close(streams)
defer close(errs)
if !strings.HasPrefix(mediaType, "multipart/") {
streams <- body
return
}
boundary, found := params["boundary"]
if !found {
errs <- errors.New("could not find boundary")
body.Close()
return
}
buffered := makeBufferedNetworkReader(body, 64, 16384)
defer buffered.Close()
mr := multipart.NewReader(buffered, boundary)
parts := 0
for {
p, err := mr.NextPart()
if err != nil {
if err != io.EOF {
errs <- err
}
if parts != len(chunks) {
errs <- errors.New("invalid number of chunks returned by the server")
}
return
}
if parts >= len(chunks) {
errs <- errors.New("too many parts returned by the server")
break
}
s := signalCloseReader{
closed: make(chan struct{}),
stream: p,
}
streams <- s
// NextPart() cannot be called while the current part
// is being read, so wait until it is closed
<-s.closed
parts++
}
}
var multipartByteRangesRe = regexp.Delayed("multipart/byteranges; boundary=([A-Za-z-0-9:]+)")
func parseMediaType(contentType string) (string, map[string]string, error) {
mediaType, params, err := mime.ParseMediaType(contentType)
if err != nil {
if err == mime.ErrInvalidMediaParameter {
// CloudFront returns an invalid MIME type, that contains an unquoted ":" in the boundary
// param, let's handle it here.
matches := multipartByteRangesRe.FindStringSubmatch(contentType)
if len(matches) == 2 {
mediaType = "multipart/byteranges"
params = map[string]string{
"boundary": matches[1],
}
err = nil
}
}
if err != nil {
return "", nil, err
}
}
return mediaType, params, err
}
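// Illustrative note, not part of the upstream file: the fallback above accepts a
// Content-Type that mime.ParseMediaType rejects because of the unquoted ":" in the
// boundary parameter, e.g. (hypothetical value)
//
//	mt, params, err := parseMediaType(`multipart/byteranges; boundary=CloudFront:ABC123`)
//	// mt == "multipart/byteranges", params["boundary"] == "CloudFront:ABC123", err == nil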
// GetBlobAt returns a sequential channel of readers that contain data for the requested
// blob chunks, and a channel that might get a single error value.
// The specified chunks must not overlap and must be sorted by their offset.
// The readers must be fully consumed, in the order they are returned, before blocking
// to read the next chunk.
// If the Length for the last chunk is set to math.MaxUint64, then it
// fully fetches the remaining data from the offset to the end of the blob.
func (s *dockerImageSource) GetBlobAt(ctx context.Context, info types.BlobInfo, chunks []private.ImageSourceChunk) (chan io.ReadCloser, chan error, error) {
headers := make(map[string][]string)
rangeVals := make([]string, 0, len(chunks))
lastFound := false
for _, c := range chunks {
if lastFound {
return nil, nil, fmt.Errorf("internal error: another chunk requested after an until-EOF chunk")
}
// If the Length is set to math.MaxUint64, request everything after the specified offset.
if c.Length == math.MaxUint64 {
lastFound = true
rangeVals = append(rangeVals, fmt.Sprintf("%d-", c.Offset))
} else {
rangeVals = append(rangeVals, fmt.Sprintf("%d-%d", c.Offset, c.Offset+c.Length-1))
}
}
headers["Range"] = []string{fmt.Sprintf("bytes=%s", strings.Join(rangeVals, ","))}
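// For example (illustrative values): chunks {Offset: 0, Length: 100} and
// {Offset: 200, Length: math.MaxUint64} become "Range: bytes=0-99,200-",
// i.e. one closed range plus one open-ended tail.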
if len(info.URLs) != 0 {
return nil, nil, fmt.Errorf("external URLs not supported with GetBlobAt")
}
if err := info.Digest.Validate(); err != nil { // Make sure info.Digest.String() does not contain any unexpected characters
return nil, nil, err
}
path := fmt.Sprintf(blobsPath, reference.Path(s.physicalRef.ref), info.Digest.String())
logrus.Debugf("Downloading %s", path)
res, err := s.c.makeRequest(ctx, http.MethodGet, path, headers, nil, v2Auth, nil)
if err != nil {
return nil, nil, err
}
switch res.StatusCode {
case http.StatusOK:
// if the server replied with a 200 status code, convert the full body response to a series of
// streams as it would have been done with 206.
streams := make(chan io.ReadCloser)
errs := make(chan error)
go splitHTTP200ResponseToPartial(streams, errs, res.Body, chunks)
return streams, errs, nil
case http.StatusPartialContent:
mediaType, params, err := parseMediaType(res.Header.Get("Content-Type"))
if err != nil {
return nil, nil, err
}
streams := make(chan io.ReadCloser)
errs := make(chan error)
go handle206Response(streams, errs, res.Body, chunks, mediaType, params)
return streams, errs, nil
case http.StatusBadRequest:
res.Body.Close()
return nil, nil, private.BadPartialRequestError{Status: res.Status}
default:
err := registryHTTPResponseToError(res)
res.Body.Close()
return nil, nil, fmt.Errorf("fetching partial blob: %w", err)
}
}
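// exampleConsumeChunks is an illustrative sketch, not part of the upstream file: it
// shows the consumption contract documented on GetBlobAt. Each reader must be fully
// consumed and closed, in order, before the next one is received (Close on the
// signalCloseReader wrappers above is what unblocks the producer goroutine), and the
// error channel is drained last.
func exampleConsumeChunks(ctx context.Context, s *dockerImageSource, info types.BlobInfo, chunks []private.ImageSourceChunk) error {
	streams, errs, err := s.GetBlobAt(ctx, info, chunks)
	if err != nil {
		return err
	}
	for stream := range streams {
		// A real caller would use the chunk data; this sketch just discards it.
		_, copyErr := io.Copy(io.Discard, stream)
		stream.Close()
		if copyErr != nil {
			return copyErr
		}
	}
	for err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}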
// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
// The Digest field in BlobInfo is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
// May update BlobInfoCache, preferably after it knows for certain that a blob truly exists at a specific location.
func (s *dockerImageSource) GetBlob(ctx context.Context, info types.BlobInfo, cache types.BlobInfoCache) (io.ReadCloser, int64, error) {
return s.c.getBlob(ctx, s.physicalRef, info, cache)
}
// GetSignaturesWithFormat returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (s *dockerImageSource) GetSignaturesWithFormat(ctx context.Context, instanceDigest *digest.Digest) ([]signature.Signature, error) {
if err := s.c.detectProperties(ctx); err != nil {
return nil, err
}
var res []signature.Signature
switch {
case s.c.supportsSignatures:
if err := s.appendSignaturesFromAPIExtension(ctx, &res, instanceDigest); err != nil {
return nil, err
}
case s.c.signatureBase != nil:
if err := s.appendSignaturesFromLookaside(ctx, &res, instanceDigest); err != nil {
return nil, err
}
default:
return nil, errors.New("Internal error: X-Registry-Supports-Signatures extension not supported, and the lookaside configuration is unexpectedly empty")
}
if err := s.appendSignaturesFromSigstoreAttachments(ctx, &res, instanceDigest); err != nil {
return nil, err
}
return res, nil
}
// manifestDigest returns a digest of the manifest, from instanceDigest if non-nil; or from the supplied reference,
// or finally, from a fetched manifest.
func (s *dockerImageSource) manifestDigest(ctx context.Context, instanceDigest *digest.Digest) (digest.Digest, error) {
if instanceDigest != nil {
return *instanceDigest, nil
}
if digested, ok := s.physicalRef.ref.(reference.Digested); ok {
d := digested.Digest()
if d.Algorithm() == digest.Canonical {
return d, nil
}
}
if err := s.ensureManifestIsLoaded(ctx); err != nil {
return "", err
}
return manifest.Digest(s.cachedManifest)
}
// appendSignaturesFromLookaside implements GetSignaturesWithFormat() from the lookaside location configured in s.c.signatureBase,
// which is not nil, storing the signatures to *dest.
// On error, the contents of *dest are undefined.
func (s *dockerImageSource) appendSignaturesFromLookaside(ctx context.Context, dest *[]signature.Signature, instanceDigest *digest.Digest) error {
manifestDigest, err := s.manifestDigest(ctx, instanceDigest)
if err != nil {
return err
}
// NOTE: Keep this in sync with docs/signature-protocols.md!
for i := 0; ; i++ {
if i >= maxLookasideSignatures {
return fmt.Errorf("server provided %d signatures, assuming that's unreasonable and a server error", maxLookasideSignatures)
}
sigURL, err := lookasideStorageURL(s.c.signatureBase, manifestDigest, i)
if err != nil {
return err
}
signature, missing, err := s.getOneSignature(ctx, sigURL)
if err != nil {
return err
}
if missing {
break
}
*dest = append(*dest, signature)
}
return nil
}
// getOneSignature downloads one signature from sigURL, and returns (signature, false, nil)
// If it successfully determines that the signature does not exist, returns (nil, true, nil).
// NOTE: Keep this in sync with docs/signature-protocols.md!
func (s *dockerImageSource) getOneSignature(ctx context.Context, sigURL *url.URL) (signature.Signature, bool, error) {
switch sigURL.Scheme {
case "file":
logrus.Debugf("Reading %s", sigURL.Path)
sigBlob, err := os.ReadFile(sigURL.Path)
if err != nil {
if os.IsNotExist(err) {
return nil, true, nil
}
return nil, false, err
}
sig, err := signature.FromBlob(sigBlob)
if err != nil {
return nil, false, fmt.Errorf("parsing signature %q: %w", sigURL.Path, err)
}
return sig, false, nil
case "http", "https":
logrus.Debugf("GET %s", sigURL.Redacted())
req, err := http.NewRequestWithContext(ctx, http.MethodGet, sigURL.String(), nil)
if err != nil {
return nil, false, err
}
res, err := s.c.client.Do(req)
if err != nil {
return nil, false, err
}
defer res.Body.Close()
if res.StatusCode == http.StatusNotFound {
logrus.Debugf("... got status 404, as expected = end of signatures")
return nil, true, nil
} else if res.StatusCode != http.StatusOK {
return nil, false, fmt.Errorf("reading signature from %s: %w", sigURL.Redacted(), newUnexpectedHTTPStatusError(res))
}
contentType := res.Header.Get("Content-Type")
if mimeType := simplifyContentType(contentType); mimeType == "text/html" {
logrus.Warnf("Signature %q has Content-Type %q, unexpected for a signature", sigURL.Redacted(), contentType)
// Don't immediately fail; the lookaside spec does not place any requirements on Content-Type.
// If the content really is HTML, it's going to fail in signature.FromBlob.
}
sigBlob, err := iolimits.ReadAtMost(res.Body, iolimits.MaxSignatureBodySize)
if err != nil {
return nil, false, err
}
sig, err := signature.FromBlob(sigBlob)
if err != nil {
return nil, false, fmt.Errorf("parsing signature %s: %w", sigURL.Redacted(), err)
}
return sig, false, nil
default:
return nil, false, fmt.Errorf("Unsupported scheme when reading signature from %s", sigURL.Redacted())
}
}
// appendSignaturesFromAPIExtension implements GetSignaturesWithFormat() using the X-Registry-Supports-Signatures API extension,
// storing the signatures to *dest.
// On error, the contents of *dest are undefined.
func (s *dockerImageSource) appendSignaturesFromAPIExtension(ctx context.Context, dest *[]signature.Signature, instanceDigest *digest.Digest) error {
manifestDigest, err := s.manifestDigest(ctx, instanceDigest)
if err != nil {
return err
}
parsedBody, err := s.c.getExtensionsSignatures(ctx, s.physicalRef, manifestDigest)
if err != nil {
return err
}
for _, sig := range parsedBody.Signatures {
if sig.Version == extensionSignatureSchemaVersion && sig.Type == extensionSignatureTypeAtomic {
*dest = append(*dest, signature.SimpleSigningFromBlob(sig.Content))
}
}
return nil
}
// appendSignaturesFromSigstoreAttachments implements GetSignaturesWithFormat() using the sigstore tag convention,
// storing the signatures to *dest.
// On error, the contents of *dest are undefined.
func (s *dockerImageSource) appendSignaturesFromSigstoreAttachments(ctx context.Context, dest *[]signature.Signature, instanceDigest *digest.Digest) error {
if !s.c.useSigstoreAttachments {
logrus.Debugf("Not looking for sigstore attachments: disabled by configuration")
return nil
}
manifestDigest, err := s.manifestDigest(ctx, instanceDigest)
if err != nil {
return err
}
ociManifest, err := s.c.getSigstoreAttachmentManifest(ctx, s.physicalRef, manifestDigest)
if err != nil {
return err
}
if ociManifest == nil {
return nil
}
logrus.Debugf("Found a sigstore attachment manifest with %d layers", len(ociManifest.Layers))
for layerIndex, layer := range ociManifest.Layers {
// Note that this copies all kinds of attachments: attestations, and whatever else is there,
// not just signatures. We leave the signature consumers to decide based on the MIME type.
logrus.Debugf("Fetching sigstore attachment %d/%d: %s", layerIndex+1, len(ociManifest.Layers), layer.Digest.String())
// We don't benefit from a real BlobInfoCache here because we never try to reuse/mount attachment payloads.
// That might eventually need to change if payloads grow to be not just signatures, but something
// significantly large.
payload, err := s.c.getOCIDescriptorContents(ctx, s.physicalRef, layer, iolimits.MaxSignatureBodySize,
none.NoCache)
if err != nil {
return err
}
*dest = append(*dest, signature.SigstoreFromComponents(layer.MediaType, payload, layer.Annotations))
}
return nil
}
// deleteImage deletes the named image from the registry, if supported.
func deleteImage(ctx context.Context, sys *types.SystemContext, ref dockerReference) error {
if ref.isUnknownDigest {
return fmt.Errorf("Docker reference without a tag or digest cannot be deleted")
}
registryConfig, err := loadRegistryConfiguration(sys)
if err != nil {
return err
}
// docker/distribution does not document what action should be used for deleting images.
//
// Current docker/distribution requires "pull" for reading the manifest and "delete" for deleting it.
// quay.io requires "push" (an explicit "pull" is unnecessary), does not grant any token (fails parsing the request) if "delete" is included.
// OpenShift ignores the action string (both the password and the token is an OpenShift API token identifying a user).
//
// We have to hard-code a single string, luckily both docker/distribution and quay.io support "*" to mean "everything".
c, err := newDockerClientFromRef(sys, ref, registryConfig, true, "*")
if err != nil {
return err
}
defer c.Close()
headers := map[string][]string{
"Accept": manifest.DefaultRequestedManifestMIMETypes,
}
refTail, err := ref.tagOrDigest()
if err != nil {
return err
}
getPath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), refTail)
get, err := c.makeRequest(ctx, http.MethodGet, getPath, headers, nil, v2Auth, nil)
if err != nil {
return err
}
defer get.Body.Close()
switch get.StatusCode {
case http.StatusOK:
case http.StatusNotFound:
return fmt.Errorf("Unable to delete %v. Image may not exist or is not stored with a v2 Schema in a v2 registry", ref.ref)
default:
return fmt.Errorf("deleting %v: %w", ref.ref, registryHTTPResponseToError(get))
}
manifestBody, err := iolimits.ReadAtMost(get.Body, iolimits.MaxManifestBodySize)
if err != nil {
return err
}
manifestDigest, err := manifest.Digest(manifestBody)
if err != nil {
return fmt.Errorf("computing manifest digest: %w", err)
}
deletePath := fmt.Sprintf(manifestPath, reference.Path(ref.ref), manifestDigest)
// When retrieving the digest from a registry >= 2.3 use the following header:
// "Accept": "application/vnd.docker.distribution.manifest.v2+json"
delete, err := c.makeRequest(ctx, http.MethodDelete, deletePath, headers, nil, v2Auth, nil)
if err != nil {
return err
}
defer delete.Body.Close()
if delete.StatusCode != http.StatusAccepted {
return fmt.Errorf("deleting %v: %w", ref.ref, registryHTTPResponseToError(delete))
}
for i := 0; ; i++ {
sigURL, err := lookasideStorageURL(c.signatureBase, manifestDigest, i)
if err != nil {
return err
}
missing, err := c.deleteOneSignature(sigURL)
if err != nil {
return err
}
if missing {
break
}
}
return nil
}
type bufferedNetworkReaderBuffer struct {
data []byte
len int
consumed int
err error
}
type bufferedNetworkReader struct {
stream io.ReadCloser
emptyBuffer chan *bufferedNetworkReaderBuffer
readyBuffer chan *bufferedNetworkReaderBuffer
terminate chan bool
current *bufferedNetworkReaderBuffer
mutex sync.Mutex
gotEOF bool
}
// handleBufferedNetworkReader runs in a goroutine
func handleBufferedNetworkReader(br *bufferedNetworkReader) {
defer close(br.readyBuffer)
for {
select {
case b := <-br.emptyBuffer:
b.len, b.err = br.stream.Read(b.data)
br.readyBuffer <- b
if b.err != nil {
return
}
case <-br.terminate:
return
}
}
}
func (n *bufferedNetworkReader) Close() error {
close(n.terminate)
close(n.emptyBuffer)
return n.stream.Close()
}
func (n *bufferedNetworkReader) read(p []byte) (int, error) {
if n.current != nil {
copied := copy(p, n.current.data[n.current.consumed:n.current.len])
n.current.consumed += copied
if n.current.consumed == n.current.len {
n.emptyBuffer <- n.current
n.current = nil
}
if copied > 0 {
return copied, nil
}
}
if n.gotEOF {
return 0, io.EOF
}
var b *bufferedNetworkReaderBuffer
select {
case b = <-n.readyBuffer:
if b.err != nil {
if b.err != io.EOF {
return b.len, b.err
}
n.gotEOF = true
}
b.consumed = 0
n.current = b
return n.read(p)
case <-n.terminate:
return 0, io.EOF
}
}
func (n *bufferedNetworkReader) Read(p []byte) (int, error) {
n.mutex.Lock()
defer n.mutex.Unlock()
return n.read(p)
}
func makeBufferedNetworkReader(stream io.ReadCloser, nBuffers, bufferSize uint) *bufferedNetworkReader {
br := bufferedNetworkReader{
stream: stream,
emptyBuffer: make(chan *bufferedNetworkReaderBuffer, nBuffers),
readyBuffer: make(chan *bufferedNetworkReaderBuffer, nBuffers),
terminate: make(chan bool),
}
go func() {
handleBufferedNetworkReader(&br)
}()
for range nBuffers {
b := bufferedNetworkReaderBuffer{
data: make([]byte, bufferSize),
}
br.emptyBuffer <- &b
}
return &br
}
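// Illustrative note, not part of the upstream file: the reader above decouples network
// reads from consumption by pre-filling nBuffers fixed-size buffers in a dedicated
// goroutine, so a slow consumer does not stall the HTTP body. Callers just wrap a
// response body and read as usual (hypothetical usage):
//
//	br := makeBufferedNetworkReader(res.Body, 64, 16384)
//	defer br.Close()
//	n, err := br.Read(buf) // served from an already-filled buffer when one is ready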
type signalCloseReader struct {
closed chan struct{}
stream io.ReadCloser
consumeStream bool
}
func (s signalCloseReader) Read(p []byte) (int, error) {
return s.stream.Read(p)
}
func (s signalCloseReader) Close() error {
defer close(s.closed)
if s.consumeStream {
if _, err := io.Copy(io.Discard, s.stream); err != nil {
s.stream.Close()
return err
}
}
return s.stream.Close()
}

211
vendor/go.podman.io/image/v5/docker/docker_transport.go generated vendored Normal file

@@ -0,0 +1,211 @@
package docker
import (
"context"
"errors"
"fmt"
"strings"
"go.podman.io/image/v5/docker/policyconfiguration"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/transports"
"go.podman.io/image/v5/types"
)
// UnknownDigestSuffix can be appended to a reference when the caller
// wants to push an image without a tag or digest.
// NewReferenceUnknownDigest() is called when this const is detected.
const UnknownDigestSuffix = "@@unknown-digest@@"
func init() {
transports.Register(Transport)
}
// Transport is an ImageTransport for container registry-hosted images.
var Transport = dockerTransport{}
type dockerTransport struct{}
func (t dockerTransport) Name() string {
return "docker"
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (t dockerTransport) ParseReference(reference string) (types.ImageReference, error) {
return ParseReference(reference)
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for signature.PolicyTransportScopes keys
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched, it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (t dockerTransport) ValidatePolicyConfigurationScope(scope string) error {
// FIXME? We could be verifying the various character set and length restrictions
// from docker/distribution/reference.regexp.go, but other than that there
// are few semantically invalid strings.
return nil
}
// dockerReference is an ImageReference for Docker images.
type dockerReference struct {
ref reference.Named // By construction we know that !reference.IsNameOnly(ref) unless isUnknownDigest=true
isUnknownDigest bool
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into a Docker ImageReference.
func ParseReference(refString string) (types.ImageReference, error) {
refString, ok := strings.CutPrefix(refString, "//")
if !ok {
return nil, fmt.Errorf("docker: image reference %s does not start with //", refString)
}
refString, unknownDigest := strings.CutSuffix(refString, UnknownDigestSuffix)
ref, err := reference.ParseNormalizedNamed(refString)
if err != nil {
return nil, err
}
if unknownDigest {
if !reference.IsNameOnly(ref) {
return nil, fmt.Errorf("docker: image reference %q has unknown digest set but it contains either a tag or digest", ref.String()+UnknownDigestSuffix)
}
return NewReferenceUnknownDigest(ref)
}
ref = reference.TagNameOnly(ref)
return NewReference(ref)
}
// NewReference returns a Docker reference for a named reference. The reference must satisfy !reference.IsNameOnly().
func NewReference(ref reference.Named) (types.ImageReference, error) {
return newReference(ref, false)
}
// NewReferenceUnknownDigest returns a Docker reference for a named reference, which can be used to write images without setting
// a tag on the registry. The reference must satisfy reference.IsNameOnly()
func NewReferenceUnknownDigest(ref reference.Named) (types.ImageReference, error) {
return newReference(ref, true)
}
// newReference returns a dockerReference for a named reference.
func newReference(ref reference.Named, unknownDigest bool) (dockerReference, error) {
if reference.IsNameOnly(ref) && !unknownDigest {
return dockerReference{}, fmt.Errorf("Docker reference %s is not for an unknown digest case; tag or digest is needed", reference.FamiliarString(ref))
}
if !reference.IsNameOnly(ref) && unknownDigest {
return dockerReference{}, fmt.Errorf("Docker reference %s is for an unknown digest case but reference has a tag or digest", reference.FamiliarString(ref))
}
// A github.com/distribution/reference value can have a tag and a digest at the same time!
// The docker/distribution API does not really support that (we can't ask for an image with a specific
// tag and digest), so fail. This MAY be accepted in the future.
// (Even if it were supported, the semantics of policy namespaces are unclear - should we drop
// the tag or the digest first?)
_, isTagged := ref.(reference.NamedTagged)
_, isDigested := ref.(reference.Canonical)
if isTagged && isDigested {
return dockerReference{}, errors.New("Docker references with both a tag and digest are currently not supported")
}
return dockerReference{
ref: ref,
isUnknownDigest: unknownDigest,
}, nil
}
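// Illustrative note, not part of the upstream file: the rules above accept or reject,
// for example (hypothetical inputs),
//
//	ParseReference("//busybox")                   // accepted; tag defaulted to :latest
//	ParseReference("//busybox@@unknown-digest@@") // accepted; push-without-tag case
//	ParseReference("//busybox:1.36@sha256:...")   // rejected: both a tag and a digest
//
// where "sha256:..." stands in for a full 64-character digest.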
func (ref dockerReference) Transport() types.ImageTransport {
return Transport
}
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix.
func (ref dockerReference) StringWithinTransport() string {
famString := "//" + reference.FamiliarString(ref.ref)
if ref.isUnknownDigest {
return famString + UnknownDigestSuffix
}
return famString
}
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
func (ref dockerReference) DockerReference() reference.Named {
return ref.ref
}
// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
func (ref dockerReference) PolicyConfigurationIdentity() string {
if ref.isUnknownDigest {
return ref.ref.Name()
}
res, err := policyconfiguration.DockerReferenceIdentity(ref.ref)
if res == "" || err != nil { // Coverage: Should never happen, NewReference above should refuse values which could cause a failure.
panic(fmt.Sprintf("Internal inconsistency: policyconfiguration.DockerReferenceIdentity returned %#v, %v", res, err))
}
return res
}
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
func (ref dockerReference) PolicyConfigurationNamespaces() []string {
namespaces := policyconfiguration.DockerReferenceNamespaces(ref.ref)
if ref.isUnknownDigest {
if len(namespaces) != 0 && namespaces[0] == ref.ref.Name() {
namespaces = namespaces[1:]
}
}
return namespaces
}
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned ImageCloser.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
func (ref dockerReference) NewImage(ctx context.Context, sys *types.SystemContext) (types.ImageCloser, error) {
return newImage(ctx, sys, ref)
}
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref dockerReference) NewImageSource(ctx context.Context, sys *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, sys, ref)
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref dockerReference) NewImageDestination(ctx context.Context, sys *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(sys, ref)
}
// DeleteImage deletes the named image from the registry, if supported.
func (ref dockerReference) DeleteImage(ctx context.Context, sys *types.SystemContext) error {
return deleteImage(ctx, sys, ref)
}
// tagOrDigest returns a tag or digest from the reference.
func (ref dockerReference) tagOrDigest() (string, error) {
if ref, ok := ref.ref.(reference.Canonical); ok {
return ref.Digest().String(), nil
}
if ref, ok := ref.ref.(reference.NamedTagged); ok {
return ref.Tag(), nil
}
if ref.isUnknownDigest {
return "", fmt.Errorf("Docker reference %q is for an unknown digest case, has neither a digest nor a tag", reference.FamiliarString(ref.ref))
}
// This should not happen, NewReference above refuses reference.IsNameOnly values.
return "", fmt.Errorf("Internal inconsistency: Reference %s unexpectedly has neither a digest nor a tag", reference.FamiliarString(ref.ref))
}

102
vendor/go.podman.io/image/v5/docker/errors.go generated vendored Normal file

@@ -0,0 +1,102 @@
package docker
import (
"errors"
"fmt"
"net/http"
"github.com/docker/distribution/registry/api/errcode"
"github.com/sirupsen/logrus"
)
var (
// ErrV1NotSupported is returned when we're trying to talk to a
// docker V1 registry.
// Deprecated: The V1 container registry detection is no longer performed, so this error is never returned.
ErrV1NotSupported = errors.New("can't talk to a V1 container registry")
// ErrTooManyRequests is returned when the status code returned is 429
ErrTooManyRequests = errors.New("too many requests to registry")
)
// ErrUnauthorizedForCredentials is returned when the status code returned is 401
type ErrUnauthorizedForCredentials struct { // We only use a struct to allow a type assertion, without limiting the contents of the error otherwise.
Err error
}
func (e ErrUnauthorizedForCredentials) Error() string {
return fmt.Sprintf("unable to retrieve auth token: invalid username/password: %s", e.Err.Error())
}
// httpResponseToError translates the http.Response into an error, possibly prefixing it with the supplied context. It returns
// nil if the response is not considered an error.
// NOTE: Almost all callers in this package should use registryHTTPResponseToError instead.
func httpResponseToError(res *http.Response, context string) error {
switch res.StatusCode {
case http.StatusOK:
return nil
case http.StatusTooManyRequests:
return ErrTooManyRequests
case http.StatusUnauthorized:
err := registryHTTPResponseToError(res)
return ErrUnauthorizedForCredentials{Err: err}
default:
if context == "" {
return newUnexpectedHTTPStatusError(res)
}
return fmt.Errorf("%s: %w", context, newUnexpectedHTTPStatusError(res))
}
}
// registryHTTPResponseToError creates a Go error from an HTTP error response of a docker/distribution
// registry.
//
// WARNING: The OCI distribution spec says
// “A `4XX` response code from the registry MAY return a body in any format.”; but if it is
// JSON, it MUST use the errcode.Error structure.
// So, callers should primarily decide based on HTTP StatusCode, not based on error type here.
func registryHTTPResponseToError(res *http.Response) error {
err := handleErrorResponse(res)
// len(errs) == 0 should never be returned by handleErrorResponse; if it does, we don't modify it and let the caller report it as is.
if errs, ok := err.(errcode.Errors); ok && len(errs) > 0 {
// The docker/distribution registry implementation almost never returns
// more than one error in the HTTP body; it seems there is only one
// possible instance, where the second error reports a cleanup failure
// we don't really care about.
//
// The only _common_ case where a multi-element error is returned is
// created by the handleErrorResponse parser when OAuth authorization fails:
// the first element contains errors from a WWW-Authenticate header, the second
// element contains errors from the response body.
//
// In that case the first one is currently _slightly_ more informative (ErrorCodeUnauthorized
// for invalid tokens, ErrorCodeDenied for permission denied with a valid token
// for the first error, vs. ErrorCodeUnauthorized for both cases for the second error.)
//
// Also, docker/docker similarly only logs the other errors and returns the
// first one.
if len(errs) > 1 {
logrus.Debugf("Discarding non-primary errors:")
for _, err := range errs[1:] {
logrus.Debugf(" %s", err.Error())
}
}
err = errs[0]
}
switch e := err.(type) {
case *unexpectedHTTPResponseError:
response := string(e.Response)
if len(response) > 50 {
response = response[:50] + "..."
}
// %.0w makes e visible to error.Unwrap() without including any text
err = fmt.Errorf("StatusCode: %d, %q%.0w", e.StatusCode, response, e)
case errcode.Error:
// e.Error() is fmt.Sprintf("%s: %s", e.Code.Error(), e.Message), which is usually
// rather redundant. So reword it without using e.Code.Error() if e.Message is the default.
if e.Message == e.Code.Message() {
// %.0w makes e visible to error.Unwrap() without including any text
err = fmt.Errorf("%s%.0w", e.Message, e)
}
}
return err
}

5
vendor/go.podman.io/image/v5/docker/paths_common.go generated vendored Normal file

@@ -0,0 +1,5 @@
//go:build !freebsd
package docker
const etcDir = "/etc"

5
vendor/go.podman.io/image/v5/docker/paths_freebsd.go generated vendored Normal file

@@ -0,0 +1,5 @@
//go:build freebsd
package docker
const etcDir = "/usr/local/etc"


@@ -0,0 +1,78 @@
package policyconfiguration
import (
"errors"
"fmt"
"strings"
"go.podman.io/image/v5/docker/reference"
)
// DockerReferenceIdentity returns a string representation of the reference, suitable for policy lookup,
// as a backend for ImageReference.PolicyConfigurationIdentity.
// The reference must satisfy !reference.IsNameOnly().
func DockerReferenceIdentity(ref reference.Named) (string, error) {
res := ref.Name()
tagged, isTagged := ref.(reference.NamedTagged)
digested, isDigested := ref.(reference.Canonical)
switch {
case isTagged && isDigested: // Note that this CAN actually happen.
return "", fmt.Errorf("Unexpected Docker reference %s with both a tag and a digest", reference.FamiliarString(ref))
case !isTagged && !isDigested: // This should not happen, the caller is expected to ensure !reference.IsNameOnly()
return "", fmt.Errorf("Internal inconsistency: Docker reference %s with neither a tag nor a digest", reference.FamiliarString(ref))
case isTagged:
res = res + ":" + tagged.Tag()
case isDigested:
res = res + "@" + digested.Digest().String()
default: // Coverage: The above was supposed to be exhaustive.
return "", errors.New("Internal inconsistency, unexpected default branch")
}
return res, nil
}
// DockerReferenceNamespaces returns a list of other policy configuration namespaces to search,
// as a backend for ImageReference.PolicyConfigurationNamespaces.
// The reference must satisfy !reference.IsNameOnly().
func DockerReferenceNamespaces(ref reference.Named) []string {
// Look for a match of the repository, and then of the possible parent
// namespaces. Note that this only happens on the expanded host names
// and repository names, i.e. "busybox" is looked up as "docker.io/library/busybox",
// then in its parent "docker.io/library"; in none of "busybox",
// un-namespaced "library" nor in "" supposedly implicitly representing "library/".
//
// ref.Name() == ref.Domain() + "/" + ref.Path(), so the last
// iteration matches the host name (for any namespace).
res := []string{}
name := ref.Name()
for {
res = append(res, name)
lastSlash := strings.LastIndex(name, "/")
if lastSlash == -1 {
break
}
name = name[:lastSlash]
}
// Strip port number if any, before appending to res slice.
// Currently, the most compatible behavior is to return
// example.com:8443/ns, example.com:8443, *.com.
// If a port number is not specified, the expected behavior would be
// example.com/ns, example.com, *.com
portNumColon := strings.Index(name, ":")
if portNumColon != -1 {
name = name[:portNumColon]
}
// Append wildcarded domains to res slice
for {
firstDot := strings.Index(name, ".")
if firstDot == -1 {
break
}
name = name[firstDot+1:]
res = append(res, "*."+name)
}
return res
}
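A usage sketch (not part of the vendored file) of the namespace expansion described above: a bare "busybox" is first normalized, then searched from the full repository up through its parents and wildcarded domains. The policyconfiguration import path is inferred from the package clause and is an assumption.

package main

import (
	"fmt"

	"go.podman.io/image/v5/docker/policyconfiguration"
	"go.podman.io/image/v5/docker/reference"
)

func main() {
	named, err := reference.ParseNormalizedNamed("busybox")
	if err != nil {
		panic(err)
	}
	// TagNameOnly satisfies the !IsNameOnly() precondition.
	namespaces := policyconfiguration.DockerReferenceNamespaces(reference.TagNameOnly(named))
	fmt.Println(namespaces)
	// Expected: [docker.io/library/busybox docker.io/library docker.io *.io]
}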

View File

@@ -0,0 +1,2 @@
This is a copy of github.com/docker/distribution/reference as of commit 3226863cbcba6dbc2f6c83a37b28126c934af3f8,
except that ParseAnyReferenceWithSet has been removed to drop the dependency on github.com/docker/distribution/digestset.

View File

@@ -0,0 +1,42 @@
package reference
import "path"
// IsNameOnly returns true if reference only contains a repo name.
func IsNameOnly(ref Named) bool {
if _, ok := ref.(NamedTagged); ok {
return false
}
if _, ok := ref.(Canonical); ok {
return false
}
return true
}
// FamiliarName returns the familiar name string
// for the given named, familiarizing if needed.
func FamiliarName(ref Named) string {
if nn, ok := ref.(normalizedNamed); ok {
return nn.Familiar().Name()
}
return ref.Name()
}
// FamiliarString returns the familiar string representation
// for the given reference, familiarizing if needed.
func FamiliarString(ref Reference) string {
if nn, ok := ref.(normalizedNamed); ok {
return nn.Familiar().String()
}
return ref.String()
}
// FamiliarMatch reports whether ref matches the specified pattern.
// See https://godoc.org/path#Match for supported patterns.
func FamiliarMatch(pattern string, ref Reference) (bool, error) {
matched, err := path.Match(pattern, FamiliarString(ref))
if namedRef, isNamed := ref.(Named); isNamed && !matched {
matched, _ = path.Match(pattern, FamiliarName(namedRef))
}
return matched, err
}
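A short sketch (not part of the vendored file): FamiliarMatch tries the familiar string first, so patterns can be written the way users type references.

package main

import (
	"fmt"

	"go.podman.io/image/v5/docker/reference"
)

func main() {
	ref, err := reference.ParseNormalizedNamed("docker.io/library/busybox")
	if err != nil {
		panic(err)
	}
	ok, err := reference.FamiliarMatch("busy*", ref)
	fmt.Println(ok, err) // true <nil>: matched against the familiar name "busybox"
}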

View File

@@ -0,0 +1,181 @@
package reference
import (
"errors"
"fmt"
"strings"
"github.com/opencontainers/go-digest"
)
var (
legacyDefaultDomain = "index.docker.io"
defaultDomain = "docker.io"
officialRepoName = "library"
defaultTag = "latest"
)
// normalizedNamed represents a name which has been
// normalized and has a familiar form. A familiar name
// is what is used in Docker UI. An example normalized
// name is "docker.io/library/ubuntu" and corresponding
// familiar name of "ubuntu".
type normalizedNamed interface {
Named
Familiar() Named
}
// ParseNormalizedNamed parses a string into a named reference
// transforming a familiar name from Docker UI to a fully
// qualified reference. If the value may be an identifier
// use ParseAnyReference.
func ParseNormalizedNamed(s string) (Named, error) {
if ok := anchoredIdentifierRegexp.MatchString(s); ok {
return nil, fmt.Errorf("invalid repository name (%s), cannot specify 64-byte hexadecimal strings", s)
}
domain, remainder := splitDockerDomain(s)
var remoteName string
if tagSep := strings.IndexRune(remainder, ':'); tagSep > -1 {
remoteName = remainder[:tagSep]
} else {
remoteName = remainder
}
if strings.ToLower(remoteName) != remoteName {
return nil, errors.New("invalid reference format: repository name must be lowercase")
}
ref, err := Parse(domain + "/" + remainder)
if err != nil {
return nil, err
}
named, isNamed := ref.(Named)
if !isNamed {
return nil, fmt.Errorf("reference %s has no name", ref.String())
}
return named, nil
}
// ParseDockerRef normalizes the image reference following the docker convention. This is added
// mainly for backward compatibility.
// The returned reference can only be either tagged or digested. If the reference contains both a tag
// and a digest, the function returns the digested reference; e.g. docker.io/library/busybox:latest@
// sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa will be returned as
// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa.
func ParseDockerRef(ref string) (Named, error) {
named, err := ParseNormalizedNamed(ref)
if err != nil {
return nil, err
}
if _, ok := named.(NamedTagged); ok {
if canonical, ok := named.(Canonical); ok {
// The reference is both tagged and digested, only
// return digested.
newNamed, err := WithName(canonical.Name())
if err != nil {
return nil, err
}
newCanonical, err := WithDigest(newNamed, canonical.Digest())
if err != nil {
return nil, err
}
return newCanonical, nil
}
}
return TagNameOnly(named), nil
}
// splitDockerDomain splits a repository name to domain and remotename string.
// If no valid domain is found, the default domain is used. The repository
// name must already be validated before calling this function.
func splitDockerDomain(name string) (domain, remainder string) {
i := strings.IndexRune(name, '/')
if i == -1 || (!strings.ContainsAny(name[:i], ".:") && name[:i] != "localhost") {
domain, remainder = defaultDomain, name
} else {
domain, remainder = name[:i], name[i+1:]
}
if domain == legacyDefaultDomain {
domain = defaultDomain
}
if domain == defaultDomain && !strings.ContainsRune(remainder, '/') {
remainder = officialRepoName + "/" + remainder
}
return
}
// familiarizeName returns a shortened version of the name familiar
// to the Docker UI. Familiar names have the default domain
// "docker.io" and "library/" repository prefix removed.
// For example, "docker.io/library/redis" will have the familiar
// name "redis" and "docker.io/dmcgowan/myapp" will be "dmcgowan/myapp".
// Returns a familiarized name-only reference.
func familiarizeName(named namedRepository) repository {
repo := repository{
domain: named.Domain(),
path: named.Path(),
}
if repo.domain == defaultDomain {
repo.domain = ""
// Handle official repositories which have the pattern "library/<official repo name>"
if split := strings.Split(repo.path, "/"); len(split) == 2 && split[0] == officialRepoName {
repo.path = split[1]
}
}
return repo
}
func (r reference) Familiar() Named {
return reference{
namedRepository: familiarizeName(r.namedRepository),
tag: r.tag,
digest: r.digest,
}
}
func (r repository) Familiar() Named {
return familiarizeName(r)
}
func (t taggedReference) Familiar() Named {
return taggedReference{
namedRepository: familiarizeName(t.namedRepository),
tag: t.tag,
}
}
func (c canonicalReference) Familiar() Named {
return canonicalReference{
namedRepository: familiarizeName(c.namedRepository),
digest: c.digest,
}
}
// TagNameOnly adds the default tag "latest" to a reference if it only has
// a repo name.
func TagNameOnly(ref Named) Named {
if IsNameOnly(ref) {
namedTagged, err := WithTag(ref, defaultTag)
if err != nil {
// Default tag must be valid, to create a NamedTagged
// type with non-validated input the WithTag function
// should be used instead
panic(err)
}
return namedTagged
}
return ref
}
// ParseAnyReference parses a reference string as a possible identifier,
// full digest, or familiar name.
func ParseAnyReference(ref string) (Reference, error) {
if ok := anchoredIdentifierRegexp.MatchString(ref); ok {
return digestReference("sha256:" + ref), nil
}
if dgst, err := digest.Parse(ref); err == nil {
return digestReference(dgst), nil
}
return ParseNormalizedNamed(ref)
}
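A usage sketch (not part of the vendored file) of the normalization rules above, reusing the digest from the ParseDockerRef comment:

package main

import (
	"fmt"

	"go.podman.io/image/v5/docker/reference"
)

func main() {
	named, _ := reference.ParseNormalizedNamed("ubuntu")
	fmt.Println(named.Name()) // docker.io/library/ubuntu

	// A reference carrying both a tag and a digest keeps only the digest:
	ref, _ := reference.ParseDockerRef("busybox:latest@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa")
	fmt.Println(ref.String()) // docker.io/library/busybox@sha256:7cc4...db5aa
}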

View File

@@ -0,0 +1,433 @@
// Package reference provides a general type to represent any way of referencing images within the registry.
// Its main purpose is to abstract tags and digests (content-addressable hash).
//
// Grammar
//
// reference := name [ ":" tag ] [ "@" digest ]
// name := [domain '/'] path-component ['/' path-component]*
// domain := domain-component ['.' domain-component]* [':' port-number]
// domain-component := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/
// port-number := /[0-9]+/
// path-component := alphanumeric [separator alphanumeric]*
// alphanumeric := /[a-z0-9]+/
// separator := /[_.]|__|[-]*/
//
// tag := /[\w][\w.-]{0,127}/
//
// digest := digest-algorithm ":" digest-hex
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]*
// digest-algorithm-separator := /[+.-_]/
// digest-algorithm-component := /[A-Za-z][A-Za-z0-9]*/
// digest-hex := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value
//
// identifier := /[a-f0-9]{64}/
// short-identifier := /[a-f0-9]{6,64}/
package reference
import (
"errors"
"fmt"
"strings"
"github.com/opencontainers/go-digest"
)
const (
// NameTotalLengthMax is the maximum total number of characters in a repository name.
NameTotalLengthMax = 255
)
var (
// ErrReferenceInvalidFormat represents an error while trying to parse a string as a reference.
ErrReferenceInvalidFormat = errors.New("invalid reference format")
// ErrTagInvalidFormat represents an error while trying to parse a string as a tag.
ErrTagInvalidFormat = errors.New("invalid tag format")
// ErrDigestInvalidFormat represents an error while trying to parse a string as a digest.
ErrDigestInvalidFormat = errors.New("invalid digest format")
// ErrNameContainsUppercase is returned for invalid repository names that contain uppercase characters.
ErrNameContainsUppercase = errors.New("repository name must be lowercase")
// ErrNameEmpty is returned for empty, invalid repository names.
ErrNameEmpty = errors.New("repository name must have at least one component")
// ErrNameTooLong is returned when a repository name is longer than NameTotalLengthMax.
ErrNameTooLong = fmt.Errorf("repository name must not be more than %v characters", NameTotalLengthMax)
// ErrNameNotCanonical is returned when a name is not canonical.
ErrNameNotCanonical = errors.New("repository name must be canonical")
)
// Reference is an opaque object reference identifier that may include
// modifiers such as a hostname, name, tag, and digest.
type Reference interface {
// String returns the full reference
String() string
}
// Field provides a wrapper type for resolving correct reference types when
// working with encoding.
type Field struct {
reference Reference
}
// AsField wraps a reference in a Field for encoding.
func AsField(reference Reference) Field {
return Field{reference}
}
// Reference unwraps the reference type from the field to
// return the Reference object. This object should be
// of the appropriate type to further check for different
// reference types.
func (f Field) Reference() Reference {
return f.reference
}
// MarshalText serializes the field to byte text which
// is the string of the reference.
func (f Field) MarshalText() (p []byte, err error) {
return []byte(f.reference.String()), nil
}
// UnmarshalText parses text bytes by invoking the
// reference parser to ensure the appropriately
// typed reference object is wrapped by field.
func (f *Field) UnmarshalText(p []byte) error {
r, err := Parse(string(p))
if err != nil {
return err
}
f.reference = r
return nil
}
// Named is an object with a full name
type Named interface {
Reference
Name() string
}
// Tagged is an object which has a tag
type Tagged interface {
Reference
Tag() string
}
// NamedTagged is an object including a name and tag.
type NamedTagged interface {
Named
Tag() string
}
// Digested is an object which has a digest
// by which it can be referenced
type Digested interface {
Reference
Digest() digest.Digest
}
// Canonical reference is an object with a fully unique
// name including a name with domain and digest
type Canonical interface {
Named
Digest() digest.Digest
}
// namedRepository is a reference to a repository with a name.
// A namedRepository has both domain and path components.
type namedRepository interface {
Named
Domain() string
Path() string
}
// Domain returns the domain part of the Named reference
func Domain(named Named) string {
if r, ok := named.(namedRepository); ok {
return r.Domain()
}
domain, _ := splitDomain(named.Name())
return domain
}
// Path returns the name without the domain part of the Named reference
func Path(named Named) (name string) {
if r, ok := named.(namedRepository); ok {
return r.Path()
}
_, path := splitDomain(named.Name())
return path
}
func splitDomain(name string) (string, string) {
match := anchoredNameRegexp.FindStringSubmatch(name)
if len(match) != 3 {
return "", name
}
return match[1], match[2]
}
// SplitHostname splits a named reference into a
// hostname and name string. If no valid hostname is
// found, the hostname is empty and the full value
// is returned as name.
//
// Deprecated: Use Domain or Path
func SplitHostname(named Named) (string, string) {
if r, ok := named.(namedRepository); ok {
return r.Domain(), r.Path()
}
return splitDomain(named.Name())
}
// Parse parses s and returns a syntactically valid Reference.
// If an error was encountered it is returned, along with a nil Reference.
// NOTE: Parse will not handle short digests.
func Parse(s string) (Reference, error) {
matches := ReferenceRegexp.FindStringSubmatch(s)
if matches == nil {
if s == "" {
return nil, ErrNameEmpty
}
if ReferenceRegexp.FindStringSubmatch(strings.ToLower(s)) != nil {
return nil, ErrNameContainsUppercase
}
return nil, ErrReferenceInvalidFormat
}
if len(matches[1]) > NameTotalLengthMax {
return nil, ErrNameTooLong
}
var repo repository
nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
if len(nameMatch) == 3 {
repo.domain = nameMatch[1]
repo.path = nameMatch[2]
} else {
repo.domain = ""
repo.path = matches[1]
}
ref := reference{
namedRepository: repo,
tag: matches[2],
}
if matches[3] != "" {
var err error
ref.digest, err = digest.Parse(matches[3])
if err != nil {
return nil, err
}
}
r := getBestReferenceType(ref)
if r == nil {
return nil, ErrNameEmpty
}
return r, nil
}
// ParseNamed parses s and returns a syntactically valid reference implementing
// the Named interface. The reference must have a name and be in the canonical
// form, otherwise an error is returned.
// If an error was encountered it is returned, along with a nil Reference.
// NOTE: ParseNamed will not handle short digests.
func ParseNamed(s string) (Named, error) {
named, err := ParseNormalizedNamed(s)
if err != nil {
return nil, err
}
if named.String() != s {
return nil, ErrNameNotCanonical
}
return named, nil
}
// WithName returns a named object representing the given string. If the input
// is invalid ErrReferenceInvalidFormat will be returned.
func WithName(name string) (Named, error) {
if len(name) > NameTotalLengthMax {
return nil, ErrNameTooLong
}
match := anchoredNameRegexp.FindStringSubmatch(name)
if match == nil || len(match) != 3 {
return nil, ErrReferenceInvalidFormat
}
return repository{
domain: match[1],
path: match[2],
}, nil
}
// WithTag combines the name from "name" and the tag from "tag" to form a
// reference incorporating both the name and the tag.
func WithTag(name Named, tag string) (NamedTagged, error) {
if !anchoredTagRegexp.MatchString(tag) {
return nil, ErrTagInvalidFormat
}
var repo repository
if r, ok := name.(namedRepository); ok {
repo.domain = r.Domain()
repo.path = r.Path()
} else {
repo.path = name.Name()
}
if canonical, ok := name.(Canonical); ok {
return reference{
namedRepository: repo,
tag: tag,
digest: canonical.Digest(),
}, nil
}
return taggedReference{
namedRepository: repo,
tag: tag,
}, nil
}
// WithDigest combines the name from "name" and the digest from "digest" to form
// a reference incorporating both the name and the digest.
func WithDigest(name Named, digest digest.Digest) (Canonical, error) {
if !anchoredDigestRegexp.MatchString(digest.String()) {
return nil, ErrDigestInvalidFormat
}
var repo repository
if r, ok := name.(namedRepository); ok {
repo.domain = r.Domain()
repo.path = r.Path()
} else {
repo.path = name.Name()
}
if tagged, ok := name.(Tagged); ok {
return reference{
namedRepository: repo,
tag: tagged.Tag(),
digest: digest,
}, nil
}
return canonicalReference{
namedRepository: repo,
digest: digest,
}, nil
}
// TrimNamed removes any tag or digest from the named reference.
func TrimNamed(ref Named) Named {
domain, path := SplitHostname(ref)
return repository{
domain: domain,
path: path,
}
}
func getBestReferenceType(ref reference) Reference {
if ref.Name() == "" {
// Allow digest only references
if ref.digest != "" {
return digestReference(ref.digest)
}
return nil
}
if ref.tag == "" {
if ref.digest != "" {
return canonicalReference{
namedRepository: ref.namedRepository,
digest: ref.digest,
}
}
return ref.namedRepository
}
if ref.digest == "" {
return taggedReference{
namedRepository: ref.namedRepository,
tag: ref.tag,
}
}
return ref
}
type reference struct {
namedRepository
tag string
digest digest.Digest
}
func (r reference) String() string {
return r.Name() + ":" + r.tag + "@" + r.digest.String()
}
func (r reference) Tag() string {
return r.tag
}
func (r reference) Digest() digest.Digest {
return r.digest
}
type repository struct {
domain string
path string
}
func (r repository) String() string {
return r.Name()
}
func (r repository) Name() string {
if r.domain == "" {
return r.path
}
return r.domain + "/" + r.path
}
func (r repository) Domain() string {
return r.domain
}
func (r repository) Path() string {
return r.path
}
type digestReference digest.Digest
func (d digestReference) String() string {
return digest.Digest(d).String()
}
func (d digestReference) Digest() digest.Digest {
return digest.Digest(d)
}
type taggedReference struct {
namedRepository
tag string
}
func (t taggedReference) String() string {
return t.Name() + ":" + t.tag
}
func (t taggedReference) Tag() string {
return t.tag
}
type canonicalReference struct {
namedRepository
digest digest.Digest
}
func (c canonicalReference) String() string {
return c.Name() + "@" + c.digest.String()
}
func (c canonicalReference) Digest() digest.Digest {
return c.digest
}
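A short sketch (not part of the vendored file): Parse hands back the most specific type that fits, per getBestReferenceType above, so callers downcast to Tagged or Canonical as needed.

package main

import (
	"fmt"

	"go.podman.io/image/v5/docker/reference"
)

func main() {
	r, err := reference.Parse("example.com:5000/ns/app:v1")
	if err != nil {
		panic(err)
	}
	t := r.(reference.NamedTagged) // name + tag, no digest
	fmt.Println(reference.Domain(t), reference.Path(t), t.Tag())
	// Output: example.com:5000 ns/app v1
}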

View File

@@ -0,0 +1,6 @@
package reference
// IsFullIdentifier returns true if the specified string fully matches `IdentifierRegexp`.
func IsFullIdentifier(s string) bool {
return anchoredIdentifierRegexp.MatchString(s)
}

156
vendor/go.podman.io/image/v5/docker/reference/regexp.go generated vendored Normal file
View File

@@ -0,0 +1,156 @@
package reference
import (
"regexp"
"strings"
storageRegexp "go.podman.io/storage/pkg/regexp"
)
const (
// alphaNumeric defines the alpha numeric atom, typically a
// component of names. This only allows lower case characters and digits.
alphaNumeric = `[a-z0-9]+`
// separator defines the separators allowed to be embedded in name
// components. This allows one period, one or two underscores, and multiple
// dashes. Repeated dashes and underscores are intentionally treated
// differently. In order to support valid hostnames as name components,
// support for repeated dashes was added. Additionally, a double underscore
// is now allowed as a separator to loosen the restriction for previously
// supported names.
separator = `(?:[._]|__|[-]*)`
// domainComponent restricts the registry domain component of a
// repository name to start with a component as defined by DomainRegexp
// and followed by an optional port.
domainComponent = `(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])`
// The string counterpart for TagRegexp.
tag = `[\w][\w.-]{0,127}`
// The string counterpart for DigestRegexp.
digestPat = `[A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][[:xdigit:]]{32,}`
// The string counterpart for IdentifierRegexp.
identifier = `([a-f0-9]{64})`
// The string counterpart for ShortIdentifierRegexp.
shortIdentifier = `([a-f0-9]{6,64})`
)
var (
// nameComponent restricts registry path component names to start
// with at least one letter or number, with following parts able to be
// separated by one period, one or two underscore and multiple dashes.
nameComponent = expression(
alphaNumeric,
optional(repeated(separator, alphaNumeric)))
domain = expression(
domainComponent,
optional(repeated(literal(`.`), domainComponent)),
optional(literal(`:`), `[0-9]+`))
// DomainRegexp defines the structure of potential domain components
// that may be part of image names. This is purposely a subset of what is
// allowed by DNS to ensure backwards compatibility with Docker image
// names.
DomainRegexp = re(domain)
// TagRegexp matches valid tag names. From docker/docker:graph/tags.go.
TagRegexp = re(tag)
anchoredTag = anchored(tag)
// anchoredTagRegexp matches valid tag names, anchored at the start and
// end of the matched string.
anchoredTagRegexp = storageRegexp.Delayed(anchoredTag)
// DigestRegexp matches valid digests.
DigestRegexp = re(digestPat)
anchoredDigest = anchored(digestPat)
// anchoredDigestRegexp matches valid digests, anchored at the start and
// end of the matched string.
anchoredDigestRegexp = storageRegexp.Delayed(anchoredDigest)
namePat = expression(
optional(domain, literal(`/`)),
nameComponent,
optional(repeated(literal(`/`), nameComponent)))
// NameRegexp is the format for the name component of references. The
// regexp has capturing groups for the domain and name part omitting
// the separating forward slash from either.
NameRegexp = re(namePat)
anchoredName = anchored(
optional(capture(domain), literal(`/`)),
capture(nameComponent,
optional(repeated(literal(`/`), nameComponent))))
// anchoredNameRegexp is used to parse a name value, capturing the
// domain and trailing components.
anchoredNameRegexp = storageRegexp.Delayed(anchoredName)
referencePat = anchored(capture(namePat),
optional(literal(":"), capture(tag)),
optional(literal("@"), capture(digestPat)))
// ReferenceRegexp is the full supported format of a reference. The regexp
// is anchored and has capturing groups for name, tag, and digest
// components.
ReferenceRegexp = re(referencePat)
// IdentifierRegexp is the format for string identifier used as a
// content addressable identifier using sha256. These identifiers
// are like digests without the algorithm, since sha256 is used.
IdentifierRegexp = re(identifier)
// ShortIdentifierRegexp is the format used to represent a prefix
// of an identifier. A prefix may be used to match a sha256 identifier
// within a list of trusted identifiers.
ShortIdentifierRegexp = re(shortIdentifier)
anchoredIdentifier = anchored(identifier)
// anchoredIdentifierRegexp is used to check or match an
// identifier value, anchored at start and end of string.
anchoredIdentifierRegexp = storageRegexp.Delayed(anchoredIdentifier)
)
// re compiles the string to a regular expression.
var re = regexp.MustCompile
// literal compiles s into a literal regular expression, escaping any regexp
// reserved characters.
func literal(s string) string {
return regexp.QuoteMeta(s)
}
// expression defines a full expression, where each regular expression must
// follow the previous.
func expression(res ...string) string {
return strings.Join(res, "")
}
// optional wraps the expression in a non-capturing group and makes the
// production optional.
func optional(res ...string) string {
return group(expression(res...)) + `?`
}
// repeated wraps the regexp in a non-capturing group to get one or more
// matches.
func repeated(res ...string) string {
return group(expression(res...)) + `+`
}
// group wraps the regexp in a non-capturing group.
func group(res ...string) string {
return `(?:` + expression(res...) + `)`
}
// capture wraps the expression in a capturing group.
func capture(res ...string) string {
return `(` + expression(res...) + `)`
}
// anchored anchors the regular expression by adding start and end delimiters.
func anchored(res ...string) string {
return `^` + expression(res...) + `$`
}
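A sketch (not part of the vendored file) of the three capture groups ReferenceRegexp exposes; absent components come back as empty strings.

package main

import (
	"fmt"

	"go.podman.io/image/v5/docker/reference"
)

func main() {
	m := reference.ReferenceRegexp.FindStringSubmatch("example.com:5000/ns/app:v1")
	fmt.Printf("name=%q tag=%q digest=%q\n", m[1], m[2], m[3])
	// name="example.com:5000/ns/app" tag="v1" digest=""
}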

303
vendor/go.podman.io/image/v5/docker/registries_d.go generated vendored Normal file
View File

@@ -0,0 +1,303 @@
package docker
import (
"errors"
"fmt"
"io/fs"
"net/url"
"os"
"path"
"path/filepath"
"strings"
"github.com/opencontainers/go-digest"
"github.com/sirupsen/logrus"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/rootless"
"go.podman.io/image/v5/types"
"go.podman.io/storage/pkg/fileutils"
"go.podman.io/storage/pkg/homedir"
"gopkg.in/yaml.v3"
)
// systemRegistriesDirPath is the path to registries.d, used for locating lookaside Docker signature storage.
// You can override this at build time with
// -ldflags '-X go.podman.io/image/v5/docker.systemRegistriesDirPath=$your_path'
var systemRegistriesDirPath = builtinRegistriesDirPath
// builtinRegistriesDirPath is the path to registries.d.
// DO NOT change this, instead see systemRegistriesDirPath above.
const builtinRegistriesDirPath = etcDir + "/containers/registries.d"
// userRegistriesDirPath is the path to the per user registries.d.
var userRegistriesDir = filepath.FromSlash(".config/containers/registries.d")
// defaultUserDockerDir is the default lookaside directory for unprivileged user
var defaultUserDockerDir = filepath.FromSlash(".local/share/containers/sigstore")
// defaultDockerDir is the default lookaside directory for root
var defaultDockerDir = "/var/lib/containers/sigstore"
// registryConfiguration is one of the files in registriesDirPath configuring lookaside locations, or the result of merging them all.
// NOTE: Keep this in sync with docs/registries.d.md!
type registryConfiguration struct {
DefaultDocker *registryNamespace `yaml:"default-docker"`
// The key is a namespace, using fully-expanded Docker reference format or parent namespaces (per dockerReference.PolicyConfiguration*),
Docker map[string]registryNamespace `yaml:"docker"`
}
// registryNamespace defines lookaside locations for a single namespace.
type registryNamespace struct {
Lookaside string `yaml:"lookaside"` // For reading, and if LookasideStaging is not present, for writing.
LookasideStaging string `yaml:"lookaside-staging"` // For writing only.
SigStore string `yaml:"sigstore"` // For compatibility, deprecated in favor of Lookaside.
SigStoreStaging string `yaml:"sigstore-staging"` // For compatibility, deprecated in favor of LookasideStaging.
UseSigstoreAttachments *bool `yaml:"use-sigstore-attachments,omitempty"`
}
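A hedged sketch (not part of the vendored file) of what a registries.d YAML file looks like and how it maps onto the structs above. The registry name and URLs are illustrative only, and the structs are mirrored locally to keep the example self-contained.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type namespace struct {
	Lookaside        string `yaml:"lookaside"`
	LookasideStaging string `yaml:"lookaside-staging"`
}

type config struct {
	DefaultDocker *namespace           `yaml:"default-docker"`
	Docker        map[string]namespace `yaml:"docker"`
}

const sample = `
default-docker:
  lookaside: https://sigstore.example.com
docker:
  registry.example.com/ns:
    lookaside-staging: file:///mnt/staging
`

func main() {
	var c config
	if err := yaml.Unmarshal([]byte(sample), &c); err != nil {
		panic(err)
	}
	fmt.Println(c.DefaultDocker.Lookaside)                            // https://sigstore.example.com
	fmt.Println(c.Docker["registry.example.com/ns"].LookasideStaging) // file:///mnt/staging
}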
// lookasideStorageBase is an "opaque" type representing a lookaside Docker signature storage.
// Users outside of this file should use SignatureStorageBaseURL and lookasideStorageURL below.
type lookasideStorageBase *url.URL
// SignatureStorageBaseURL reads configuration to find an appropriate lookaside storage URL for ref, for write access if “write”.
// The usage of the BaseURL is defined in docs/signature-protocols.md (lookaside storage separate from the docker/distribution registry).
// Warning: This function only exposes configuration in registries.d;
// just because this function returns a URL does not mean that the URL will be used by c/image/docker (e.g. if the registry natively supports X-R-S-S).
func SignatureStorageBaseURL(sys *types.SystemContext, ref types.ImageReference, write bool) (*url.URL, error) {
dr, ok := ref.(dockerReference)
if !ok {
return nil, errors.New("ref must be a dockerReference")
}
config, err := loadRegistryConfiguration(sys)
if err != nil {
return nil, err
}
return config.lookasideStorageBaseURL(dr, write)
}
// loadRegistryConfiguration returns a registryConfiguration appropriate for sys.
func loadRegistryConfiguration(sys *types.SystemContext) (*registryConfiguration, error) {
dirPath := registriesDirPath(sys)
logrus.Debugf(`Using registries.d directory %s`, dirPath)
return loadAndMergeConfig(dirPath)
}
// registriesDirPath returns a path to registries.d
func registriesDirPath(sys *types.SystemContext) string {
return registriesDirPathWithHomeDir(sys, homedir.Get())
}
// registriesDirPathWithHomeDir is an internal implementation detail of registriesDirPath,
// it exists only to allow testing it with an artificial home directory.
func registriesDirPathWithHomeDir(sys *types.SystemContext, homeDir string) string {
if sys != nil && sys.RegistriesDirPath != "" {
return sys.RegistriesDirPath
}
userRegistriesDirPath := filepath.Join(homeDir, userRegistriesDir)
if err := fileutils.Exists(userRegistriesDirPath); err == nil {
return userRegistriesDirPath
}
if sys != nil && sys.RootForImplicitAbsolutePaths != "" {
return filepath.Join(sys.RootForImplicitAbsolutePaths, systemRegistriesDirPath)
}
return systemRegistriesDirPath
}
// loadAndMergeConfig loads configuration files in dirPath
// FIXME: Probably rename to loadRegistryConfigurationForPath
func loadAndMergeConfig(dirPath string) (*registryConfiguration, error) {
mergedConfig := registryConfiguration{Docker: map[string]registryNamespace{}}
dockerDefaultMergedFrom := ""
nsMergedFrom := map[string]string{}
dir, err := os.Open(dirPath)
if err != nil {
if os.IsNotExist(err) {
return &mergedConfig, nil
}
return nil, err
}
configNames, err := dir.Readdirnames(0)
if err != nil {
return nil, err
}
for _, configName := range configNames {
if !strings.HasSuffix(configName, ".yaml") {
continue
}
configPath := filepath.Join(dirPath, configName)
configBytes, err := os.ReadFile(configPath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) {
// the file must have been removed between the directory listing
// and the open call; ignore that as it is an expected race
continue
}
return nil, err
}
var config registryConfiguration
err = yaml.Unmarshal(configBytes, &config)
if err != nil {
return nil, fmt.Errorf("parsing %s: %w", configPath, err)
}
if config.DefaultDocker != nil {
if mergedConfig.DefaultDocker != nil {
return nil, fmt.Errorf(`Error parsing signature storage configuration: "default-docker" defined both in %q and %q`,
dockerDefaultMergedFrom, configPath)
}
mergedConfig.DefaultDocker = config.DefaultDocker
dockerDefaultMergedFrom = configPath
}
for nsName, nsConfig := range config.Docker { // includes config.Docker == nil
if _, ok := mergedConfig.Docker[nsName]; ok {
return nil, fmt.Errorf(`Error parsing signature storage configuration: "docker" namespace %q defined both in %q and %q`,
nsName, nsMergedFrom[nsName], configPath)
}
mergedConfig.Docker[nsName] = nsConfig
nsMergedFrom[nsName] = configPath
}
}
return &mergedConfig, nil
}
// lookasideStorageBaseURL returns an appropriate signature storage URL for ref, for write access if “write”.
// The usage of the BaseURL is defined in docs/signature-protocols.md (lookaside storage separate from the docker/distribution registry).
func (config *registryConfiguration) lookasideStorageBaseURL(dr dockerReference, write bool) (*url.URL, error) {
topLevel := config.signatureTopLevel(dr, write)
var baseURL *url.URL
if topLevel != "" {
u, err := url.Parse(topLevel)
if err != nil {
return nil, fmt.Errorf("Invalid signature storage URL %s: %w", topLevel, err)
}
baseURL = u
} else {
// returns default directory if no lookaside specified in configuration file
baseURL = builtinDefaultLookasideStorageDir(rootless.GetRootlessEUID())
logrus.Debugf(" No signature storage configuration found for %s, using built-in default %s", dr.PolicyConfigurationIdentity(), baseURL.Redacted())
}
// NOTE: Keep this in sync with docs/signature-protocols.md!
// FIXME? Restrict to explicitly supported schemes?
repo := reference.Path(dr.ref) // Note that this is without a tag or digest.
if path.Clean(repo) != repo { // Coverage: This should not be reachable because /./ and /../ components are not valid in docker references
return nil, fmt.Errorf("Unexpected path elements in Docker reference %s for signature storage", dr.ref.String())
}
baseURL.Path = baseURL.Path + "/" + repo
return baseURL, nil
}
// builtinDefaultLookasideStorageDir returns the default signature storage URL for the given euid
func builtinDefaultLookasideStorageDir(euid int) *url.URL {
if euid != 0 {
return &url.URL{Scheme: "file", Path: filepath.Join(homedir.Get(), defaultUserDockerDir)}
}
return &url.URL{Scheme: "file", Path: defaultDockerDir}
}
// config.signatureTopLevel returns a URL string configured in config for ref, for write access if “write”.
// (the top level of the storage, namespaced by repo.FullName etc.), or "" if nothing has been configured.
func (config *registryConfiguration) signatureTopLevel(ref dockerReference, write bool) string {
if config.Docker != nil {
// Look for a full match.
identity := ref.PolicyConfigurationIdentity()
if ns, ok := config.Docker[identity]; ok {
logrus.Debugf(` Lookaside configuration: using "docker" namespace %s`, identity)
if ret := ns.signatureTopLevel(write); ret != "" {
return ret
}
}
// Look for a match of the possible parent namespaces.
for _, name := range ref.PolicyConfigurationNamespaces() {
if ns, ok := config.Docker[name]; ok {
logrus.Debugf(` Lookaside configuration: using "docker" namespace %s`, name)
if ret := ns.signatureTopLevel(write); ret != "" {
return ret
}
}
}
}
// Look for a default location
if config.DefaultDocker != nil {
logrus.Debugf(` Lookaside configuration: using "default-docker" configuration`)
if ret := config.DefaultDocker.signatureTopLevel(write); ret != "" {
return ret
}
}
return ""
}
// config.useSigstoreAttachments returns whether we should look for and write sigstore attachments
// for ref.
func (config *registryConfiguration) useSigstoreAttachments(ref dockerReference) bool {
if config.Docker != nil {
// Look for a full match.
identity := ref.PolicyConfigurationIdentity()
if ns, ok := config.Docker[identity]; ok {
logrus.Debugf(` Sigstore attachments: using "docker" namespace %s`, identity)
if ns.UseSigstoreAttachments != nil {
return *ns.UseSigstoreAttachments
}
}
// Look for a match of the possible parent namespaces.
for _, name := range ref.PolicyConfigurationNamespaces() {
if ns, ok := config.Docker[name]; ok {
logrus.Debugf(` Sigstore attachments: using "docker" namespace %s`, name)
if ns.UseSigstoreAttachments != nil {
return *ns.UseSigstoreAttachments
}
}
}
}
// Look for a default location
if config.DefaultDocker != nil {
logrus.Debugf(` Sigstore attachments: using "default-docker" configuration`)
if config.DefaultDocker.UseSigstoreAttachments != nil {
return *config.DefaultDocker.UseSigstoreAttachments
}
}
return false
}
// ns.signatureTopLevel returns a URL string configured in ns for ref, for write access if “write”,
// or "" if nothing has been configured.
func (ns registryNamespace) signatureTopLevel(write bool) string {
if write {
if ns.LookasideStaging != "" {
logrus.Debugf(` Using "lookaside-staging" %s`, ns.LookasideStaging)
return ns.LookasideStaging
}
if ns.SigStoreStaging != "" {
logrus.Debugf(` Using "sigstore-staging" %s`, ns.SigStoreStaging)
return ns.SigStoreStaging
}
}
if ns.Lookaside != "" {
logrus.Debugf(` Using "lookaside" %s`, ns.Lookaside)
return ns.Lookaside
}
if ns.SigStore != "" {
logrus.Debugf(` Using "sigstore" %s`, ns.SigStore)
return ns.SigStore
}
return ""
}
// lookasideStorageURL returns a URL usable for accessing the signature index in base with the known manifestDigest.
// The caller guarantees that base is not nil.
// NOTE: Keep this in sync with docs/signature-protocols.md!
func lookasideStorageURL(base lookasideStorageBase, manifestDigest digest.Digest, index int) (*url.URL, error) {
if err := manifestDigest.Validate(); err != nil { // digest.Digest.Encoded() panics on failure, and could possibly result in a path with ../, so validate explicitly.
return nil, err
}
sigURL := *base
sigURL.Path = fmt.Sprintf("%s@%s=%s/signature-%d", sigURL.Path, manifestDigest.Algorithm(), manifestDigest.Encoded(), index+1)
return &sigURL, nil
}
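A sketch (not part of the vendored file) reproducing the naming scheme lookasideStorageURL implements; the helper itself is unexported, and the base path here is just the built-in root default joined with an illustrative repository.

package main

import (
	"fmt"
	"net/url"

	"github.com/opencontainers/go-digest"
)

func main() {
	base := url.URL{Scheme: "file", Path: "/var/lib/containers/sigstore/ns/app"}
	dgst := digest.FromString("example manifest")
	// Mirrors: <base>@<algorithm>=<hex>/signature-<index+1>
	base.Path = fmt.Sprintf("%s@%s=%s/signature-%d", base.Path, dgst.Algorithm(), dgst.Encoded(), 0+1)
	fmt.Println(base.String())
}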

175
vendor/go.podman.io/image/v5/docker/wwwauthenticate.go generated vendored Normal file
View File

@@ -0,0 +1,175 @@
package docker
// Based on github.com/docker/distribution/registry/client/auth/authchallenge.go, primarily stripping unnecessary dependencies.
import (
"fmt"
"iter"
"net/http"
"strings"
)
// challenge carries information from a WWW-Authenticate response header.
// See RFC 7235.
type challenge struct {
// Scheme is the auth-scheme according to RFC 7235
Scheme string
// Parameters are the auth-params according to RFC 7235
Parameters map[string]string
}
// Octet types from RFC 7230.
type octetType byte
var octetTypes [256]octetType
const (
isToken octetType = 1 << iota
isSpace
)
func init() {
// OCTET = <any 8-bit sequence of data>
// CHAR = <any US-ASCII character (octets 0 - 127)>
// CTL = <any US-ASCII control character (octets 0 - 31) and DEL (127)>
// CR = <US-ASCII CR, carriage return (13)>
// LF = <US-ASCII LF, linefeed (10)>
// SP = <US-ASCII SP, space (32)>
// HT = <US-ASCII HT, horizontal-tab (9)>
// <"> = <US-ASCII double-quote mark (34)>
// CRLF = CR LF
// LWS = [CRLF] 1*( SP | HT )
// TEXT = <any OCTET except CTLs, but including LWS>
// separators = "(" | ")" | "<" | ">" | "@" | "," | ";" | ":" | "\" | <">
// | "/" | "[" | "]" | "?" | "=" | "{" | "}" | SP | HT
// token = 1*<any CHAR except CTLs or separators>
// qdtext = <any TEXT except <">>
for c := 0; c < 256; c++ {
var t octetType
isCtl := c <= 31 || c == 127
isChar := 0 <= c && c <= 127
isSeparator := strings.ContainsRune(" \t\"(),/:;<=>?@[]\\{}", rune(c))
if strings.ContainsRune(" \t\r\n", rune(c)) {
t |= isSpace
}
if isChar && !isCtl && !isSeparator {
t |= isToken
}
octetTypes[c] = t
}
}
func iterateAuthHeader(header http.Header) iter.Seq[challenge] {
return func(yield func(challenge) bool) {
for _, h := range header[http.CanonicalHeaderKey("WWW-Authenticate")] {
v, p := parseValueAndParams(h)
if v != "" {
if !yield(challenge{Scheme: v, Parameters: p}) {
return
}
}
}
}
}
// parseAuthScope parses an authentication scope string of the form `$resource:$remote:$actions`
func parseAuthScope(scopeStr string) (*authScope, error) {
if parts := strings.Split(scopeStr, ":"); len(parts) == 3 {
return &authScope{
resourceType: parts[0],
remoteName: parts[1],
actions: parts[2],
}, nil
}
return nil, fmt.Errorf("error parsing auth scope: '%s'", scopeStr)
}
// NOTE: This is not a fully compliant parser per RFC 7235:
// Most notably it does not support more than one challenge within a single header
// Some of the whitespace parsing also seems noncompliant.
// But it is clearly better than what we used to have…
func parseValueAndParams(header string) (value string, params map[string]string) {
params = make(map[string]string)
value, s := expectToken(header)
if value == "" {
return
}
value = strings.ToLower(value)
s = "," + skipSpace(s)
for strings.HasPrefix(s, ",") {
var pkey string
pkey, s = expectToken(skipSpace(s[1:]))
if pkey == "" {
return
}
if !strings.HasPrefix(s, "=") {
return
}
var pvalue string
pvalue, s = expectTokenOrQuoted(s[1:])
if pvalue == "" {
return
}
pkey = strings.ToLower(pkey)
params[pkey] = pvalue
s = skipSpace(s)
}
return
}
func skipSpace(s string) (rest string) {
i := 0
for ; i < len(s); i++ {
if octetTypes[s[i]]&isSpace == 0 {
break
}
}
return s[i:]
}
func expectToken(s string) (token, rest string) {
i := 0
for ; i < len(s); i++ {
if octetTypes[s[i]]&isToken == 0 {
break
}
}
return s[:i], s[i:]
}
func expectTokenOrQuoted(s string) (value string, rest string) {
if !strings.HasPrefix(s, "\"") {
return expectToken(s)
}
s = s[1:]
for i := 0; i < len(s); i++ {
switch s[i] {
case '"':
return s[:i], s[i+1:]
case '\\':
p := make([]byte, len(s)-1)
j := copy(p, s[:i])
escape := true
for i++; i < len(s); i++ {
b := s[i]
switch {
case escape:
escape = false
p[j] = b
j++
case b == '\\':
escape = true
case b == '"':
return string(p[:j]), s[i+1:]
default:
p[j] = b
j++
}
}
return "", ""
}
}
return "", ""
}
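A standalone sketch (not part of the vendored file) of the `$resource:$remote:$actions` scope format parseAuthScope above expects; the scope string is a made-up example.

package main

import (
	"fmt"
	"strings"
)

func main() {
	parts := strings.Split("repository:ns/app:pull,push", ":")
	if len(parts) != 3 {
		panic("unexpected scope format")
	}
	fmt.Printf("resource=%s remote=%s actions=%s\n", parts[0], parts[1], parts[2])
	// resource=repository remote=ns/app actions=pull,push
}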

View File

@@ -0,0 +1,55 @@
package blobinfocache
import (
digest "github.com/opencontainers/go-digest"
"go.podman.io/image/v5/types"
)
// FromBlobInfoCache returns a BlobInfoCache2 based on a BlobInfoCache, returning the original
// object if it implements BlobInfoCache2, or a wrapper which discards compression information
// if it only implements BlobInfoCache.
func FromBlobInfoCache(bic types.BlobInfoCache) BlobInfoCache2 {
if bic2, ok := bic.(BlobInfoCache2); ok {
return bic2
}
return &v1OnlyBlobInfoCache{
BlobInfoCache: bic,
}
}
type v1OnlyBlobInfoCache struct {
types.BlobInfoCache
}
func (bic *v1OnlyBlobInfoCache) Open() {
}
func (bic *v1OnlyBlobInfoCache) Close() {
}
func (bic *v1OnlyBlobInfoCache) UncompressedDigestForTOC(tocDigest digest.Digest) digest.Digest {
return ""
}
func (bic *v1OnlyBlobInfoCache) RecordTOCUncompressedPair(tocDigest digest.Digest, uncompressed digest.Digest) {
}
func (bic *v1OnlyBlobInfoCache) RecordDigestCompressorData(anyDigest digest.Digest, data DigestCompressorData) {
}
func (bic *v1OnlyBlobInfoCache) CandidateLocations2(transport types.ImageTransport, scope types.BICTransportScope, digest digest.Digest, options CandidateLocations2Options) []BICReplacementCandidate2 {
return nil
}
// CandidateLocationsFromV2 converts a slice of BICReplacementCandidate2 to a slice of
// types.BICReplacementCandidate, dropping compression information.
func CandidateLocationsFromV2(v2candidates []BICReplacementCandidate2) []types.BICReplacementCandidate {
candidates := make([]types.BICReplacementCandidate, 0, len(v2candidates))
for _, c := range v2candidates {
candidates = append(candidates, types.BICReplacementCandidate{
Digest: c.Digest,
Location: c.Location,
})
}
return candidates
}

View File

@@ -0,0 +1,81 @@
package blobinfocache
import (
digest "github.com/opencontainers/go-digest"
compressiontypes "go.podman.io/image/v5/pkg/compression/types"
"go.podman.io/image/v5/types"
)
const (
// Uncompressed is the value we store in a blob info cache to indicate that we know that
// the blob in the corresponding location is not compressed.
Uncompressed = "uncompressed"
// UnknownCompression is the value we store in a blob info cache to indicate that we don't
// know if the blob in the corresponding location is compressed (and if so, how) or not.
UnknownCompression = "unknown"
)
// BlobInfoCache2 extends BlobInfoCache by adding the ability to track information about what kind
// of compression was applied to the blobs it keeps information about.
type BlobInfoCache2 interface {
types.BlobInfoCache
// Open() sets up the cache for future accesses, potentially acquiring costly state. Each Open() must be paired with a Close().
// Note that public callers may call the types.BlobInfoCache operations without Open()/Close().
Open()
// Close destroys state created by Open().
Close()
// UncompressedDigestForTOC returns an uncompressed digest corresponding to tocDigest.
// Returns "" if the uncompressed digest is unknown.
UncompressedDigestForTOC(tocDigest digest.Digest) digest.Digest
// RecordTOCUncompressedPair records that the tocDigest corresponds to uncompressed.
// WARNING: Only call this for LOCALLY VERIFIED data; don't record a digest pair just because some remote author claims so (e.g.
// because a manifest/config pair exists); otherwise the cache could be poisoned and allow substituting unexpected blobs.
// (Eventually, the DiffIDs in image config could detect the substitution, but that may be too late, and not all image formats contain that data.)
RecordTOCUncompressedPair(tocDigest digest.Digest, uncompressed digest.Digest)
// RecordDigestCompressorData records data for the blob with the specified digest.
// WARNING: Only call this with LOCALLY VERIFIED data:
// - don't record a compressor for a digest just because some remote author claims so
// (e.g. because a manifest says so);
// - don't record the non-base variant or annotations if we are not _sure_ that the base variant
// and the blob's digest match the non-base variant's annotations (e.g. because we saw them
// in a manifest)
// otherwise the cache could be poisoned and cause us to make incorrect edits to type
// information in a manifest.
RecordDigestCompressorData(anyDigest digest.Digest, data DigestCompressorData)
// CandidateLocations2 returns a prioritized, limited number of blobs and their locations (if known)
// that could possibly be reused within the specified transport scope (if they still
// exist, which is not guaranteed).
CandidateLocations2(transport types.ImageTransport, scope types.BICTransportScope, digest digest.Digest, options CandidateLocations2Options) []BICReplacementCandidate2
}
// DigestCompressorData is information known about how a blob is compressed.
// (This is worded generically, but basically targeted at the zstd / zstd:chunked situation.)
type DigestCompressorData struct {
BaseVariantCompressor string // A compressor's base variant name, or Uncompressed or UnknownCompression.
// The following fields are only valid if the base variant is neither Uncompressed nor UnknownCompression:
SpecificVariantCompressor string // A non-base variant compressor (or UnknownCompression if the true format is just the base variant)
SpecificVariantAnnotations map[string]string // Annotations required to benefit from the specific variant.
}
// CandidateLocations2Options are used in CandidateLocations2.
type CandidateLocations2Options struct {
// If !CanSubstitute, the returned candidates will match the submitted digest exactly; if
// CanSubstitute, data from previous RecordDigestUncompressedPair calls is used to also look
// up variants of the blob which have the same uncompressed digest.
CanSubstitute bool
PossibleManifestFormats []string // If set, a set of possible manifest formats; at least one should support the reused layer
RequiredCompression *compressiontypes.Algorithm // If set, only reuse layers with a matching algorithm
}
// BICReplacementCandidate2 is an item returned by BlobInfoCache2.CandidateLocations2.
type BICReplacementCandidate2 struct {
Digest digest.Digest
CompressionOperation types.LayerCompression // Either types.Decompress for uncompressed, or types.Compress for compressed
CompressionAlgorithm *compressiontypes.Algorithm // An algorithm when the candidate is compressed, or nil when it is uncompressed
CompressionAnnotations map[string]string // If necessary, annotations necessary to use CompressionAlgorithm
UnknownLocation bool // is true when `Location` for this blob is not set
Location types.BICLocationReference // not set if UnknownLocation is set to `true`
}
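An illustrative sketch only (this package is internal to the module, so the struct is mirrored locally, and the annotation key is a hypothetical placeholder): how the zstd / zstd:chunked situation maps onto DigestCompressorData.

package main

import "fmt"

type digestCompressorData struct {
	BaseVariantCompressor      string
	SpecificVariantCompressor  string
	SpecificVariantAnnotations map[string]string
}

func main() {
	// zstd is the base variant; the chunked-specific metadata needed to
	// consume the blob travels in annotations.
	data := digestCompressorData{
		BaseVariantCompressor:     "zstd",
		SpecificVariantCompressor: "zstd:chunked",
		SpecificVariantAnnotations: map[string]string{
			"example.chunked.annotation": "placeholder", // hypothetical key
		},
	}
	fmt.Printf("%+v\n", data)
}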

View File

@@ -0,0 +1,34 @@
package image
import (
"context"
"fmt"
"go.podman.io/image/v5/internal/manifest"
"go.podman.io/image/v5/types"
)
func manifestSchema2FromManifestList(ctx context.Context, sys *types.SystemContext, src types.ImageSource, manblob []byte) (genericManifest, error) {
list, err := manifest.Schema2ListFromManifest(manblob)
if err != nil {
return nil, fmt.Errorf("parsing schema2 manifest list: %w", err)
}
targetManifestDigest, err := list.ChooseInstance(sys)
if err != nil {
return nil, fmt.Errorf("choosing image instance: %w", err)
}
manblob, mt, err := src.GetManifest(ctx, &targetManifestDigest)
if err != nil {
return nil, fmt.Errorf("fetching target platform image selected from manifest list: %w", err)
}
matches, err := manifest.MatchesDigest(manblob, targetManifestDigest)
if err != nil {
return nil, fmt.Errorf("computing manifest digest: %w", err)
}
if !matches {
return nil, fmt.Errorf("Image manifest does not match selected manifest digest %s", targetManifestDigest)
}
return manifestInstanceFromBlob(ctx, sys, src, manblob, mt)
}

View File

@@ -0,0 +1,257 @@
package image
import (
"context"
"fmt"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/manifest"
"go.podman.io/image/v5/types"
)
type manifestSchema1 struct {
m *manifest.Schema1
}
func manifestSchema1FromManifest(manifestBlob []byte) (genericManifest, error) {
m, err := manifest.Schema1FromManifest(manifestBlob)
if err != nil {
return nil, err
}
return &manifestSchema1{m: m}, nil
}
// manifestSchema1FromComponents builds a new manifestSchema1 from the supplied data.
func manifestSchema1FromComponents(ref reference.Named, fsLayers []manifest.Schema1FSLayers, history []manifest.Schema1History, architecture string) (genericManifest, error) {
m, err := manifest.Schema1FromComponents(ref, fsLayers, history, architecture)
if err != nil {
return nil, err
}
return &manifestSchema1{m: m}, nil
}
func (m *manifestSchema1) serialize() ([]byte, error) {
return m.m.Serialize()
}
func (m *manifestSchema1) manifestMIMEType() string {
return manifest.DockerV2Schema1SignedMediaType
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema1) ConfigInfo() types.BlobInfo {
return m.m.ConfigInfo()
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema1) ConfigBlob(context.Context) ([]byte, error) {
return nil, nil
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestSchema1) OCIConfig(ctx context.Context) (*imgspecv1.Image, error) {
v2s2, err := m.convertToManifestSchema2(ctx, &types.ManifestUpdateOptions{})
if err != nil {
return nil, err
}
return v2s2.OCIConfig(ctx)
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema1) LayerInfos() []types.BlobInfo {
return manifestLayerInfosToBlobInfos(m.m.LayerInfos())
}
// EmbeddedDockerReferenceConflicts reports whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestSchema1) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
// This is a bit convoluted: We can't just have a "get embedded docker reference" method
// and have the “does it conflict” logic in the generic copy code, because the manifest does not actually
// embed a full docker/distribution reference, but only the repo name and tag (without the host name).
// So we would have to provide a “return repo without host name, and tag” getter for the generic code,
// which would be very awkward. Instead, we do the matching here in schema1-specific code, and all the
// generic copy code needs to know about is reference.Named and that a manifest may need updating
// for some destinations.
name := reference.Path(ref)
var tag string
if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
tag = tagged.Tag()
} else {
tag = ""
}
return m.m.Name != name || m.m.Tag != tag
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *manifestSchema1) Inspect(context.Context) (*types.ImageInspectInfo, error) {
return m.m.Inspect(nil)
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive to compute
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return (options.ManifestMIMEType == manifest.DockerV2Schema2MediaType || options.ManifestMIMEType == imgspecv1.MediaTypeImageManifest)
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
func (m *manifestSchema1) UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error) {
copy := manifestSchema1{m: manifest.Schema1Clone(m.m)}
// We have 2 MIME types for schema 1, which are basically equivalent (even the un-"Signed" MIME type will be rejected if there isn't a signature); so,
// handle conversions between them by doing nothing.
if options.ManifestMIMEType != manifest.DockerV2Schema1MediaType && options.ManifestMIMEType != manifest.DockerV2Schema1SignedMediaType {
converted, err := convertManifestIfRequiredWithUpdate(ctx, options, map[string]manifestConvertFn{
imgspecv1.MediaTypeImageManifest: copy.convertToManifestOCI1,
manifest.DockerV2Schema2MediaType: copy.convertToManifestSchema2Generic,
})
if err != nil {
return nil, err
}
if converted != nil {
return converted, nil
}
}
// No conversion required, update manifest
if options.LayerInfos != nil {
if err := copy.m.UpdateLayerInfos(options.LayerInfos); err != nil {
return nil, err
}
}
if options.EmbeddedDockerReference != nil {
copy.m.Name = reference.Path(options.EmbeddedDockerReference)
if tagged, isTagged := options.EmbeddedDockerReference.(reference.NamedTagged); isTagged {
copy.m.Tag = tagged.Tag()
} else {
copy.m.Tag = ""
}
}
return memoryImageFromManifest(&copy), nil
}
// convertToManifestSchema2Generic returns a genericManifest implementation converted to manifest.DockerV2Schema2MediaType.
// It may use options.InformationOnly and also adjust *options to be appropriate for editing the returned
// value.
// This does not change the state of the original manifestSchema1 object.
//
// We need this function just because a function returning an implementation of the genericManifest
// interface is not automatically assignable to a function type returning the genericManifest interface
func (m *manifestSchema1) convertToManifestSchema2Generic(ctx context.Context, options *types.ManifestUpdateOptions) (genericManifest, error) {
return m.convertToManifestSchema2(ctx, options)
}
// convertToManifestSchema2 returns a genericManifest implementation converted to manifest.DockerV2Schema2MediaType.
// It may use options.InformationOnly and also adjust *options to be appropriate for editing the returned
// value.
// This does not change the state of the original manifestSchema1 object.
//
// Based on github.com/docker/docker/distribution/pull_v2.go
func (m *manifestSchema1) convertToManifestSchema2(_ context.Context, options *types.ManifestUpdateOptions) (*manifestSchema2, error) {
uploadedLayerInfos := options.InformationOnly.LayerInfos
layerDiffIDs := options.InformationOnly.LayerDiffIDs
if len(m.m.ExtractedV1Compatibility) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on FSLayers[0] and ExtractedV1Compatibility[0] existing.
return nil, fmt.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema2MediaType)
}
if len(m.m.ExtractedV1Compatibility) != len(m.m.FSLayers) {
return nil, fmt.Errorf("Inconsistent schema 1 manifest: %d history entries, %d fsLayers entries", len(m.m.ExtractedV1Compatibility), len(m.m.FSLayers))
}
if uploadedLayerInfos != nil && len(uploadedLayerInfos) != len(m.m.FSLayers) {
return nil, fmt.Errorf("Internal error: uploaded %d blobs, but schema1 manifest has %d fsLayers", len(uploadedLayerInfos), len(m.m.FSLayers))
}
if layerDiffIDs != nil && len(layerDiffIDs) != len(m.m.FSLayers) {
return nil, fmt.Errorf("Internal error: collected %d DiffID values, but schema1 manifest has %d fsLayers", len(layerDiffIDs), len(m.m.FSLayers))
}
var convertedLayerUpdates []types.BlobInfo // Only used if options.LayerInfos != nil
if options.LayerInfos != nil {
if len(options.LayerInfos) != len(m.m.FSLayers) {
return nil, fmt.Errorf("Error converting image: layer edits for %d layers vs %d existing layers",
len(options.LayerInfos), len(m.m.FSLayers))
}
convertedLayerUpdates = []types.BlobInfo{}
}
// Build a list of the diffIDs for the non-empty layers.
diffIDs := []digest.Digest{}
var layers []manifest.Schema2Descriptor
for v1Index := len(m.m.ExtractedV1Compatibility) - 1; v1Index >= 0; v1Index-- {
v2Index := (len(m.m.ExtractedV1Compatibility) - 1) - v1Index
if !m.m.ExtractedV1Compatibility[v1Index].ThrowAway {
var size int64
if uploadedLayerInfos != nil {
size = uploadedLayerInfos[v2Index].Size
}
var d digest.Digest
if layerDiffIDs != nil {
d = layerDiffIDs[v2Index]
}
layers = append(layers, manifest.Schema2Descriptor{
MediaType: manifest.DockerV2Schema2LayerMediaType,
Size: size,
Digest: m.m.FSLayers[v1Index].BlobSum,
})
if options.LayerInfos != nil {
convertedLayerUpdates = append(convertedLayerUpdates, options.LayerInfos[v2Index])
}
diffIDs = append(diffIDs, d)
}
}
configJSON, err := m.m.ToSchema2Config(diffIDs)
if err != nil {
return nil, err
}
configDescriptor := manifest.Schema2Descriptor{
MediaType: manifest.DockerV2Schema2ConfigMediaType,
Size: int64(len(configJSON)),
Digest: digest.FromBytes(configJSON),
}
if options.LayerInfos != nil {
options.LayerInfos = convertedLayerUpdates
}
return manifestSchema2FromComponents(configDescriptor, nil, configJSON, layers), nil
}
// convertToManifestOCI1 returns a genericManifest implementation converted to imgspecv1.MediaTypeImageManifest.
// It may use options.InformationOnly and also adjust *options to be appropriate for editing the returned
// value.
// This does not change the state of the original manifestSchema1 object.
func (m *manifestSchema1) convertToManifestOCI1(ctx context.Context, options *types.ManifestUpdateOptions) (genericManifest, error) {
// We can't directly convert to OCI, but we can transitively convert via a Docker V2.2 Distribution manifest
m2, err := m.convertToManifestSchema2(ctx, options)
if err != nil {
return nil, err
}
return m2.convertToManifestOCI1(ctx, options)
}
// SupportsEncryption returns whether encryption is supported for the manifest type
func (m *manifestSchema1) SupportsEncryption(context.Context) bool {
return false
}
// CanChangeLayerCompression returns true if we can compress/decompress layers with mimeType in the current image
// (and the code can handle that).
// NOTE: Even if this returns true, the relevant format might not accept all compression algorithms; the set of accepted
// algorithms depends not on the current format, but possibly on the target of a conversion (if UpdatedImage converts
// to a different manifest format).
func (m *manifestSchema1) CanChangeLayerCompression(mimeType string) bool {
return true // There are no MIME types in the manifest, so we must assume a valid image.
}


@@ -0,0 +1,413 @@
package image
import (
"bytes"
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"strings"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/sirupsen/logrus"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/iolimits"
"go.podman.io/image/v5/manifest"
"go.podman.io/image/v5/pkg/blobinfocache/none"
"go.podman.io/image/v5/types"
)
// GzippedEmptyLayer is a gzip-compressed version of an empty tar file (1024 NULL bytes).
// This comes from github.com/docker/distribution/manifest/schema1/config_builder.go; there is
// a non-zero embedded timestamp; we could zero that, but that would just waste storage space
// in registries, so let's use the same values.
//
// This is publicly visible as c/image/image.GzippedEmptyLayer.
var GzippedEmptyLayer = []byte{
31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
}
// GzippedEmptyLayerDigest is a digest of GzippedEmptyLayer
//
// This is publicly visible as c/image/image.GzippedEmptyLayerDigest.
const GzippedEmptyLayerDigest = digest.Digest("sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4")
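// verifyGzippedEmptyLayerDigest is an editorial sketch, not part of the upstream
// file: it checks that the embedded bytes above really hash to the published
// digest, using only identifiers already imported by this file.
func verifyGzippedEmptyLayerDigest() error {
if d := digest.FromBytes(GzippedEmptyLayer); d != GzippedEmptyLayerDigest {
return fmt.Errorf("GzippedEmptyLayer hashes to %s, expected %s", d, GzippedEmptyLayerDigest)
}
return nil
}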
type manifestSchema2 struct {
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of ConfigDescriptor.
m *manifest.Schema2
}
func manifestSchema2FromManifest(src types.ImageSource, manifestBlob []byte) (genericManifest, error) {
m, err := manifest.Schema2FromManifest(manifestBlob)
if err != nil {
return nil, err
}
return &manifestSchema2{
src: src,
m: m,
}, nil
}
// manifestSchema2FromComponents builds a new manifestSchema2 from the supplied data:
func manifestSchema2FromComponents(config manifest.Schema2Descriptor, src types.ImageSource, configBlob []byte, layers []manifest.Schema2Descriptor) *manifestSchema2 {
return &manifestSchema2{
src: src,
configBlob: configBlob,
m: manifest.Schema2FromComponents(config, layers),
}
}
func (m *manifestSchema2) serialize() ([]byte, error) {
return m.m.Serialize()
}
func (m *manifestSchema2) manifestMIMEType() string {
return m.m.MediaType
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestSchema2) ConfigInfo() types.BlobInfo {
return m.m.ConfigInfo()
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestSchema2) OCIConfig(ctx context.Context) (*imgspecv1.Image, error) {
configBlob, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
// docker v2s2 and OCI v1 are mostly compatible but v2s2 contains more fields
// than OCI v1. This unmarshal makes sure we drop docker v2s2
// fields that aren't needed in OCI v1.
configOCI := &imgspecv1.Image{}
if err := json.Unmarshal(configBlob, configOCI); err != nil {
return nil, err
}
return configOCI, nil
}
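// exampleDropSchema2OnlyFields is an editorial sketch, not part of the upstream
// file: it demonstrates the comment above: unmarshalling into the narrower
// imgspecv1.Image silently discards keys that exist only in docker v2s2 configs
// ("container_config" is used here as an illustrative v2s2-only key).
func exampleDropSchema2OnlyFields() (*imgspecv1.Image, error) {
blob := []byte(`{"architecture":"amd64","os":"linux","container_config":{"Cmd":["sh"]}}`)
configOCI := &imgspecv1.Image{}
if err := json.Unmarshal(blob, configOCI); err != nil {
return nil, err
}
// configOCI now carries only OCI v1 fields; re-marshalling would not reproduce "container_config".
return configOCI, nil
}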
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestSchema2) ConfigBlob(ctx context.Context) ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, fmt.Errorf("Internal error: neither src nor configBlob set in manifestSchema2")
}
stream, _, err := m.src.GetBlob(ctx, manifest.BlobInfoFromSchema2Descriptor(m.m.ConfigDescriptor), none.NoCache)
if err != nil {
return nil, err
}
defer stream.Close()
blob, err := iolimits.ReadAtMost(stream, iolimits.MaxConfigBodySize)
if err != nil {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.m.ConfigDescriptor.Digest {
return nil, fmt.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.m.ConfigDescriptor.Digest)
}
m.configBlob = blob
}
return m.configBlob, nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestSchema2) LayerInfos() []types.BlobInfo {
return manifestLayerInfosToBlobInfos(m.m.LayerInfos())
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestSchema2) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
return false
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *manifestSchema2) Inspect(ctx context.Context) (*types.ImageInspectInfo, error) {
getter := func(info types.BlobInfo) ([]byte, error) {
if info.Digest != m.ConfigInfo().Digest {
// Shouldn't ever happen
return nil, errors.New("asked for a different config blob")
}
config, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
return config, nil
}
return m.m.Inspect(getter)
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema2) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return false
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
// The returned error will be a manifest.ManifestLayerCompressionIncompatibilityError
// if the CompressionOperation and CompressionAlgorithm specified in one or more
// options.LayerInfos items is anything other than gzip.
func (m *manifestSchema2) UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error) {
copy := manifestSchema2{ // NOTE: This is not a deep copy, it still shares slices etc.
src: m.src,
configBlob: m.configBlob,
m: manifest.Schema2Clone(m.m),
}
converted, err := convertManifestIfRequiredWithUpdate(ctx, options, map[string]manifestConvertFn{
manifest.DockerV2Schema1MediaType: copy.convertToManifestSchema1,
manifest.DockerV2Schema1SignedMediaType: copy.convertToManifestSchema1,
imgspecv1.MediaTypeImageManifest: copy.convertToManifestOCI1,
})
if err != nil {
return nil, err
}
if converted != nil {
return converted, nil
}
// No conversion required, update manifest
if options.LayerInfos != nil {
if err := copy.m.UpdateLayerInfos(options.LayerInfos); err != nil {
return nil, err
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1 to schema2, but we really don't care.
return memoryImageFromManifest(&copy), nil
}
func oci1DescriptorFromSchema2Descriptor(d manifest.Schema2Descriptor) imgspecv1.Descriptor {
return imgspecv1.Descriptor{
MediaType: d.MediaType,
Size: d.Size,
Digest: d.Digest,
URLs: d.URLs,
}
}
// convertToManifestOCI1 returns a genericManifest implementation converted to imgspecv1.MediaTypeImageManifest.
// It may use options.InformationOnly and also adjust *options to be appropriate for editing the returned
// value.
// This does not change the state of the original manifestSchema2 object.
func (m *manifestSchema2) convertToManifestOCI1(ctx context.Context, _ *types.ManifestUpdateOptions) (genericManifest, error) {
configOCI, err := m.OCIConfig(ctx)
if err != nil {
return nil, err
}
configOCIBytes, err := json.Marshal(configOCI)
if err != nil {
return nil, err
}
config := imgspecv1.Descriptor{
MediaType: imgspecv1.MediaTypeImageConfig,
Size: int64(len(configOCIBytes)),
Digest: digest.FromBytes(configOCIBytes),
}
layers := make([]imgspecv1.Descriptor, len(m.m.LayersDescriptors))
for idx := range layers {
layers[idx] = oci1DescriptorFromSchema2Descriptor(m.m.LayersDescriptors[idx])
switch m.m.LayersDescriptors[idx].MediaType {
case manifest.DockerV2Schema2ForeignLayerMediaType:
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerNonDistributable //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
case manifest.DockerV2Schema2ForeignLayerMediaTypeGzip:
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerNonDistributableGzip //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
case manifest.DockerV2SchemaLayerMediaTypeUncompressed:
layers[idx].MediaType = imgspecv1.MediaTypeImageLayer
case manifest.DockerV2Schema2LayerMediaType:
layers[idx].MediaType = imgspecv1.MediaTypeImageLayerGzip
default:
return nil, fmt.Errorf("Unknown media type during manifest conversion: %q", m.m.LayersDescriptors[idx].MediaType)
}
}
return manifestOCI1FromComponents(config, m.src, configOCIBytes, layers), nil
}
// convertToManifestSchema1 returns a genericManifest implementation converted to manifest.DockerV2Schema1{Signed,}MediaType.
// It may use options.InformationOnly and also adjust *options to be appropriate for editing the returned
// value.
// This does not change the state of the original manifestSchema2 object.
//
// Based on docker/distribution/manifest/schema1/config_builder.go
func (m *manifestSchema2) convertToManifestSchema1(ctx context.Context, options *types.ManifestUpdateOptions) (genericManifest, error) {
dest := options.InformationOnly.Destination
var convertedLayerUpdates []types.BlobInfo // Only used if options.LayerInfos != nil
if options.LayerInfos != nil {
if len(options.LayerInfos) != len(m.m.LayersDescriptors) {
return nil, fmt.Errorf("Error converting image: layer edits for %d layers vs %d existing layers",
len(options.LayerInfos), len(m.m.LayersDescriptors))
}
convertedLayerUpdates = []types.BlobInfo{}
}
configBytes, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
imageConfig := &manifest.Schema2Image{}
if err := json.Unmarshal(configBytes, imageConfig); err != nil {
return nil, err
}
// Build fsLayers and History, discarding all configs. We will patch the top-level config in later.
fsLayers := make([]manifest.Schema1FSLayers, len(imageConfig.History))
history := make([]manifest.Schema1History, len(imageConfig.History))
nonemptyLayerIndex := 0
var parentV1ID string // Set in the loop
v1ID := ""
haveGzippedEmptyLayer := false
if len(imageConfig.History) == 0 {
// What would this even mean?! Anyhow, the rest of the code depends on fsLayers[0] and history[0] existing.
return nil, fmt.Errorf("Cannot convert an image with 0 history entries to %s", manifest.DockerV2Schema1SignedMediaType)
}
for v2Index, historyEntry := range imageConfig.History {
parentV1ID = v1ID
v1Index := len(imageConfig.History) - 1 - v2Index
var blobDigest digest.Digest
if historyEntry.EmptyLayer {
emptyLayerBlobInfo := types.BlobInfo{Digest: GzippedEmptyLayerDigest, Size: int64(len(GzippedEmptyLayer))}
if !haveGzippedEmptyLayer {
logrus.Debugf("Uploading empty layer during conversion to schema 1")
// Ideally we should update the relevant BlobInfoCache about this layer, but that would require passing it down here,
// and anyway this blob is so small that it's easier to just copy it than to worry about figuring out another location where to get it.
info, err := dest.PutBlob(ctx, bytes.NewReader(GzippedEmptyLayer), emptyLayerBlobInfo, none.NoCache, false)
if err != nil {
return nil, fmt.Errorf("uploading empty layer: %w", err)
}
if info.Digest != emptyLayerBlobInfo.Digest {
return nil, fmt.Errorf("Internal error: Uploaded empty layer has digest %#v instead of %s", info.Digest, emptyLayerBlobInfo.Digest)
}
haveGzippedEmptyLayer = true
}
if options.LayerInfos != nil {
convertedLayerUpdates = append(convertedLayerUpdates, emptyLayerBlobInfo)
}
blobDigest = emptyLayerBlobInfo.Digest
} else {
if nonemptyLayerIndex >= len(m.m.LayersDescriptors) {
return nil, fmt.Errorf("Invalid image configuration, needs more than the %d distributed layers", len(m.m.LayersDescriptors))
}
if options.LayerInfos != nil {
convertedLayerUpdates = append(convertedLayerUpdates, options.LayerInfos[nonemptyLayerIndex])
}
blobDigest = m.m.LayersDescriptors[nonemptyLayerIndex].Digest
nonemptyLayerIndex++
}
// AFAICT pull ignores these ID values, at least nowadays, so we could use anything unique, including a simple counter. Use what Docker uses for cargo-cult consistency.
v, err := v1IDFromBlobDigestAndComponents(blobDigest, parentV1ID)
if err != nil {
return nil, err
}
v1ID = v
fakeImage := manifest.Schema1V1Compatibility{
ID: v1ID,
Parent: parentV1ID,
Comment: historyEntry.Comment,
Created: historyEntry.Created,
Author: historyEntry.Author,
ThrowAway: historyEntry.EmptyLayer,
}
fakeImage.ContainerConfig.Cmd = []string{historyEntry.CreatedBy}
v1CompatibilityBytes, err := json.Marshal(&fakeImage)
if err != nil {
return nil, fmt.Errorf("Internal error: Error creating v1compatibility for %#v", fakeImage)
}
fsLayers[v1Index] = manifest.Schema1FSLayers{BlobSum: blobDigest}
history[v1Index] = manifest.Schema1History{V1Compatibility: string(v1CompatibilityBytes)}
// Note that parentV1ID of the top layer is preserved when exiting this loop
}
// Now patch in real configuration for the top layer (v1Index == 0)
v1ID, err = v1IDFromBlobDigestAndComponents(fsLayers[0].BlobSum, parentV1ID, string(configBytes)) // See above WRT v1ID value generation and cargo-cult consistency.
if err != nil {
return nil, err
}
v1Config, err := v1ConfigFromConfigJSON(configBytes, v1ID, parentV1ID, imageConfig.History[len(imageConfig.History)-1].EmptyLayer)
if err != nil {
return nil, err
}
history[0].V1Compatibility = string(v1Config)
if options.LayerInfos != nil {
options.LayerInfos = convertedLayerUpdates
}
m1, err := manifestSchema1FromComponents(dest.Reference().DockerReference(), fsLayers, history, imageConfig.Architecture)
if err != nil {
return nil, err // This should never happen, we should have created all the components correctly.
}
return m1, nil
}
func v1IDFromBlobDigestAndComponents(blobDigest digest.Digest, others ...string) (string, error) {
if err := blobDigest.Validate(); err != nil {
return "", err
}
parts := append([]string{blobDigest.Encoded()}, others...)
v1IDHash := sha256.Sum256([]byte(strings.Join(parts, " ")))
return hex.EncodeToString(v1IDHash[:]), nil
}
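// exampleV1IDChain is an editorial sketch, not part of the upstream file: it
// mirrors how the conversion loop above chains IDs. Each layer's v1 ID is the
// hex SHA-256 of its blob digest's encoded part joined (space-separated) with
// the parent's v1 ID, which is empty for the bottom layer.
func exampleV1IDChain() (string, error) {
bottomDigest := digest.FromBytes([]byte("bottom layer"))
bottomID, err := v1IDFromBlobDigestAndComponents(bottomDigest, "")
if err != nil {
return "", err
}
topDigest := digest.FromBytes([]byte("top layer"))
return v1IDFromBlobDigestAndComponents(topDigest, bottomID)
}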
func v1ConfigFromConfigJSON(configJSON []byte, v1ID, parentV1ID string, throwaway bool) ([]byte, error) {
// Preserve everything we don't specifically know about.
// (This must be a *json.RawMessage, even though *[]byte is fairly redundant, because only *RawMessage implements json.Marshaler.)
rawContents := map[string]*json.RawMessage{}
if err := json.Unmarshal(configJSON, &rawContents); err != nil { // We have already unmarshaled it before, using a more detailed schema?!
return nil, err
}
delete(rawContents, "rootfs")
delete(rawContents, "history")
updates := map[string]any{"id": v1ID}
if parentV1ID != "" {
updates["parent"] = parentV1ID
}
if throwaway {
updates["throwaway"] = throwaway
}
for field, value := range updates {
encoded, err := json.Marshal(value)
if err != nil {
return nil, err
}
rawContents[field] = (*json.RawMessage)(&encoded)
}
return json.Marshal(rawContents)
}
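// exampleV1ConfigRoundTrip is an editorial sketch, not part of the upstream
// file: it feeds a minimal config through v1ConfigFromConfigJSON to show that
// unrecognized keys ("custom" here) survive via json.RawMessage, while "rootfs"
// and "history" are stripped and the hypothetical "id"/"parent" values are
// patched in.
func exampleV1ConfigRoundTrip() ([]byte, error) {
in := []byte(`{"os":"linux","rootfs":{"type":"layers"},"history":[],"custom":"preserved"}`)
return v1ConfigFromConfigJSON(in, "exampleV1ID", "exampleParentID", false)
}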
// SupportsEncryption returns whether encryption is supported for the manifest type
func (m *manifestSchema2) SupportsEncryption(context.Context) bool {
return false
}
// CanChangeLayerCompression returns true if we can compress/decompress layers with mimeType in the current image
// (and the code can handle that).
// NOTE: Even if this returns true, the relevant format might not accept all compression algorithms; the set of accepted
// algorithms depends not on the current format, but possibly on the target of a conversion (if UpdatedImage converts
// to a different manifest format).
func (m *manifestSchema2) CanChangeLayerCompression(mimeType string) bool {
return m.m.CanChangeLayerCompression(mimeType)
}

121
vendor/go.podman.io/image/v5/internal/image/manifest.go generated vendored Normal file

@@ -0,0 +1,121 @@
package image
import (
"context"
"fmt"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/manifest"
"go.podman.io/image/v5/types"
)
// genericManifest is an interface for parsing and modifying image manifests and related data.
// The public methods are related to types.Image so that embedding a genericManifest implements most of it,
// but there are also public methods that are only visible to packages that can import c/image/internal/image.
type genericManifest interface {
serialize() ([]byte, error)
manifestMIMEType() string
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
ConfigInfo() types.BlobInfo
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
ConfigBlob(context.Context) ([]byte, error)
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
OCIConfig(context.Context) (*imgspecv1.Image, error)
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []types.BlobInfo
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
EmbeddedDockerReferenceConflicts(ref reference.Named) bool
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
Inspect(context.Context) (*types.ImageInspectInfo, error)
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error)
// SupportsEncryption returns whether encryption is supported for the manifest type
//
// Deprecated: Initially used to determine if a manifest can be copied from a source manifest type since
// the process of updating a manifest between different manifest types was to update then convert.
// This resulted in some fields in the update being lost. This has been fixed by: https://github.com/containers/image/pull/836
SupportsEncryption(ctx context.Context) bool
// The following methods are not a part of types.Image:
// ===
// CanChangeLayerCompression returns true if we can compress/decompress layers with mimeType in the current image
// (and the code can handle that).
// NOTE: Even if this returns true, the relevant format might not accept all compression algorithms; the set of accepted
// algorithms depends not on the current format, but possibly on the target of a conversion (if UpdatedImage converts
// to a different manifest format).
CanChangeLayerCompression(mimeType string) bool
}
// manifestInstanceFromBlob returns a genericManifest implementation for (manblob, mt) in src.
// If manblob is a manifest list, it implicitly chooses an appropriate image from the list.
func manifestInstanceFromBlob(ctx context.Context, sys *types.SystemContext, src types.ImageSource, manblob []byte, mt string) (genericManifest, error) {
switch manifest.NormalizedMIMEType(mt) {
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
return manifestSchema1FromManifest(manblob)
case imgspecv1.MediaTypeImageManifest:
return manifestOCI1FromManifest(src, manblob)
case manifest.DockerV2Schema2MediaType:
return manifestSchema2FromManifest(src, manblob)
case manifest.DockerV2ListMediaType:
return manifestSchema2FromManifestList(ctx, sys, src, manblob)
case imgspecv1.MediaTypeImageIndex:
return manifestOCI1FromImageIndex(ctx, sys, src, manblob)
default: // Note that this may not be reachable, manifest.NormalizedMIMEType has a default for unknown values.
return nil, fmt.Errorf("Unimplemented manifest MIME type %q", mt)
}
}
// manifestLayerInfosToBlobInfos extracts a []types.BlobInfo from a []manifest.LayerInfo.
func manifestLayerInfosToBlobInfos(layers []manifest.LayerInfo) []types.BlobInfo {
blobs := make([]types.BlobInfo, len(layers))
for i, layer := range layers {
blobs[i] = layer.BlobInfo
}
return blobs
}
// manifestConvertFn (a method of a genericManifest object) returns a genericManifest implementation
// converted to a specific manifest MIME type.
// It may use options.InformationOnly and also adjust *options to be appropriate for editing the returned
// value.
// This does not change the state of the original genericManifest object.
type manifestConvertFn func(ctx context.Context, options *types.ManifestUpdateOptions) (genericManifest, error)
// convertManifestIfRequiredWithUpdate will run conversion functions of a manifest if
// required and re-apply the options to the converted type.
// It returns (nil, nil) if no conversion was requested.
func convertManifestIfRequiredWithUpdate(ctx context.Context, options types.ManifestUpdateOptions, converters map[string]manifestConvertFn) (types.Image, error) {
if options.ManifestMIMEType == "" {
return nil, nil
}
converter, ok := converters[options.ManifestMIMEType]
if !ok {
return nil, fmt.Errorf("Unsupported conversion type: %v", options.ManifestMIMEType)
}
optionsCopy := options
convertedManifest, err := converter(ctx, &optionsCopy)
if err != nil {
return nil, err
}
convertedImage := memoryImageFromManifest(convertedManifest)
optionsCopy.ManifestMIMEType = ""
return convertedImage.UpdatedImage(ctx, optionsCopy)
}

64
vendor/go.podman.io/image/v5/internal/image/memory.go generated vendored Normal file

@@ -0,0 +1,64 @@
package image
import (
"context"
"errors"
"go.podman.io/image/v5/types"
)
// memoryImage is a mostly complete implementation of types.Image assembled from data
// created in memory, used primarily as a return value of types.Image.UpdatedImage
// as a way to carry various structured information in a type-safe and easy-to-use way.
// Note that this _only_ carries the immediate metadata; it is _not_ a stand-alone
// collection of all related information, e.g. there is no way to get layer blobs
// from a memoryImage.
type memoryImage struct {
genericManifest
serializedManifest []byte // A private cache for Manifest()
}
func memoryImageFromManifest(m genericManifest) types.Image {
return &memoryImage{
genericManifest: m,
serializedManifest: nil,
}
}
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *memoryImage) Reference() types.ImageReference {
// It would really be inappropriate to return the ImageReference of the image this was based on.
return nil
}
// Size returns the size of the image as stored, if known, or -1 if not.
func (i *memoryImage) Size() (int64, error) {
return -1, nil
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Manifest(ctx context.Context) ([]byte, string, error) {
if i.serializedManifest == nil {
m, err := i.genericManifest.serialize()
if err != nil {
return nil, "", err
}
i.serializedManifest = m
}
return i.serializedManifest, i.genericManifest.manifestMIMEType(), nil
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *memoryImage) Signatures(ctx context.Context) ([][]byte, error) {
// Modifying an image invalidates signatures; a caller asking the updated image for signatures
// is probably confused.
return nil, errors.New("Internal error: Image.Signatures() is not supported for images modified in memory")
}
// LayerInfosForCopy returns an updated set of layer blob information which may not match the manifest.
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (i *memoryImage) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return nil, nil
}

336
vendor/go.podman.io/image/v5/internal/image/oci.go generated vendored Normal file

@@ -0,0 +1,336 @@
package image
import (
"context"
"encoding/json"
"errors"
"fmt"
"slices"
ociencspec "github.com/containers/ocicrypt/spec"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/iolimits"
internalManifest "go.podman.io/image/v5/internal/manifest"
"go.podman.io/image/v5/manifest"
"go.podman.io/image/v5/pkg/blobinfocache/none"
"go.podman.io/image/v5/types"
)
type manifestOCI1 struct {
src types.ImageSource // May be nil if configBlob is not nil
configBlob []byte // If set, corresponds to contents of m.Config.
m *manifest.OCI1
}
func manifestOCI1FromManifest(src types.ImageSource, manifestBlob []byte) (genericManifest, error) {
m, err := manifest.OCI1FromManifest(manifestBlob)
if err != nil {
return nil, err
}
return &manifestOCI1{
src: src,
m: m,
}, nil
}
// manifestOCI1FromComponents builds a new manifestOCI1 from the supplied data:
func manifestOCI1FromComponents(config imgspecv1.Descriptor, src types.ImageSource, configBlob []byte, layers []imgspecv1.Descriptor) genericManifest {
return &manifestOCI1{
src: src,
configBlob: configBlob,
m: manifest.OCI1FromComponents(config, layers),
}
}
func (m *manifestOCI1) serialize() ([]byte, error) {
return m.m.Serialize()
}
func (m *manifestOCI1) manifestMIMEType() string {
return imgspecv1.MediaTypeImageManifest
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
func (m *manifestOCI1) ConfigInfo() types.BlobInfo {
return m.m.ConfigInfo()
}
// ConfigBlob returns the blob described by ConfigInfo, iff ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
func (m *manifestOCI1) ConfigBlob(ctx context.Context) ([]byte, error) {
if m.configBlob == nil {
if m.src == nil {
return nil, errors.New("Internal error: neither src nor configBlob set in manifestOCI1")
}
stream, _, err := m.src.GetBlob(ctx, manifest.BlobInfoFromOCI1Descriptor(m.m.Config), none.NoCache)
if err != nil {
return nil, err
}
defer stream.Close()
blob, err := iolimits.ReadAtMost(stream, iolimits.MaxConfigBodySize)
if err != nil {
return nil, err
}
computedDigest := digest.FromBytes(blob)
if computedDigest != m.m.Config.Digest {
return nil, fmt.Errorf("Download config.json digest %s does not match expected %s", computedDigest, m.m.Config.Digest)
}
m.configBlob = blob
}
return m.configBlob, nil
}
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
func (m *manifestOCI1) OCIConfig(ctx context.Context) (*imgspecv1.Image, error) {
if m.m.Config.MediaType != imgspecv1.MediaTypeImageConfig {
return nil, internalManifest.NewNonImageArtifactError(&m.m.Manifest)
}
cb, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
configOCI := &imgspecv1.Image{}
if err := json.Unmarshal(cb, configOCI); err != nil {
return nil, err
}
return configOCI, nil
}
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *manifestOCI1) LayerInfos() []types.BlobInfo {
return manifestLayerInfosToBlobInfos(m.m.LayerInfos())
}
// EmbeddedDockerReferenceConflicts returns whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
func (m *manifestOCI1) EmbeddedDockerReferenceConflicts(ref reference.Named) bool {
return false
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *manifestOCI1) Inspect(ctx context.Context) (*types.ImageInspectInfo, error) {
getter := func(info types.BlobInfo) ([]byte, error) {
if info.Digest != m.ConfigInfo().Digest {
// Shouldn't ever happen
return nil, errors.New("asked for a different config blob")
}
config, err := m.ConfigBlob(ctx)
if err != nil {
return nil, err
}
return config, nil
}
return m.m.Inspect(getter)
}
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestOCI1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return false
}
// UpdatedImage returns a types.Image modified according to options.
// This does not change the state of the original Image object.
// The returned error will be a manifest.ManifestLayerCompressionIncompatibilityError
// if the combination of CompressionOperation and CompressionAlgorithm specified
// in one or more options.LayerInfos items indicates that a layer is compressed using
// an algorithm that is not allowed in OCI.
func (m *manifestOCI1) UpdatedImage(ctx context.Context, options types.ManifestUpdateOptions) (types.Image, error) {
copy := manifestOCI1{ // NOTE: This is not a deep copy, it still shares slices etc.
src: m.src,
configBlob: m.configBlob,
m: manifest.OCI1Clone(m.m),
}
converted, err := convertManifestIfRequiredWithUpdate(ctx, options, map[string]manifestConvertFn{
manifest.DockerV2Schema2MediaType: copy.convertToManifestSchema2Generic,
manifest.DockerV2Schema1MediaType: copy.convertToManifestSchema1,
manifest.DockerV2Schema1SignedMediaType: copy.convertToManifestSchema1,
})
if err != nil {
return nil, err
}
if converted != nil {
return converted, nil
}
// No conversion required, update manifest
if options.LayerInfos != nil {
if err := copy.m.UpdateLayerInfos(options.LayerInfos); err != nil {
return nil, err
}
}
// Ignore options.EmbeddedDockerReference: it may be set when converting from schema1, but we really don't care.
return memoryImageFromManifest(&copy), nil
}
func schema2DescriptorFromOCI1Descriptor(d imgspecv1.Descriptor) manifest.Schema2Descriptor {
return manifest.Schema2Descriptor{
MediaType: d.MediaType,
Size: d.Size,
Digest: d.Digest,
URLs: d.URLs,
}
}
// convertToManifestSchema2Generic returns a genericManifest implementation converted to manifest.DockerV2Schema2MediaType.
// It may use options.InformationOnly and also adjust *options to be appropriate for editing the returned
// value.
// This does not change the state of the original manifestOCI1 object.
//
// We need this function only because a method returning the concrete *manifestSchema2 type
// is not assignable to the manifestConvertFn function type, which returns the genericManifest interface.
func (m *manifestOCI1) convertToManifestSchema2Generic(ctx context.Context, options *types.ManifestUpdateOptions) (genericManifest, error) {
return m.convertToManifestSchema2(ctx, options)
}
// layerEditsOfOCIOnlyFeatures checks whether options requires some layer edits to be done before converting to a Docker format.
// If not, it returns (nil, nil).
// If edits are required (to decrypt, or to convert away from zstd), it returns a set of edits to provide to OCI1.UpdateLayerInfos,
// and adjusts *options so that those edits are not attempted again after the conversion.
func (m *manifestOCI1) layerEditsOfOCIOnlyFeatures(options *types.ManifestUpdateOptions) ([]types.BlobInfo, error) {
if options == nil || options.LayerInfos == nil {
return nil, nil
}
originalInfos := m.LayerInfos()
if len(originalInfos) != len(options.LayerInfos) {
return nil, fmt.Errorf("preparing to decrypt before conversion: %d layers vs. %d layer edits", len(originalInfos), len(options.LayerInfos))
}
ociOnlyEdits := slices.Clone(originalInfos) // Start with a full copy so that we don't forget to copy anything: use the current data in full unless we intentionally deviate.
laterEdits := slices.Clone(options.LayerInfos)
needsOCIOnlyEdits := false
for i, edit := range options.LayerInfos {
// Unless determined otherwise, don't do any compression-related MIME type conversions. m.LayerInfos() should not set these edit instructions, but be explicit.
ociOnlyEdits[i].CompressionOperation = types.PreserveOriginal
ociOnlyEdits[i].CompressionAlgorithm = nil
if edit.CryptoOperation == types.Decrypt {
needsOCIOnlyEdits = true // Encrypted types must be removed before conversion because they can't be represented in Docker schemas
ociOnlyEdits[i].CryptoOperation = types.Decrypt
laterEdits[i].CryptoOperation = types.PreserveOriginalCrypto // Don't try to decrypt in a schema[12] manifest later, that would fail.
}
if originalInfos[i].MediaType == imgspecv1.MediaTypeImageLayerZstd ||
originalInfos[i].MediaType == imgspecv1.MediaTypeImageLayerNonDistributableZstd { //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
needsOCIOnlyEdits = true // Zstd MIME types must be removed before conversion because they can't be represented in Docker schemas.
ociOnlyEdits[i].CompressionOperation = edit.CompressionOperation
ociOnlyEdits[i].CompressionAlgorithm = edit.CompressionAlgorithm
laterEdits[i].CompressionOperation = types.PreserveOriginal
laterEdits[i].CompressionAlgorithm = nil
}
}
if !needsOCIOnlyEdits {
return nil, nil
}
options.LayerInfos = laterEdits
return ociOnlyEdits, nil
}
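// exampleSplitOCIOnlyEdits is an editorial sketch, not part of the upstream
// file: it shows the two-phase split performed above. A Decrypt request is
// turned into an OCI-only edit, while the corresponding entry left in
// options.LayerInfos becomes PreserveOriginalCrypto so that decryption is not
// attempted again after the format conversion.
func (m *manifestOCI1) exampleSplitOCIOnlyEdits() error {
infos := m.LayerInfos()
if len(infos) == 0 {
return errors.New("no layers to edit")
}
infos[0].CryptoOperation = types.Decrypt
options := &types.ManifestUpdateOptions{LayerInfos: infos}
ociOnlyEdits, err := m.layerEditsOfOCIOnlyFeatures(options)
if err != nil {
return err
}
// Now ociOnlyEdits[0].CryptoOperation == types.Decrypt, while
// options.LayerInfos[0].CryptoOperation == types.PreserveOriginalCrypto.
_ = ociOnlyEdits
return nil
}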
// convertToManifestSchema2 returns a genericManifest implementation converted to manifest.DockerV2Schema2MediaType.
// It may use options.InformationOnly and also adjust *options to be appropriate for editing the returned
// value.
// This does not change the state of the original manifestOCI1 object.
func (m *manifestOCI1) convertToManifestSchema2(_ context.Context, options *types.ManifestUpdateOptions) (*manifestSchema2, error) {
if m.m.Config.MediaType != imgspecv1.MediaTypeImageConfig {
return nil, internalManifest.NewNonImageArtifactError(&m.m.Manifest)
}
// Mostly we first make a format conversion, and _afterwards_ do layer edits. But first we need to do the layer edits
// which remove OCI-specific features, because trying to convert those layers would fail.
// So, do the layer updates for decryption, and for conversions from Zstd.
ociManifest := m.m
ociOnlyEdits, err := m.layerEditsOfOCIOnlyFeatures(options)
if err != nil {
return nil, err
}
if ociOnlyEdits != nil {
ociManifest = manifest.OCI1Clone(ociManifest)
if err := ociManifest.UpdateLayerInfos(ociOnlyEdits); err != nil {
return nil, err
}
}
// Create a copy of the descriptor.
config := schema2DescriptorFromOCI1Descriptor(ociManifest.Config)
// Above, we have already checked that this manifest refers to an image, not an OCI artifact,
// so the only difference between OCI and DockerSchema2 is the mediatypes. The
// media type of the manifest is handled by manifestSchema2FromComponents.
config.MediaType = manifest.DockerV2Schema2ConfigMediaType
layers := make([]manifest.Schema2Descriptor, len(ociManifest.Layers))
for idx := range layers {
layers[idx] = schema2DescriptorFromOCI1Descriptor(ociManifest.Layers[idx])
switch layers[idx].MediaType {
case imgspecv1.MediaTypeImageLayerNonDistributable: //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
layers[idx].MediaType = manifest.DockerV2Schema2ForeignLayerMediaType
case imgspecv1.MediaTypeImageLayerNonDistributableGzip: //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
layers[idx].MediaType = manifest.DockerV2Schema2ForeignLayerMediaTypeGzip
case imgspecv1.MediaTypeImageLayerNonDistributableZstd: //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
return nil, fmt.Errorf("Error during manifest conversion: %q: zstd compression is not supported for docker images", layers[idx].MediaType)
case imgspecv1.MediaTypeImageLayer:
layers[idx].MediaType = manifest.DockerV2SchemaLayerMediaTypeUncompressed
case imgspecv1.MediaTypeImageLayerGzip:
layers[idx].MediaType = manifest.DockerV2Schema2LayerMediaType
case imgspecv1.MediaTypeImageLayerZstd:
return nil, fmt.Errorf("Error during manifest conversion: %q: zstd compression is not supported for docker images", layers[idx].MediaType)
case ociencspec.MediaTypeLayerEnc, ociencspec.MediaTypeLayerGzipEnc, ociencspec.MediaTypeLayerZstdEnc,
ociencspec.MediaTypeLayerNonDistributableEnc, ociencspec.MediaTypeLayerNonDistributableGzipEnc, ociencspec.MediaTypeLayerNonDistributableZstdEnc:
return nil, fmt.Errorf("during manifest conversion: encrypted layers (%q) are not supported in docker images", layers[idx].MediaType)
default:
return nil, fmt.Errorf("Unknown media type during manifest conversion: %q", layers[idx].MediaType)
}
}
// Rather than copying the ConfigBlob now, we just pass m.src to the
// translated manifest: since the only difference is the media type of
// the descriptors, there is no change to any blob stored in m.src.
return manifestSchema2FromComponents(config, m.src, nil, layers), nil
}
// convertToManifestSchema1 returns a genericManifest implementation converted to manifest.DockerV2Schema1{Signed,}MediaType.
// It may use options.InformationOnly and also adjust *options to be appropriate for editing the returned
// value.
// This does not change the state of the original manifestOCI1 object.
func (m *manifestOCI1) convertToManifestSchema1(ctx context.Context, options *types.ManifestUpdateOptions) (genericManifest, error) {
if m.m.Config.MediaType != imgspecv1.MediaTypeImageConfig {
return nil, internalManifest.NewNonImageArtifactError(&m.m.Manifest)
}
// We can't directly convert images to V1, but we can transitively convert via a V2 image
m2, err := m.convertToManifestSchema2(ctx, options)
if err != nil {
return nil, err
}
return m2.convertToManifestSchema1(ctx, options)
}
// SupportsEncryption returns whether encryption is supported for the manifest type
func (m *manifestOCI1) SupportsEncryption(context.Context) bool {
return true
}
// CanChangeLayerCompression returns true if we can compress/decompress layers with mimeType in the current image
// (and the code can handle that).
// NOTE: Even if this returns true, the relevant format might not accept all compression algorithms; the set of accepted
// algorithms depends not on the current format, but possibly on the target of a conversion (if UpdatedImage converts
// to a different manifest format).
func (m *manifestOCI1) CanChangeLayerCompression(mimeType string) bool {
return m.m.CanChangeLayerCompression(mimeType)
}


@@ -0,0 +1,34 @@
package image
import (
"context"
"fmt"
"go.podman.io/image/v5/internal/manifest"
"go.podman.io/image/v5/types"
)
func manifestOCI1FromImageIndex(ctx context.Context, sys *types.SystemContext, src types.ImageSource, manblob []byte) (genericManifest, error) {
index, err := manifest.OCI1IndexFromManifest(manblob)
if err != nil {
return nil, fmt.Errorf("parsing OCI1 index: %w", err)
}
targetManifestDigest, err := index.ChooseInstance(sys)
if err != nil {
return nil, fmt.Errorf("choosing image instance: %w", err)
}
manblob, mt, err := src.GetManifest(ctx, &targetManifestDigest)
if err != nil {
return nil, fmt.Errorf("fetching target platform image selected from image index: %w", err)
}
matches, err := manifest.MatchesDigest(manblob, targetManifestDigest)
if err != nil {
return nil, fmt.Errorf("computing manifest digest: %w", err)
}
if !matches {
return nil, fmt.Errorf("Image manifest does not match selected manifest digest %s", targetManifestDigest)
}
return manifestInstanceFromBlob(ctx, sys, src, manblob, mt)
}

134
vendor/go.podman.io/image/v5/internal/image/sourced.go generated vendored Normal file

@@ -0,0 +1,134 @@
// Package image consolidates knowledge about various container image formats
// (as opposed to image storage mechanisms, which are handled by types.ImageSource)
// and exposes all of them using a unified interface.
package image
import (
"context"
"go.podman.io/image/v5/types"
)
// FromReference returns a types.ImageCloser implementation for the default instance reading from reference.
// If reference points to a manifest list, .Manifest() still returns the manifest list,
// but other methods transparently return data from an appropriate image instance.
//
// The caller must call .Close() on the returned ImageCloser.
//
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage instead of calling this function.
func FromReference(ctx context.Context, sys *types.SystemContext, ref types.ImageReference) (types.ImageCloser, error) {
src, err := ref.NewImageSource(ctx, sys)
if err != nil {
return nil, err
}
img, err := FromSource(ctx, sys, src)
if err != nil {
src.Close()
return nil, err
}
return img, nil
}
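// exampleOpenAndInspect is an editorial sketch, not part of the upstream file:
// given an ImageReference obtained from any transport (e.g. the docker
// transport's ParseReference, not imported here), it mirrors the intended call
// pattern for FromReference, including the mandatory Close().
func exampleOpenAndInspect(ctx context.Context, sys *types.SystemContext, ref types.ImageReference) error {
img, err := FromReference(ctx, sys, ref)
if err != nil {
return err
}
defer img.Close()
_, mimeType, err := img.Manifest(ctx)
if err != nil {
return err
}
_ = mimeType // e.g. "application/vnd.oci.image.manifest.v1+json" for OCI images
return nil
}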
// imageCloser implements types.ImageCloser, perhaps allowing simple users
// to use a single object without having to keep a reference to a types.ImageSource
// only to call types.ImageSource.Close().
type imageCloser struct {
types.Image
src types.ImageSource
}
// FromSource returns a types.ImageCloser implementation for the default instance of source.
// If source is a manifest list, .Manifest() still returns the manifest list,
// but other methods transparently return data from an appropriate image instance.
//
// The caller must call .Close() on the returned ImageCloser.
//
// FromSource “takes ownership” of the input ImageSource and will call src.Close()
// when the image is closed. (This does not prevent callers from using both the
// Image and ImageSource objects simultaneously, but it means that they only need to
// keep the Image.)
//
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage instead of calling this function.
//
// Most callers can use either FromUnparsedImage or FromReference instead.
//
// This is publicly visible as c/image/image.FromSource.
func FromSource(ctx context.Context, sys *types.SystemContext, src types.ImageSource) (types.ImageCloser, error) {
img, err := FromUnparsedImage(ctx, sys, UnparsedInstance(src, nil))
if err != nil {
return nil, err
}
return &imageCloser{
Image: img,
src: src,
}, nil
}
func (ic *imageCloser) Close() error {
return ic.src.Close()
}
// SourcedImage is a general set of utilities for working with container images,
// whatever their underlying transport is (i.e. ImageSource-independent).
// Note the existence of docker.Image and image.memoryImage: various instances
// of a types.Image may not be a SourcedImage directly.
//
// Most external users of `types.Image` do not care, and those who care about `docker.Image` know they do.
//
// Internal users may depend on methods available in SourcedImage but not (yet?) in types.Image.
type SourcedImage struct {
*UnparsedImage
ManifestBlob []byte // The manifest of the relevant instance
ManifestMIMEType string // MIME type of ManifestBlob
// genericManifest contains data corresponding to manifestBlob.
// NOTE: The manifest may have been modified in the process; DO NOT reserialize and store genericManifest
// if you want to preserve the original manifest; use manifestBlob directly.
genericManifest
}
// FromUnparsedImage returns a types.Image implementation for unparsed.
// If unparsed represents a manifest list, .Manifest() still returns the manifest list,
// but other methods transparently return data from an appropriate single image.
//
// The Image must not be used after the underlying ImageSource is Close()d.
//
// This is publicly visible as c/image/image.FromUnparsedImage.
func FromUnparsedImage(ctx context.Context, sys *types.SystemContext, unparsed *UnparsedImage) (*SourcedImage, error) {
// Note that the input parameter above is specifically *image.UnparsedImage, not types.UnparsedImage:
// we want to be able to use unparsed.src. We could make that an explicit interface, but, well,
// this is the only UnparsedImage implementation around, anyway.
// NOTE: It is essential for signature verification that all parsing done in this object happens on the same manifest which is returned by unparsed.Manifest().
manifestBlob, manifestMIMEType, err := unparsed.Manifest(ctx)
if err != nil {
return nil, err
}
parsedManifest, err := manifestInstanceFromBlob(ctx, sys, unparsed.src, manifestBlob, manifestMIMEType)
if err != nil {
return nil, err
}
return &SourcedImage{
UnparsedImage: unparsed,
ManifestBlob: manifestBlob,
ManifestMIMEType: manifestMIMEType,
genericManifest: parsedManifest,
}, nil
}
// Size returns the size of the image as stored, if it's known, or -1 if it isn't.
func (i *SourcedImage) Size() (int64, error) {
return -1, nil
}
// Manifest overrides the UnparsedImage.Manifest to always use the fields which we have already fetched.
func (i *SourcedImage) Manifest(ctx context.Context) ([]byte, string, error) {
return i.ManifestBlob, i.ManifestMIMEType, nil
}
func (i *SourcedImage) LayerInfosForCopy(ctx context.Context) ([]types.BlobInfo, error) {
return i.UnparsedImage.src.LayerInfosForCopy(ctx, i.UnparsedImage.instanceDigest)
}

125
vendor/go.podman.io/image/v5/internal/image/unparsed.go generated vendored Normal file

@@ -0,0 +1,125 @@
package image
import (
"context"
"fmt"
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/imagesource"
"go.podman.io/image/v5/internal/private"
"go.podman.io/image/v5/internal/signature"
"go.podman.io/image/v5/manifest"
"go.podman.io/image/v5/types"
)
// UnparsedImage implements types.UnparsedImage .
// An UnparsedImage is a pair of (ImageSource, instance digest); it can represent either a manifest list or a single image instance.
//
// This is publicly visible as c/image/image.UnparsedImage.
type UnparsedImage struct {
src private.ImageSource
instanceDigest *digest.Digest
cachedManifest []byte // A private cache for Manifest(); nil if not yet known.
// A private cache for Manifest(), may be the empty string if guessing failed.
// Valid iff cachedManifest is not nil.
cachedManifestMIMEType string
cachedSignatures []signature.Signature // A private cache for Signatures(); nil if not yet known.
}
// UnparsedInstance returns a types.UnparsedImage implementation for (source, instanceDigest).
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list).
//
// This implementation of [types.UnparsedImage] ensures that [types.UnparsedImage.Manifest] validates the image
// against instanceDigest if set, or, if not, a digest implied by src.Reference, if any.
//
// The UnparsedImage must not be used after the underlying ImageSource is Close()d.
//
// This is publicly visible as c/image/image.UnparsedInstance.
func UnparsedInstance(src types.ImageSource, instanceDigest *digest.Digest) *UnparsedImage {
return &UnparsedImage{
src: imagesource.FromPublic(src),
instanceDigest: instanceDigest,
}
}
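// exampleUnparsedManifest is an editorial sketch, not part of the upstream
// file: it shows the intended pairing of UnparsedInstance with Manifest(),
// which caches the manifest bytes and enforces the digest check described
// below. Passing nil selects the primary manifest.
func exampleUnparsedManifest(ctx context.Context, src types.ImageSource) ([]byte, string, error) {
unparsed := UnparsedInstance(src, nil)
return unparsed.Manifest(ctx)
}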
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
func (i *UnparsedImage) Reference() types.ImageReference {
// Note that this does not depend on instanceDigest; e.g. all instances within a manifest list need to be signed with the manifest list identity.
return i.src.Reference()
}
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
//
// Users of UnparsedImage are promised that this validates the image
// against either i.instanceDigest if set, or against a digest included in i.src.Reference.
func (i *UnparsedImage) Manifest(ctx context.Context) ([]byte, string, error) {
if i.cachedManifest == nil {
m, mt, err := i.src.GetManifest(ctx, i.instanceDigest)
if err != nil {
return nil, "", err
}
// ImageSource.GetManifest does not do digest verification, but we do;
// this immediately protects also any user of types.Image.
if digest, haveDigest := i.expectedManifestDigest(); haveDigest {
matches, err := manifest.MatchesDigest(m, digest)
if err != nil {
return nil, "", fmt.Errorf("computing manifest digest: %w", err)
}
if !matches {
return nil, "", fmt.Errorf("Manifest does not match provided manifest digest %s", digest)
}
}
i.cachedManifest = m
i.cachedManifestMIMEType = mt
}
return i.cachedManifest, i.cachedManifestMIMEType, nil
}
// expectedManifestDigest returns the expected value of the manifest digest, and an indicator whether it is known.
// The bool return value seems redundant with digest != ""; it is used explicitly
// to refuse (unexpected) situations when the digest exists but is "".
func (i *UnparsedImage) expectedManifestDigest() (digest.Digest, bool) {
if i.instanceDigest != nil {
return *i.instanceDigest, true
}
ref := i.Reference().DockerReference()
if ref != nil {
if canonical, ok := ref.(reference.Canonical); ok {
return canonical.Digest(), true
}
}
return "", false
}
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) Signatures(ctx context.Context) ([][]byte, error) {
// It would be consistent to make this an internal/unparsedimage/impl.Compat wrapper,
// but this is very likely to be the only implementation ever.
sigs, err := i.UntrustedSignatures(ctx)
if err != nil {
return nil, err
}
simpleSigs := [][]byte{}
for _, sig := range sigs {
if sig, ok := sig.(signature.SimpleSigning); ok {
simpleSigs = append(simpleSigs, sig.UntrustedSignature())
}
}
return simpleSigs, nil
}
// UntrustedSignatures is like ImageSource.GetSignaturesWithFormat, but the result is cached; it is OK to call this however often you need.
func (i *UnparsedImage) UntrustedSignatures(ctx context.Context) ([]signature.Signature, error) {
if i.cachedSignatures == nil {
sigs, err := i.src.GetSignaturesWithFormat(ctx, i.instanceDigest)
if err != nil {
return nil, err
}
i.cachedSignatures = sigs
}
return i.cachedSignatures, nil
}


@@ -0,0 +1,114 @@
package impl
import (
"context"
"io"
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/internal/blobinfocache"
"go.podman.io/image/v5/internal/private"
"go.podman.io/image/v5/internal/signature"
"go.podman.io/image/v5/types"
)
// Compat implements the obsolete parts of types.ImageDestination
// for implementations of private.ImageDestination.
// See AddCompat below.
type Compat struct {
dest private.ImageDestinationInternalOnly
}
// AddCompat initializes Compat to implement the obsolete parts of types.ImageDestination
// for implementations of private.ImageDestination.
//
// Use it like this:
//
// type yourDestination struct {
// impl.Compat
// …
// }
//
// dest := &yourDestination{…}
// dest.Compat = impl.AddCompat(dest)
func AddCompat(dest private.ImageDestinationInternalOnly) Compat {
return Compat{dest}
}
// PutBlob writes contents of stream and returns data representing the result.
// inputInfo.Digest can be optionally provided if known; if provided, and stream is read to the end without error, the digest MUST match the stream contents.
// inputInfo.Size is the expected length of stream, if known.
// inputInfo.MediaType describes the blob format, if known.
// May update cache.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
func (c *Compat) PutBlob(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, cache types.BlobInfoCache, isConfig bool) (types.BlobInfo, error) {
res, err := c.dest.PutBlobWithOptions(ctx, stream, inputInfo, private.PutBlobOptions{
Cache: blobinfocache.FromBlobInfoCache(cache),
IsConfig: isConfig,
})
if err != nil {
return types.BlobInfo{}, err
}
return types.BlobInfo{
Digest: res.Digest,
Size: res.Size,
}, nil
}
// TryReusingBlob checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
// If canSubstitute, TryReusingBlob can use an equivalent of the desired blob; in that case the returned info may not match the input.
// If the blob has been successfully reused, returns (true, info, nil); info must contain at least a digest and size, and may
// include CompressionOperation and CompressionAlgorithm fields to indicate that a change to the compression type should be
// reflected in the manifest that will be written.
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
// May use and/or update cache.
func (c *Compat) TryReusingBlob(ctx context.Context, info types.BlobInfo, cache types.BlobInfoCache, canSubstitute bool) (bool, types.BlobInfo, error) {
reused, blob, err := c.dest.TryReusingBlobWithOptions(ctx, info, private.TryReusingBlobOptions{
Cache: blobinfocache.FromBlobInfoCache(cache),
CanSubstitute: canSubstitute,
})
if !reused || err != nil {
return reused, types.BlobInfo{}, err
}
res := types.BlobInfo{
Digest: blob.Digest,
Size: blob.Size,
CompressionOperation: blob.CompressionOperation,
CompressionAlgorithm: blob.CompressionAlgorithm,
}
// This is probably not necessary; we preserve MediaType to decrease the risk of breaking external callers.
// Some transports were not setting the MediaType field anyway, and others were setting the old value on substitution;
// provide the value in cases where it is likely to be correct.
if blob.Digest == info.Digest {
res.MediaType = info.MediaType
}
return true, res, nil
}
// PutSignatures writes a set of signatures to the destination.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to write or overwrite the signatures for
// (when the primary manifest is a manifest list); this should always be nil if the primary manifest is not a manifest list.
// MUST be called after PutManifest (signatures may reference manifest contents).
func (c *Compat) PutSignatures(ctx context.Context, signatures [][]byte, instanceDigest *digest.Digest) error {
withFormat := []signature.Signature{}
for _, sig := range signatures {
withFormat = append(withFormat, signature.SimpleSigningFromBlob(sig))
}
return c.dest.PutSignaturesWithFormat(ctx, withFormat, instanceDigest)
}
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// unparsedToplevel contains data about the top-level manifest of the source (which may be a single-arch image or a manifest list
// if PutManifest was only called for the single-arch image with instanceDigest == nil), primarily to allow lookups by the
// original manifest list digest, if desired.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
func (c *Compat) Commit(ctx context.Context, unparsedToplevel types.UnparsedImage) error {
return c.dest.CommitWithOptions(ctx, private.CommitOptions{
UnparsedToplevel: unparsedToplevel,
})
}
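
As a minimal sketch of the two-step initialization described in the AddCompat comment above, assuming a hypothetical fileDestination transport whose remaining private.ImageDestination methods are elided (so this fragment is illustrative, not compilable on its own):

package mytransport // hypothetical

import (
	"go.podman.io/image/v5/internal/imagedestination/impl"
)

// fileDestination is a hypothetical transport destination; every other
// method required by private.ImageDestination is elided in this sketch.
type fileDestination struct {
	impl.Compat
	path string
}

func newFileDestination(path string) *fileDestination {
	dest := &fileDestination{path: path}
	// Compat is initialized after dest exists, because its methods
	// delegate back to dest's modern *WithOptions implementations.
	dest.Compat = impl.AddCompat(dest)
	return dest
}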


@@ -0,0 +1,15 @@
package impl
import (
"go.podman.io/image/v5/internal/manifest"
"go.podman.io/image/v5/internal/private"
)
// OriginalCandidateMatchesTryReusingBlobOptions returns true if the original blob passed to TryReusingBlobWithOptions
// is acceptable based on opts.
func OriginalCandidateMatchesTryReusingBlobOptions(opts private.TryReusingBlobOptions) bool {
return manifest.CandidateCompressionMatchesReuseConditions(manifest.ReuseConditions{
PossibleManifestFormats: opts.PossibleManifestFormats,
RequiredCompression: opts.RequiredCompression,
}, opts.OriginalCompression)
}
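
As one hedged illustration, a transport's TryReusingBlobWithOptions might use this guard before offering the original, unsubstituted blob for reuse; tryReuseOriginalBlob below is hypothetical, while the impl/private/types identifiers are the ones defined in this tree:

package mytransport // hypothetical

import (
	"go.podman.io/image/v5/internal/imagedestination/impl"
	"go.podman.io/image/v5/internal/private"
	"go.podman.io/image/v5/types"
)

// tryReuseOriginalBlob offers the original blob for reuse only when its
// compression satisfies the caller's RequiredCompression/format options.
func tryReuseOriginalBlob(opts private.TryReusingBlobOptions, info types.BlobInfo) (bool, private.ReusedBlob) {
	if !impl.OriginalCandidateMatchesTryReusingBlobOptions(opts) {
		return false, private.ReusedBlob{}
	}
	return true, private.ReusedBlob{Digest: info.Digest, Size: info.Size}
}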


@@ -0,0 +1,72 @@
package impl
import "go.podman.io/image/v5/types"
// Properties collects properties of an ImageDestination that are constant throughout its lifetime
// (but might differ across instances).
type Properties struct {
// SupportedManifestMIMETypes tells which manifest MIME types the destination supports.
// An empty slice or nil means any MIME type can be tried for upload.
SupportedManifestMIMETypes []string
// DesiredLayerCompression indicates the kind of compression to apply on layers
DesiredLayerCompression types.LayerCompression
// AcceptsForeignLayerURLs is false if foreign layers in the manifest should actually be
// uploaded to the image destination, true otherwise.
AcceptsForeignLayerURLs bool
// MustMatchRuntimeOS is set to true if the destination can store only images targeted for the current runtime architecture and OS.
MustMatchRuntimeOS bool
// IgnoresEmbeddedDockerReference is set to true if the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
IgnoresEmbeddedDockerReference bool
// HasThreadSafePutBlob indicates that PutBlob can be executed concurrently.
HasThreadSafePutBlob bool
}
// PropertyMethodsInitialize implements parts of private.ImageDestination corresponding to Properties.
type PropertyMethodsInitialize struct {
// We need two separate structs, PropertyMethodsInitialize and Properties, because Go prohibits fields and methods with the same name.
vals Properties
}
// PropertyMethods creates a PropertyMethodsInitialize for vals.
func PropertyMethods(vals Properties) PropertyMethodsInitialize {
return PropertyMethodsInitialize{
vals: vals,
}
}
// SupportedManifestMIMETypes tells which manifest MIME types the destination supports.
// If an empty slice or nil is returned, any MIME type can be tried for upload.
func (o PropertyMethodsInitialize) SupportedManifestMIMETypes() []string {
return o.vals.SupportedManifestMIMETypes
}
// DesiredLayerCompression indicates the kind of compression to apply on layers
func (o PropertyMethodsInitialize) DesiredLayerCompression() types.LayerCompression {
return o.vals.DesiredLayerCompression
}
// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually be
// uploaded to the image destination, true otherwise.
func (o PropertyMethodsInitialize) AcceptsForeignLayerURLs() bool {
return o.vals.AcceptsForeignLayerURLs
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime architecture and OS. False otherwise.
func (o PropertyMethodsInitialize) MustMatchRuntimeOS() bool {
return o.vals.MustMatchRuntimeOS
}
// IgnoresEmbeddedDockerReference returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
func (o PropertyMethodsInitialize) IgnoresEmbeddedDockerReference() bool {
return o.vals.IgnoresEmbeddedDockerReference
}
// HasThreadSafePutBlob indicates whether PutBlob can be executed concurrently.
func (o PropertyMethodsInitialize) HasThreadSafePutBlob() bool {
return o.vals.HasThreadSafePutBlob
}
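
A minimal sketch of the constructor-style initialization this enables, assuming a hypothetical archiveDestination (its other required methods are elided):

package mytransport // hypothetical

import (
	"go.podman.io/image/v5/internal/imagedestination/impl"
	"go.podman.io/image/v5/types"
)

type archiveDestination struct { // hypothetical
	impl.PropertyMethodsInitialize
	// … other embedded helpers and transport-specific fields …
}

func newArchiveDestination() *archiveDestination {
	return &archiveDestination{
		PropertyMethodsInitialize: impl.PropertyMethods(impl.Properties{
			SupportedManifestMIMETypes: nil,            // nil: any MIME type may be tried
			DesiredLayerCompression:    types.Compress, // ask callers to compress layers
			HasThreadSafePutBlob:       true,           // PutBlob may run concurrently
		}),
	}
}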


@@ -0,0 +1,16 @@
package stubs
import (
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
// IgnoresOriginalOCIConfig implements NoteOriginalOCIConfig() that does nothing.
type IgnoresOriginalOCIConfig struct{}
// NoteOriginalOCIConfig provides the config of the image, as it exists on the source, BUT converted to OCI format,
// or an error obtaining that value (e.g. if the image is an artifact and not a container image).
// The destination can use it in its TryReusingBlob/PutBlob implementations
// (otherwise it only obtains the final config after all layers are written).
func (stub IgnoresOriginalOCIConfig) NoteOriginalOCIConfig(ociConfig *imgspecv1.Image, configErr error) error {
return nil
}


@@ -0,0 +1,52 @@
package stubs
import (
"context"
"fmt"
"go.podman.io/image/v5/internal/private"
"go.podman.io/image/v5/types"
)
// NoPutBlobPartialInitialize implements parts of private.ImageDestination
// for transports that don't support PutBlobPartial().
// See NoPutBlobPartial() below.
type NoPutBlobPartialInitialize struct {
transportName string
}
// NoPutBlobPartial creates a NoPutBlobPartialInitialize for ref.
func NoPutBlobPartial(ref types.ImageReference) NoPutBlobPartialInitialize {
return NoPutBlobPartialRaw(ref.Transport().Name())
}
// NoPutBlobPartialRaw is the same thing as NoPutBlobPartial, but it can be used
// in situations where no ImageReference is available.
func NoPutBlobPartialRaw(transportName string) NoPutBlobPartialInitialize {
return NoPutBlobPartialInitialize{
transportName: transportName,
}
}
// SupportsPutBlobPartial returns true if PutBlobPartial is supported.
func (stub NoPutBlobPartialInitialize) SupportsPutBlobPartial() bool {
return false
}
// PutBlobPartial attempts to create a blob using the data that is already present
// at the destination. chunkAccessor is accessed in a non-sequential way to retrieve the missing chunks.
// It is available only if SupportsPutBlobPartial().
// Even if SupportsPutBlobPartial() returns true, the call can fail.
// If the call fails with ErrFallbackToOrdinaryLayerDownload, the caller can fall back to PutBlobWithOptions.
// The fallback _must not_ be done otherwise.
func (stub NoPutBlobPartialInitialize) PutBlobPartial(ctx context.Context, chunkAccessor private.BlobChunkAccessor, srcInfo types.BlobInfo, options private.PutBlobPartialOptions) (private.UploadedBlob, error) {
return private.UploadedBlob{}, fmt.Errorf("internal error: PutBlobPartial is not supported by the %q transport", stub.transportName)
}
// ImplementsPutBlobPartial implements SupportsPutBlobPartial() that returns true.
type ImplementsPutBlobPartial struct{}
// SupportsPutBlobPartial returns true if PutBlobPartial is supported.
func (stub ImplementsPutBlobPartial) SupportsPutBlobPartial() bool {
return true
}
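
The fallback contract described above is enforced on the caller side; here is a hedged sketch of that logic (putBlobMaybePartial is a hypothetical helper, while the private.* identifiers are the ones defined in this tree):

package mytransport // hypothetical

import (
	"context"
	"errors"
	"io"

	"go.podman.io/image/v5/internal/private"
	"go.podman.io/image/v5/types"
)

// putBlobMaybePartial tries the partial path first; only an
// ErrFallbackToOrdinaryLayerDownload error permits falling back to an
// ordinary PutBlobWithOptions upload.
func putBlobMaybePartial(ctx context.Context, dest private.ImageDestination,
	chunkAccessor private.BlobChunkAccessor, srcInfo types.BlobInfo, stream io.Reader,
	options private.PutBlobOptions, partialOptions private.PutBlobPartialOptions) (private.UploadedBlob, error) {
	if dest.SupportsPutBlobPartial() {
		uploaded, err := dest.PutBlobPartial(ctx, chunkAccessor, srcInfo, partialOptions)
		if err == nil {
			return uploaded, nil
		}
		var fallback private.ErrFallbackToOrdinaryLayerDownload
		if !errors.As(err, &fallback) {
			return private.UploadedBlob{}, err
		}
		// Fall through to the ordinary upload path.
	}
	return dest.PutBlobWithOptions(ctx, stream, srcInfo, options)
}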


@@ -0,0 +1,50 @@
package stubs
import (
"context"
"errors"
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/internal/signature"
)
// NoSignaturesInitialize implements parts of private.ImageDestination
// for transports that don't support storing signatures.
// See NoSignatures() below.
type NoSignaturesInitialize struct {
message string
}
// NoSignatures creates a NoSignaturesInitialize, failing with message.
func NoSignatures(message string) NoSignaturesInitialize {
return NoSignaturesInitialize{
message: message,
}
}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (stub NoSignaturesInitialize) SupportsSignatures(ctx context.Context) error {
return errors.New(stub.message)
}
// PutSignaturesWithFormat writes a set of signatures to the destination.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to write or overwrite the signatures for
// (when the primary manifest is a manifest list); this should always be nil if the primary manifest is not a manifest list.
// MUST be called after PutManifest (signatures may reference manifest contents).
func (stub NoSignaturesInitialize) PutSignaturesWithFormat(ctx context.Context, signatures []signature.Signature, instanceDigest *digest.Digest) error {
if len(signatures) != 0 {
return errors.New(stub.message)
}
return nil
}
// AlwaysSupportsSignatures implements SupportsSignatures() that returns nil.
// Note that it might be even more useful to return a value dynamically detected based on
type AlwaysSupportsSignatures struct{}
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
func (stub AlwaysSupportsSignatures) SupportsSignatures(ctx context.Context) error {
return nil
}


@@ -0,0 +1,27 @@
// Package stubs contains trivial stubs for parts of private.ImageDestination.
// It can be used from internal/wrapper, so it should not drag in any extra dependencies.
// Compare with imagedestination/impl, which might require non-trivial implementation work.
//
// There are two kinds of stubs:
//
// First, there are pure stubs, like ImplementsPutBlobPartial. Those can just be included in an imageDestination
// implementation:
//
// type yourDestination struct {
// stubs.ImplementsPutBlobPartial
// …
// }
//
// Second, there are stubs with a constructor, like NoPutBlobPartialInitialize. The Initialize marker
// means that a constructor must be called:
//
// type yourDestination struct {
// stubs.NoPutBlobPartialInitialize
// …
// }
//
// dest := &yourDestination{
// …
// NoPutBlobPartialInitialize: stubs.NoPutBlobPartial(ref),
// }
package stubs


@@ -0,0 +1,55 @@
package impl
import (
"context"
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/internal/private"
"go.podman.io/image/v5/internal/signature"
)
// Compat implements the obsolete parts of types.ImageSource
// for implementations of private.ImageSource.
// See AddCompat below.
type Compat struct {
src private.ImageSourceInternalOnly
}
// AddCompat initializes Compat to implement the obsolete parts of types.ImageSource
// for implementations of private.ImageSource.
//
// Use it like this:
//
// type yourSource struct {
// impl.Compat
// …
// }
//
// src := &yourSource{…}
// src.Compat = impl.AddCompat(src)
func AddCompat(src private.ImageSourceInternalOnly) Compat {
return Compat{src}
}
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (c *Compat) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
// Silently ignore signatures with other formats; the caller can't handle them.
// Admittedly callers that want to sync all of the image might want to fail instead; this
// way an upgrade of c/image neither breaks them nor adds new functionality.
// Alternatively, we could possibly define the old GetSignatures to use the multi-format
// signature.Blob representation now, in general, but that could silently break them as well.
sigs, err := c.src.GetSignaturesWithFormat(ctx, instanceDigest)
if err != nil {
return nil, err
}
simpleSigs := [][]byte{}
for _, sig := range sigs {
if sig, ok := sig.(signature.SimpleSigning); ok {
simpleSigs = append(simpleSigs, sig.UntrustedSignature())
}
}
return simpleSigs, nil
}


@@ -0,0 +1,23 @@
package impl
import (
"context"
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/types"
)
// DoesNotAffectLayerInfosForCopy implements LayerInfosForCopy() that returns nothing.
type DoesNotAffectLayerInfosForCopy struct{}
// LayerInfosForCopy returns either nil (meaning the values in the manifest are fine), or updated values for the layer
// blobsums that are listed in the image's manifest. If values are returned, they should be used when using GetBlob()
// to read the image's layers.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve BlobInfos for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (stub DoesNotAffectLayerInfosForCopy) LayerInfosForCopy(ctx context.Context, instanceDigest *digest.Digest) ([]types.BlobInfo, error) {
return nil, nil
}


@@ -0,0 +1,27 @@
package impl
// Properties collects properties of an ImageSource that are constant throughout its lifetime
// (but might differ across instances).
type Properties struct {
// HasThreadSafeGetBlob indicates whether GetBlob can be executed concurrently.
HasThreadSafeGetBlob bool
}
// PropertyMethodsInitialize implements parts of private.ImageSource corresponding to Properties.
type PropertyMethodsInitialize struct {
// We need two separate structs, PropertyMethodsInitialize and Properties, because Go prohibits fields and methods with the same name.
vals Properties
}
// PropertyMethods creates a PropertyMethodsInitialize for vals.
func PropertyMethods(vals Properties) PropertyMethodsInitialize {
return PropertyMethodsInitialize{
vals: vals,
}
}
// HasThreadSafeGetBlob indicates whether GetBlob can be executed concurrently.
func (o PropertyMethodsInitialize) HasThreadSafeGetBlob() bool {
return o.vals.HasThreadSafeGetBlob
}


@@ -0,0 +1,19 @@
package impl
import (
"context"
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/internal/signature"
)
// NoSignatures implements GetSignaturesWithFormat() that returns nothing.
type NoSignatures struct{}
// GetSignaturesWithFormat returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (stub NoSignatures) GetSignaturesWithFormat(ctx context.Context, instanceDigest *digest.Digest) ([]signature.Signature, error) {
return nil, nil
}


@@ -0,0 +1,54 @@
package stubs
import (
"context"
"fmt"
"io"
"go.podman.io/image/v5/internal/private"
"go.podman.io/image/v5/types"
)
// NoGetBlobAtInitialize implements parts of private.ImageSource
// for transports that don't support GetBlobAt().
// See NoGetBlobAt() below.
type NoGetBlobAtInitialize struct {
transportName string
}
// NoGetBlobAt creates a NoGetBlobAtInitialize for ref.
func NoGetBlobAt(ref types.ImageReference) NoGetBlobAtInitialize {
return NoGetBlobAtRaw(ref.Transport().Name())
}
// NoGetBlobAtRaw is the same thing as NoGetBlobAt, but it can be used
// in situations where no ImageReference is available.
func NoGetBlobAtRaw(transportName string) NoGetBlobAtInitialize {
return NoGetBlobAtInitialize{
transportName: transportName,
}
}
// SupportsGetBlobAt returns true if GetBlobAt (BlobChunkAccessor) is supported.
func (stub NoGetBlobAtInitialize) SupportsGetBlobAt() bool {
return false
}
// GetBlobAt returns a sequential channel of readers that contain data for the requested
// blob chunks, and a channel that might get a single error value.
// The specified chunks must be not overlapping and sorted by their offset.
// The readers must be fully consumed, in the order they are returned, before blocking
// to read the next chunk.
// If the Length for the last chunk is set to math.MaxUint64, then it
// fully fetches the remaining data from the offset to the end of the blob.
func (stub NoGetBlobAtInitialize) GetBlobAt(ctx context.Context, info types.BlobInfo, chunks []private.ImageSourceChunk) (chan io.ReadCloser, chan error, error) {
return nil, nil, fmt.Errorf("internal error: GetBlobAt is not supported by the %q transport", stub.transportName)
}
// ImplementsGetBlobAt implements SupportsGetBlobAt() that returns true.
type ImplementsGetBlobAt struct{}
// SupportsGetBlobAt returns true if GetBlobAt (BlobChunkAccessor) is supported.
func (stub ImplementsGetBlobAt) SupportsGetBlobAt() bool {
return true
}
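
A hedged sketch of consuming this channel protocol from the caller side (readChunks is a hypothetical helper; the channel discipline follows the GetBlobAt comment above):

package mytransport // hypothetical

import (
	"context"
	"io"

	"go.podman.io/image/v5/internal/private"
	"go.podman.io/image/v5/types"
)

// readChunks drains every reader fully and in order, then checks the
// error channel, as the GetBlobAt contract requires.
func readChunks(ctx context.Context, src private.ImageSource, info types.BlobInfo,
	chunks []private.ImageSourceChunk) ([][]byte, error) {
	readers, errs, err := src.GetBlobAt(ctx, info, chunks)
	if err != nil {
		return nil, err
	}
	var out [][]byte
	for rc := range readers { // consume fully, in order, before the next chunk
		data, err := io.ReadAll(rc)
		rc.Close()
		if err != nil {
			return nil, err
		}
		out = append(out, data)
	}
	if err := <-errs; err != nil { // at most one error value is sent
		return nil, err
	}
	return out, nil
}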


@@ -0,0 +1,28 @@
// Package stubs contains trivial stubs for parts of private.ImageSource.
// It can be used from internal/wrapper, so it should not drag in any extra dependencies.
// Compare with imagesource/impl, which might require non-trivial implementation work.
//
// There are two kinds of stubs:
//
// First, there are pure stubs, like ImplementsGetBlobAt. Those can just be included in an ImageSource
// implementation:
//
// type yourSource struct {
// stubs.ImplementsGetBlobAt
// …
// }
//
// Second, there are stubs with a constructor, like NoGetBlobAtInitialize. The Initialize marker
// means that a constructor must be called:
//
// type yourSource struct {
// stubs.NoGetBlobAtInitialize
// …
// }
//
// src := &yourSource{
// …
// NoGetBlobAtInitialize: stubs.NoGetBlobAt(ref),
// }
package stubs


@@ -0,0 +1,56 @@
package imagesource
import (
"context"
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/internal/imagesource/stubs"
"go.podman.io/image/v5/internal/private"
"go.podman.io/image/v5/internal/signature"
"go.podman.io/image/v5/types"
)
// wrapped provides the private.ImageSource operations
// for a source that only implements types.ImageSource
type wrapped struct {
stubs.NoGetBlobAtInitialize
types.ImageSource
}
// FromPublic(src) returns an object that provides the private.ImageSource API
//
// Eventually, we might want to expose this function, and methods of the returned object,
// as a public API (or rather, a variant that does not include the already-superseded
// methods of types.ImageSource, and has added more future-proofing), and more strongly
// deprecate direct use of types.ImageSource.
//
// NOTE: The returned API MUST NOT be a public interface (it can be either just a struct
// with public methods, or perhaps a private interface), so that we can add methods
// without breaking any external implementers of a public interface.
func FromPublic(src types.ImageSource) private.ImageSource {
if src2, ok := src.(private.ImageSource); ok {
return src2
}
return &wrapped{
NoGetBlobAtInitialize: stubs.NoGetBlobAt(src.Reference()),
ImageSource: src,
}
}
// GetSignaturesWithFormat returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
func (w *wrapped) GetSignaturesWithFormat(ctx context.Context, instanceDigest *digest.Digest) ([]signature.Signature, error) {
sigs, err := w.GetSignatures(ctx, instanceDigest)
if err != nil {
return nil, err
}
res := []signature.Signature{}
for _, sig := range sigs {
res = append(res, signature.SimpleSigningFromBlob(sig))
}
return res, nil
}
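
A small hedged example of upgrading a public source (signaturesOf is hypothetical; these packages are internal, so this only compiles from within this module):

package example // hypothetical, inside this module

import (
	"context"

	"go.podman.io/image/v5/internal/imagesource"
	"go.podman.io/image/v5/internal/signature"
	"go.podman.io/image/v5/types"
)

// signaturesOf upgrades any types.ImageSource to the private API and reads
// signatures in the multi-format representation.
func signaturesOf(ctx context.Context, src types.ImageSource) ([]signature.Signature, error) {
	ps := imagesource.FromPublic(src)
	return ps.GetSignaturesWithFormat(ctx, nil) // nil: the primary manifest instance
}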


@@ -0,0 +1,58 @@
package iolimits
import (
"fmt"
"io"
)
// All constants below are intended to be used as limits for `ReadAtMost`. The
// immediate use-case for limiting the size of in-memory copied data is to
// protect against OOM DOS attacks as described in CVE-2020-1702. Instead of
// copying data until running out of memory, we error out after hitting the
// specified limit.
const (
// megaByte denotes one megabyte and is intended to be used as a limit in
// `ReadAtMost`.
megaByte = 1 << 20
// MaxManifestBodySize is the maximum allowed size of a manifest. The limit
// of 4 MB aligns with the one of a Docker registry:
// https://github.com/docker/distribution/blob/a8371794149d1d95f1e846744b05c87f2f825e5a/registry/handlers/manifests.go#L30
MaxManifestBodySize = 4 * megaByte
// MaxAuthTokenBodySize is the maximum allowed size of an auth token.
// The limit of 1 MB is considered to be greatly sufficient.
MaxAuthTokenBodySize = megaByte
// MaxSignatureListBodySize is the maximum allowed size of a signature list.
// The limit of 4 MB is considered to be greatly sufficient.
MaxSignatureListBodySize = 4 * megaByte
// MaxSignatureBodySize is the maximum allowed size of a signature.
// The limit of 4 MB is considered to be greatly sufficient.
MaxSignatureBodySize = 4 * megaByte
// MaxErrorBodySize is the maximum allowed size of an error-response body.
// The limit of 1 MB is considered to be greatly sufficient.
MaxErrorBodySize = megaByte
// MaxConfigBodySize is the maximum allowed size of a config blob.
// The limit of 4 MB is considered to be greatly sufficient.
MaxConfigBodySize = 4 * megaByte
// MaxOpenShiftStatusBody is the maximum allowed size of an OpenShift status body.
// The limit of 4 MB is considered to be greatly sufficient.
MaxOpenShiftStatusBody = 4 * megaByte
// MaxTarFileManifestSize is the maximum allowed size of a (docker save)-like manifest (which may contain multiple images)
// The limit of 1 MB is considered to be greatly sufficient.
MaxTarFileManifestSize = megaByte
)
// ReadAtMost reads from reader and errors out if the specified limit (in bytes) is exceeded.
func ReadAtMost(reader io.Reader, limit int) ([]byte, error) {
limitedReader := io.LimitReader(reader, int64(limit+1))
res, err := io.ReadAll(limitedReader)
if err != nil {
return nil, err
}
if len(res) > limit {
return nil, fmt.Errorf("exceeded maximum allowed size of %d bytes", limit)
}
return res, nil
}
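
A minimal usage sketch of ReadAtMost, written as an in-package test (the test name and values are illustrative):

package iolimits

import (
	"bytes"
	"strings"
	"testing"
)

// Reads within the limit succeed; a body one byte over the limit is rejected.
func TestReadAtMostExample(t *testing.T) {
	data, err := ReadAtMost(strings.NewReader("ok"), MaxManifestBodySize)
	if err != nil || !bytes.Equal(data, []byte("ok")) {
		t.Fatalf("unexpected result: %q, %v", data, err)
	}
	if _, err := ReadAtMost(bytes.NewReader(make([]byte, 10+1)), 10); err == nil {
		t.Fatal("expected the over-limit read to fail")
	}
}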


@@ -0,0 +1,72 @@
package manifest
import (
"encoding/json"
"fmt"
)
// AllowedManifestFields is a bit mask of “essential” manifest fields that ValidateUnambiguousManifestFormat
// can expect to be present.
type AllowedManifestFields int
const (
AllowedFieldConfig AllowedManifestFields = 1 << iota
AllowedFieldFSLayers
AllowedFieldHistory
AllowedFieldLayers
AllowedFieldManifests
AllowedFieldFirstUnusedBit // Keep this at the end!
)
// ValidateUnambiguousManifestFormat rejects manifests (incl. multi-arch) that look like more than
// one kind we currently recognize, i.e. if they contain any of the known “essential” format fields
// other than the ones the caller specifically allows.
// expectedMIMEType is used only for diagnostics.
// NOTE: The caller should do the non-heuristic validations (e.g. check for any specified format
// identification/version, or other “magic numbers”) before calling this, to cleanly reject unambiguous
// data that just isn't what was expected, as opposed to actually ambiguous data.
func ValidateUnambiguousManifestFormat(manifest []byte, expectedMIMEType string,
allowed AllowedManifestFields) error {
if allowed >= AllowedFieldFirstUnusedBit {
return fmt.Errorf("internal error: invalid allowedManifestFields value %#v", allowed)
}
// Use a private type to decode, not just a map[string]any, because we want
// to also reject case-insensitive matches (which would be used by Go when really decoding
// the manifest).
// (It is expected that as manifest formats are added or extended over time, more fields will be added
// here.)
detectedFields := struct {
Config any `json:"config"`
FSLayers any `json:"fsLayers"`
History any `json:"history"`
Layers any `json:"layers"`
Manifests any `json:"manifests"`
}{}
if err := json.Unmarshal(manifest, &detectedFields); err != nil {
// The caller was supposed to already validate version numbers, so this should not happen;
// let's not bother with making this error “nice”.
return err
}
unexpected := []string{}
// Sadly this isn't easy to automate in Go, without reflection. So, copy&paste.
if detectedFields.Config != nil && (allowed&AllowedFieldConfig) == 0 {
unexpected = append(unexpected, "config")
}
if detectedFields.FSLayers != nil && (allowed&AllowedFieldFSLayers) == 0 {
unexpected = append(unexpected, "fsLayers")
}
if detectedFields.History != nil && (allowed&AllowedFieldHistory) == 0 {
unexpected = append(unexpected, "history")
}
if detectedFields.Layers != nil && (allowed&AllowedFieldLayers) == 0 {
unexpected = append(unexpected, "layers")
}
if detectedFields.Manifests != nil && (allowed&AllowedFieldManifests) == 0 {
unexpected = append(unexpected, "manifests")
}
if len(unexpected) != 0 {
return fmt.Errorf(`rejecting ambiguous manifest, unexpected fields %#v in supposedly %s`,
unexpected, expectedMIMEType)
}
return nil
}
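
A minimal illustration of the rejection logic, written as an in-package test (names and values are illustrative): a list containing only "manifests" passes, while one that also contains "layers" is rejected.

package manifest

import "testing"

func TestValidateUnambiguousExample(t *testing.T) {
	list := []byte(`{"schemaVersion": 2, "manifests": []}`)
	if err := ValidateUnambiguousManifestFormat(list, DockerV2ListMediaType, AllowedFieldManifests); err != nil {
		t.Fatal(err)
	}
	mixed := []byte(`{"schemaVersion": 2, "manifests": [], "layers": []}`)
	if err := ValidateUnambiguousManifestFormat(mixed, DockerV2ListMediaType, AllowedFieldManifests); err == nil {
		t.Fatal("expected the ambiguous manifest to be rejected")
	}
}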


@@ -0,0 +1,15 @@
package manifest
import (
"github.com/opencontainers/go-digest"
)
// Schema2Descriptor is a “descriptor” in docker/distribution schema 2.
//
// This is publicly visible as c/image/manifest.Schema2Descriptor.
type Schema2Descriptor struct {
MediaType string `json:"mediaType"`
Size int64 `json:"size"`
Digest digest.Digest `json:"digest"`
URLs []string `json:"urls,omitempty"`
}


@@ -0,0 +1,311 @@
package manifest
import (
"encoding/json"
"fmt"
"slices"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
platform "go.podman.io/image/v5/internal/pkg/platform"
compression "go.podman.io/image/v5/pkg/compression/types"
"go.podman.io/image/v5/types"
)
// Schema2PlatformSpec describes the platform which a particular manifest is
// specialized for.
// This is publicly visible as c/image/manifest.Schema2PlatformSpec.
type Schema2PlatformSpec struct {
Architecture string `json:"architecture"`
OS string `json:"os"`
OSVersion string `json:"os.version,omitempty"`
OSFeatures []string `json:"os.features,omitempty"`
Variant string `json:"variant,omitempty"`
Features []string `json:"features,omitempty"` // removed in OCI
}
// Schema2ManifestDescriptor references a platform-specific manifest.
// This is publicly visible as c/image/manifest.Schema2ManifestDescriptor.
type Schema2ManifestDescriptor struct {
Schema2Descriptor
Platform Schema2PlatformSpec `json:"platform"`
}
// Schema2ListPublic is a list of platform-specific manifests.
// This is publicly visible as c/image/manifest.Schema2List.
// Internal users should usually use Schema2List instead.
type Schema2ListPublic struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
Manifests []Schema2ManifestDescriptor `json:"manifests"`
}
// MIMEType returns the MIME type of this particular manifest list.
func (list *Schema2ListPublic) MIMEType() string {
return list.MediaType
}
// Instances returns a slice of digests of the manifests that this list knows of.
func (list *Schema2ListPublic) Instances() []digest.Digest {
results := make([]digest.Digest, len(list.Manifests))
for i, m := range list.Manifests {
results[i] = m.Digest
}
return results
}
// Instance returns the ListUpdate of a particular instance in the list.
func (list *Schema2ListPublic) Instance(instanceDigest digest.Digest) (ListUpdate, error) {
for _, manifest := range list.Manifests {
if manifest.Digest == instanceDigest {
ret := ListUpdate{
Digest: manifest.Digest,
Size: manifest.Size,
MediaType: manifest.MediaType,
}
ret.ReadOnly.CompressionAlgorithmNames = []string{compression.GzipAlgorithmName}
platform := ociPlatformFromSchema2PlatformSpec(manifest.Platform)
ret.ReadOnly.Platform = &platform
return ret, nil
}
}
return ListUpdate{}, fmt.Errorf("unable to find instance %s passed to Schema2List.Instances", instanceDigest)
}
// UpdateInstances updates the sizes, digests, and media types of the manifests
// which the list catalogs.
func (list *Schema2ListPublic) UpdateInstances(updates []ListUpdate) error {
editInstances := []ListEdit{}
for i, instance := range updates {
editInstances = append(editInstances, ListEdit{
UpdateOldDigest: list.Manifests[i].Digest,
UpdateDigest: instance.Digest,
UpdateSize: instance.Size,
UpdateMediaType: instance.MediaType,
ListOperation: ListOpUpdate})
}
return list.editInstances(editInstances)
}
func (list *Schema2ListPublic) editInstances(editInstances []ListEdit) error {
addedEntries := []Schema2ManifestDescriptor{}
for i, editInstance := range editInstances {
switch editInstance.ListOperation {
case ListOpUpdate:
if err := editInstance.UpdateOldDigest.Validate(); err != nil {
return fmt.Errorf("Schema2List.EditInstances: Attempting to update %s which is an invalid digest: %w", editInstance.UpdateOldDigest, err)
}
if err := editInstance.UpdateDigest.Validate(); err != nil {
return fmt.Errorf("Schema2List.EditInstances: Modified digest %s is an invalid digest: %w", editInstance.UpdateDigest, err)
}
targetIndex := slices.IndexFunc(list.Manifests, func(m Schema2ManifestDescriptor) bool {
return m.Digest == editInstance.UpdateOldDigest
})
if targetIndex == -1 {
return fmt.Errorf("Schema2List.EditInstances: digest %s not found", editInstance.UpdateOldDigest)
}
list.Manifests[targetIndex].Digest = editInstance.UpdateDigest
if editInstance.UpdateSize < 0 {
return fmt.Errorf("update %d of %d passed to Schema2List.UpdateInstances had an invalid size (%d)", i+1, len(editInstances), editInstance.UpdateSize)
}
list.Manifests[targetIndex].Size = editInstance.UpdateSize
if editInstance.UpdateMediaType == "" {
return fmt.Errorf("update %d of %d passed to Schema2List.UpdateInstances had no media type (was %q)", i+1, len(editInstances), list.Manifests[i].MediaType)
}
list.Manifests[targetIndex].MediaType = editInstance.UpdateMediaType
case ListOpAdd:
if editInstance.AddPlatform == nil {
// Should we create a struct with empty fields instead?
// Right now ListOpAdd is only called when an instance with the same platform value
// already exists in the manifest, so this should not be reached in practice.
return fmt.Errorf("adding a schema2 list instance with no platform specified is not supported")
}
addedEntries = append(addedEntries, Schema2ManifestDescriptor{
Schema2Descriptor{
Digest: editInstance.AddDigest,
Size: editInstance.AddSize,
MediaType: editInstance.AddMediaType,
},
schema2PlatformSpecFromOCIPlatform(*editInstance.AddPlatform),
})
default:
return fmt.Errorf("internal error: invalid operation: %d", editInstance.ListOperation)
}
}
if len(addedEntries) != 0 {
// slices.Clone() here to ensure a private backing array;
// an external caller could have manually created Schema2ListPublic with a slice with extra capacity.
list.Manifests = append(slices.Clone(list.Manifests), addedEntries...)
}
return nil
}
func (list *Schema2List) EditInstances(editInstances []ListEdit) error {
return list.editInstances(editInstances)
}
func (list *Schema2ListPublic) ChooseInstanceByCompression(ctx *types.SystemContext, preferGzip types.OptionalBool) (digest.Digest, error) {
// ChooseInstanceByCompression is the same as ChooseInstance for a schema2 manifest list.
return list.ChooseInstance(ctx)
}
// ChooseInstance parses blob as a schema2 manifest list, and returns the digest
// of the image which is appropriate for the current environment.
func (list *Schema2ListPublic) ChooseInstance(ctx *types.SystemContext) (digest.Digest, error) {
wantedPlatforms := platform.WantedPlatforms(ctx)
for _, wantedPlatform := range wantedPlatforms {
for _, d := range list.Manifests {
imagePlatform := ociPlatformFromSchema2PlatformSpec(d.Platform)
if platform.MatchesPlatform(imagePlatform, wantedPlatform) {
return d.Digest, nil
}
}
}
return "", fmt.Errorf("no image found in manifest list for architecture %q, variant %q, OS %q", wantedPlatforms[0].Architecture, wantedPlatforms[0].Variant, wantedPlatforms[0].OS)
}
// Serialize returns the list in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
func (list *Schema2ListPublic) Serialize() ([]byte, error) {
buf, err := json.Marshal(list)
if err != nil {
return nil, fmt.Errorf("marshaling Schema2List %#v: %w", list, err)
}
return buf, nil
}
// Schema2ListPublicFromComponents creates a Schema2 manifest list instance from the
// supplied data.
// This is publicly visible as c/image/manifest.Schema2ListFromComponents.
func Schema2ListPublicFromComponents(components []Schema2ManifestDescriptor) *Schema2ListPublic {
list := Schema2ListPublic{
SchemaVersion: 2,
MediaType: DockerV2ListMediaType,
Manifests: make([]Schema2ManifestDescriptor, len(components)),
}
for i, component := range components {
m := Schema2ManifestDescriptor{
Schema2Descriptor{
MediaType: component.MediaType,
Size: component.Size,
Digest: component.Digest,
URLs: slices.Clone(component.URLs),
},
Schema2PlatformSpec{
Architecture: component.Platform.Architecture,
OS: component.Platform.OS,
OSVersion: component.Platform.OSVersion,
OSFeatures: slices.Clone(component.Platform.OSFeatures),
Variant: component.Platform.Variant,
Features: slices.Clone(component.Platform.Features),
},
}
list.Manifests[i] = m
}
return &list
}
// Schema2ListPublicClone creates a deep copy of the passed-in list.
// This is publicly visible as c/image/manifest.Schema2ListClone.
func Schema2ListPublicClone(list *Schema2ListPublic) *Schema2ListPublic {
return Schema2ListPublicFromComponents(list.Manifests)
}
// ToOCI1Index returns the list encoded as an OCI1 index.
func (list *Schema2ListPublic) ToOCI1Index() (*OCI1IndexPublic, error) {
components := make([]imgspecv1.Descriptor, 0, len(list.Manifests))
for _, manifest := range list.Manifests {
platform := ociPlatformFromSchema2PlatformSpec(manifest.Platform)
components = append(components, imgspecv1.Descriptor{
MediaType: manifest.MediaType,
Size: manifest.Size,
Digest: manifest.Digest,
URLs: slices.Clone(manifest.URLs),
Platform: &platform,
})
}
oci := OCI1IndexPublicFromComponents(components, nil)
return oci, nil
}
// ToSchema2List returns the list encoded as a Schema2 list.
func (list *Schema2ListPublic) ToSchema2List() (*Schema2ListPublic, error) {
return Schema2ListPublicClone(list), nil
}
// Schema2ListPublicFromManifest creates a Schema2 manifest list instance from marshalled
// JSON, presumably generated by encoding a Schema2 manifest list.
// This is publicly visible as c/image/manifest.Schema2ListFromManifest.
func Schema2ListPublicFromManifest(manifest []byte) (*Schema2ListPublic, error) {
list := Schema2ListPublic{
Manifests: []Schema2ManifestDescriptor{},
}
if err := json.Unmarshal(manifest, &list); err != nil {
return nil, fmt.Errorf("unmarshaling Schema2List %q: %w", string(manifest), err)
}
if err := ValidateUnambiguousManifestFormat(manifest, DockerV2ListMediaType,
AllowedFieldManifests); err != nil {
return nil, err
}
return &list, nil
}
// Clone returns a deep copy of this list and its contents.
func (list *Schema2ListPublic) Clone() ListPublic {
return Schema2ListPublicClone(list)
}
// ConvertToMIMEType converts the passed-in manifest list to a manifest
// list of the specified type.
func (list *Schema2ListPublic) ConvertToMIMEType(manifestMIMEType string) (ListPublic, error) {
switch normalized := NormalizedMIMEType(manifestMIMEType); normalized {
case DockerV2ListMediaType:
return list.Clone(), nil
case imgspecv1.MediaTypeImageIndex:
return list.ToOCI1Index()
case DockerV2Schema1MediaType, DockerV2Schema1SignedMediaType, imgspecv1.MediaTypeImageManifest, DockerV2Schema2MediaType:
return nil, fmt.Errorf("Can not convert manifest list to MIME type %q, which is not a list type", manifestMIMEType)
default:
// Note that this may not be reachable, NormalizedMIMEType has a default for unknown values.
return nil, fmt.Errorf("Unimplemented manifest list MIME type %s", manifestMIMEType)
}
}
// Schema2List is a list of platform-specific manifests.
type Schema2List struct {
Schema2ListPublic
}
func schema2ListFromPublic(public *Schema2ListPublic) *Schema2List {
return &Schema2List{*public}
}
func (list *Schema2List) CloneInternal() List {
return schema2ListFromPublic(Schema2ListPublicClone(&list.Schema2ListPublic))
}
func (list *Schema2List) Clone() ListPublic {
return list.CloneInternal()
}
// Schema2ListFromManifest creates a Schema2 manifest list instance from marshalled
// JSON, presumably generated by encoding a Schema2 manifest list.
func Schema2ListFromManifest(manifest []byte) (*Schema2List, error) {
public, err := Schema2ListPublicFromManifest(manifest)
if err != nil {
return nil, err
}
return schema2ListFromPublic(public), nil
}
// ociPlatformFromSchema2PlatformSpec converts a schema2 platform p to the OCI structure.
func ociPlatformFromSchema2PlatformSpec(p Schema2PlatformSpec) imgspecv1.Platform {
return imgspecv1.Platform{
Architecture: p.Architecture,
OS: p.OS,
OSVersion: p.OSVersion,
OSFeatures: slices.Clone(p.OSFeatures),
Variant: p.Variant,
// Features is not supported in OCI, and discarded.
}
}
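
A minimal usage sketch, written as an in-package test (the test name is illustrative and the digest value is a placeholder): parse a one-entry schema2 list and look up that instance.

package manifest

import "testing"

func TestSchema2ListExample(t *testing.T) {
	const dig = "sha256:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
	blob := []byte(`{
		"schemaVersion": 2,
		"mediaType": "` + DockerV2ListMediaType + `",
		"manifests": [{
			"mediaType": "` + DockerV2Schema2MediaType + `",
			"size": 1234,
			"digest": "` + dig + `",
			"platform": {"architecture": "amd64", "os": "linux"}
		}]
	}`)
	list, err := Schema2ListFromManifest(blob)
	if err != nil {
		t.Fatal(err)
	}
	update, err := list.Instance(dig)
	if err != nil || update.Size != 1234 {
		t.Fatalf("unexpected instance: %+v, %v", update, err)
	}
}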


@@ -0,0 +1,56 @@
package manifest
import (
"fmt"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)
// FIXME: This is a duplicate of c/image/manifest.DockerV2Schema2ConfigMediaType.
// Deduplicate that, depending on outcome of https://github.com/containers/image/pull/1791 .
const dockerV2Schema2ConfigMediaType = "application/vnd.docker.container.image.v1+json"
// NonImageArtifactError (detected via errors.As) is used when asking for an image-specific operation
// on an object which is not a “container image” in the standard sense (e.g. an OCI artifact)
//
// This is publicly visible as c/image/manifest.NonImageArtifactError (but we don't provide a public constructor)
type NonImageArtifactError struct {
// Callers should not be blindly calling image-specific operations and only checking MIME types
// on failure; if they care about the artifact type, they should check before using it.
// If they blindly assume an image, they don't really need this value; just a type check
// is sufficient for basic "we can only pull images" UI.
//
// Also, there are fairly widespread “artifacts” which nevertheless use imgspecv1.MediaTypeImageConfig,
// e.g. https://github.com/sigstore/cosign/blob/main/specs/SIGNATURE_SPEC.md , which could cause the callers
// to complain about a non-image artifact with the correct MIME type; we should probably add some other kind of
// type discrimination, _and_ somehow make it available in the API, if we expect API callers to make decisions
// based on that kind of data.
//
// So, let's not expose this until a specific need is identified.
mimeType string
}
// NewNonImageArtifactError returns a NonImageArtifactError about an artifact manifest.
//
// This is typically called if manifest.Config.MediaType != imgspecv1.MediaTypeImageConfig .
func NewNonImageArtifactError(manifest *imgspecv1.Manifest) error {
// Callers decide based on manifest.Config.MediaType that this is not an image;
// in that case manifest.ArtifactType can be optionally defined, and if it is, it is typically
// more relevant because config may be ~absent with imgspecv1.MediaTypeEmptyJSON.
//
// If ArtifactType and Config.MediaType are both defined and non-trivial, presumably
// ArtifactType is the “top-level” one, although that's not defined by the spec.
mimeType := manifest.ArtifactType
if mimeType == "" {
mimeType = manifest.Config.MediaType
}
return NonImageArtifactError{mimeType: mimeType}
}
func (e NonImageArtifactError) Error() string {
// Special-case these invalid mixed images, which show up from time to time:
if e.mimeType == dockerV2Schema2ConfigMediaType {
return fmt.Sprintf("invalid mixed OCI image with Docker v2s2 config (%q)", e.mimeType)
}
return fmt.Sprintf("unsupported image-specific operation on artifact with type %q", e.mimeType)
}
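
A minimal illustration of the errors.As detection pattern mentioned above, written as an in-package test (the test name and artifact type are illustrative):

package manifest

import (
	"errors"
	"testing"

	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func TestNonImageArtifactErrorExample(t *testing.T) {
	err := NewNonImageArtifactError(&imgspecv1.Manifest{
		ArtifactType: "application/vnd.example.widget",
	})
	var artifactErr NonImageArtifactError
	if !errors.As(err, &artifactErr) {
		t.Fatal("expected a NonImageArtifactError")
	}
}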

vendor/go.podman.io/image/v5/internal/manifest/list.go generated vendored Normal file

@@ -0,0 +1,133 @@
package manifest
import (
"fmt"
digest "github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
compression "go.podman.io/image/v5/pkg/compression/types"
"go.podman.io/image/v5/types"
)
// ListPublic is a subset of List which is a part of the public API;
// so no methods can be added, removed or changed.
//
// Internal users should usually use List instead.
type ListPublic interface {
// MIMEType returns the MIME type of this particular manifest list.
MIMEType() string
// Instances returns a list of the manifests that this list knows of, other than its own.
Instances() []digest.Digest
// Update information about the list's instances. The length of the passed-in slice must
// match the length of the list of instances which the list already contains, and every field
// must be specified.
UpdateInstances([]ListUpdate) error
// Instance returns the size and MIME type of a particular instance in the list.
Instance(digest.Digest) (ListUpdate, error)
// ChooseInstance selects which manifest is most appropriate for the platform described by the
// SystemContext, or for the current platform if the SystemContext doesn't specify any details.
ChooseInstance(ctx *types.SystemContext) (digest.Digest, error)
// Serialize returns the list in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded
// from one, even if no modifications were made!
Serialize() ([]byte, error)
// ConvertToMIMEType returns the list rebuilt to the specified MIME type, or an error.
ConvertToMIMEType(mimeType string) (ListPublic, error)
// Clone returns a deep copy of this list and its contents.
Clone() ListPublic
}
// List is an interface for parsing, modifying lists of image manifests.
// Callers can either use this abstract interface without understanding the details of the formats,
// or instantiate a specific implementation (e.g. manifest.OCI1Index) and access the public members
// directly.
type List interface {
ListPublic
// CloneInternal returns a deep copy of this list and its contents.
CloneInternal() List
// ChooseInstanceByCompression selects which manifest is most appropriate for the platform and compression described by the
// SystemContext (or for the current platform if the SystemContext doesn't specify any details). It prefers gzip when preferGzip
// is OptionalBoolTrue, and chooses the best available compression when it is OptionalBoolFalse or left OptionalBoolUndefined.
ChooseInstanceByCompression(ctx *types.SystemContext, preferGzip types.OptionalBool) (digest.Digest, error)
// EditInstances edits information about the list's instances. It takes a slice of ListEdit, where each element
// either updates or adds an instance in the manifest; the operation is selected by the element's
// ListOperation field.
EditInstances([]ListEdit) error
}
// ListUpdate includes the fields which a List's UpdateInstances() method will modify.
// This is publicly visible as c/image/manifest.ListUpdate.
type ListUpdate struct {
Digest digest.Digest
Size int64
MediaType string
// ReadOnly fields: may be set by Instance(), ignored by UpdateInstance()
ReadOnly struct {
Platform *imgspecv1.Platform
Annotations map[string]string
CompressionAlgorithmNames []string
ArtifactType string
}
}
type ListOp int
const (
listOpInvalid ListOp = iota
ListOpAdd
ListOpUpdate
)
// ListEdit includes the fields which a List's EditInstances() method will modify.
type ListEdit struct {
ListOperation ListOp
// If ListOperation == ListOpUpdate (basically the previous UpdateInstances), all of the Update* fields must be set.
UpdateOldDigest digest.Digest
UpdateDigest digest.Digest
UpdateSize int64
UpdateMediaType string
UpdateAffectAnnotations bool
UpdateAnnotations map[string]string
UpdateCompressionAlgorithms []compression.Algorithm
// If ListOperation == ListOpAdd, all of the Add* fields must be set.
AddDigest digest.Digest
AddSize int64
AddMediaType string
AddArtifactType string
AddPlatform *imgspecv1.Platform
AddAnnotations map[string]string
AddCompressionAlgorithms []compression.Algorithm
}
// ListPublicFromBlob parses a list of manifests.
// This is publicly visible as c/image/manifest.ListFromBlob.
func ListPublicFromBlob(manifest []byte, manifestMIMEType string) (ListPublic, error) {
list, err := ListFromBlob(manifest, manifestMIMEType)
if err != nil {
return nil, err
}
return list, nil
}
// ListFromBlob parses a list of manifests.
func ListFromBlob(manifest []byte, manifestMIMEType string) (List, error) {
normalized := NormalizedMIMEType(manifestMIMEType)
switch normalized {
case DockerV2ListMediaType:
return Schema2ListFromManifest(manifest)
case imgspecv1.MediaTypeImageIndex:
return OCI1IndexFromManifest(manifest)
case DockerV2Schema1MediaType, DockerV2Schema1SignedMediaType, imgspecv1.MediaTypeImageManifest, DockerV2Schema2MediaType:
return nil, fmt.Errorf("Treating single images as manifest lists is not implemented")
}
return nil, fmt.Errorf("Unimplemented manifest list MIME type %q (normalized as %q)", manifestMIMEType, normalized)
}
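
A minimal usage sketch, written as an in-package test (names, sizes, and digests are illustrative placeholders): parse a list via the generic entry point and update its single instance in place.

package manifest

import "testing"

func TestListEditExample(t *testing.T) {
	const oldDig = "sha256:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
	const newDig = "sha256:bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
	blob := []byte(`{"schemaVersion": 2, "mediaType": "` + DockerV2ListMediaType + `",
		"manifests": [{"mediaType": "` + DockerV2Schema2MediaType + `",
		"size": 1234, "digest": "` + oldDig + `",
		"platform": {"architecture": "amd64", "os": "linux"}}]}`)
	list, err := ListFromBlob(blob, DockerV2ListMediaType)
	if err != nil {
		t.Fatal(err)
	}
	err = list.EditInstances([]ListEdit{{
		ListOperation:   ListOpUpdate,
		UpdateOldDigest: oldDig,
		UpdateDigest:    newDig,
		UpdateSize:      4321,
		UpdateMediaType: DockerV2Schema2MediaType,
	}})
	if err != nil {
		t.Fatal(err)
	}
}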


@@ -0,0 +1,226 @@
package manifest
import (
"encoding/json"
"slices"
"github.com/containers/libtrust"
digest "github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
compressiontypes "go.podman.io/image/v5/pkg/compression/types"
)
// FIXME: Should we just use docker/distribution and docker/docker implementations directly?
// FIXME(runcom, mitr): should we have a mediatype pkg??
const (
// DockerV2Schema1MediaType MIME type represents Docker manifest schema 1
DockerV2Schema1MediaType = "application/vnd.docker.distribution.manifest.v1+json"
// DockerV2Schema1SignedMediaType MIME type represents Docker manifest schema 1 with a JWS signature
DockerV2Schema1SignedMediaType = "application/vnd.docker.distribution.manifest.v1+prettyjws"
// DockerV2Schema2MediaType MIME type represents Docker manifest schema 2
DockerV2Schema2MediaType = "application/vnd.docker.distribution.manifest.v2+json"
// DockerV2Schema2ConfigMediaType is the MIME type used for schema 2 config blobs.
DockerV2Schema2ConfigMediaType = "application/vnd.docker.container.image.v1+json"
// DockerV2Schema2LayerMediaType is the MIME type used for schema 2 layers.
DockerV2Schema2LayerMediaType = "application/vnd.docker.image.rootfs.diff.tar.gzip"
// DockerV2SchemaLayerMediaTypeUncompressed is the mediaType used for uncompressed layers.
DockerV2SchemaLayerMediaTypeUncompressed = "application/vnd.docker.image.rootfs.diff.tar"
// DockerV2ListMediaType MIME type represents Docker manifest schema 2 list
DockerV2ListMediaType = "application/vnd.docker.distribution.manifest.list.v2+json"
// DockerV2Schema2ForeignLayerMediaType is the MIME type used for schema 2 foreign layers.
DockerV2Schema2ForeignLayerMediaType = "application/vnd.docker.image.rootfs.foreign.diff.tar"
// DockerV2Schema2ForeignLayerMediaTypeGzip is the MIME type used for gzipped schema 2 foreign layers.
DockerV2Schema2ForeignLayerMediaTypeGzip = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
)
// GuessMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
// FIXME? We should, in general, prefer out-of-band MIME type instead of blindly parsing the manifest,
// but we may not have such metadata available (e.g. when the manifest is a local file).
// This is publicly visible as c/image/manifest.GuessMIMEType.
func GuessMIMEType(manifest []byte) string {
// A subset of manifest fields; the rest is silently ignored by json.Unmarshal.
// Also docker/distribution/manifest.Versioned.
meta := struct {
MediaType string `json:"mediaType"`
SchemaVersion int `json:"schemaVersion"`
Signatures any `json:"signatures"`
}{}
if err := json.Unmarshal(manifest, &meta); err != nil {
return ""
}
switch meta.MediaType {
case DockerV2Schema2MediaType, DockerV2ListMediaType,
imgspecv1.MediaTypeImageManifest, imgspecv1.MediaTypeImageIndex: // A recognized type.
return meta.MediaType
}
// this is the only way the function can return DockerV2Schema1MediaType, and recognizing that is essential for stripping the JWS signatures = computing the correct manifest digest.
switch meta.SchemaVersion {
case 1:
if meta.Signatures != nil {
return DockerV2Schema1SignedMediaType
}
return DockerV2Schema1MediaType
case 2:
// Best effort to understand if this is an OCI image since mediaType
// wasn't in the manifest for OCI image-spec < 1.0.2.
// For docker v2s2 meta.MediaType should have been set. But given the data, this is our best guess.
ociMan := struct {
Config struct {
MediaType string `json:"mediaType"`
} `json:"config"`
}{}
if err := json.Unmarshal(manifest, &ociMan); err != nil {
return ""
}
switch ociMan.Config.MediaType {
case imgspecv1.MediaTypeImageConfig:
return imgspecv1.MediaTypeImageManifest
case DockerV2Schema2ConfigMediaType:
// This case should not happen since a Docker image
// must declare a top-level media type and
// `meta.MediaType` has already been checked.
return DockerV2Schema2MediaType
}
// Maybe an image index or an OCI artifact.
ociIndex := struct {
Manifests []imgspecv1.Descriptor `json:"manifests"`
}{}
if err := json.Unmarshal(manifest, &ociIndex); err != nil {
return ""
}
if len(ociIndex.Manifests) != 0 {
if ociMan.Config.MediaType == "" {
return imgspecv1.MediaTypeImageIndex
}
// FIXME: this is mixing media types of manifests and configs.
return ociMan.Config.MediaType
}
// It's most likely an OCI artifact with a custom config media
// type which is not (and cannot be) covered by the media-type
// checks above.
return imgspecv1.MediaTypeImageManifest
}
return ""
}
// Digest returns the digest of a docker manifest, with any necessary implied transformations like stripping v1s1 signatures.
// This is publicly visible as c/image/manifest.Digest.
func Digest(manifest []byte) (digest.Digest, error) {
if GuessMIMEType(manifest) == DockerV2Schema1SignedMediaType {
sig, err := libtrust.ParsePrettySignature(manifest, "signatures")
if err != nil {
return "", err
}
manifest, err = sig.Payload()
if err != nil {
// Coverage: This should never happen, libtrust's Payload() can fail only if joseBase64UrlDecode() fails, on a string
// that libtrust itself has josebase64UrlEncode()d
return "", err
}
}
return digest.FromBytes(manifest), nil
}
// MatchesDigest returns true iff the manifest matches expectedDigest.
// Error may be set if this returns false.
// Note that this is not doing ConstantTimeCompare; by the time we get here, the cryptographic signature must already have been verified,
// or we are not using a cryptographic channel and the attacker can modify the digest along with the manifest blob.
// This is publicly visible as c/image/manifest.MatchesDigest.
func MatchesDigest(manifest []byte, expectedDigest digest.Digest) (bool, error) {
// This should eventually support various digest types.
actualDigest, err := Digest(manifest)
if err != nil {
return false, err
}
return expectedDigest == actualDigest, nil
}
// NormalizedMIMEType returns the effective MIME type of a manifest MIME type returned by a server,
// centralizing various workarounds.
// This is publicly visible as c/image/manifest.NormalizedMIMEType.
func NormalizedMIMEType(input string) string {
switch input {
// "application/json" is a valid v2s1 value per https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md .
// This works for now, when nothing else seems to return "application/json"; if that were not true, the mapping/detection might
// need to happen within the ImageSource.
case "application/json":
return DockerV2Schema1SignedMediaType
case DockerV2Schema1MediaType, DockerV2Schema1SignedMediaType,
imgspecv1.MediaTypeImageManifest,
imgspecv1.MediaTypeImageIndex,
DockerV2Schema2MediaType,
DockerV2ListMediaType:
return input
default:
// If it's not a recognized manifest media type, or we have failed determining the type, we'll try one last time
// to deserialize using v2s1 as per https://github.com/docker/distribution/blob/master/manifests.go#L108
// and https://github.com/docker/distribution/blob/master/manifest/schema1/manifest.go#L50
//
// Crane registries can also return "text/plain", or pretty much anything else depending on a file extension “recognized” in the tag.
// This makes no real sense, but it happens because requests for manifests are redirected to
// a content distribution network which is configured that way. See https://bugzilla.redhat.com/show_bug.cgi?id=1389442
return DockerV2Schema1SignedMediaType
}
}
// CompressionAlgorithmIsUniversallySupported returns true if MIMETypeSupportsCompressionAlgorithm(mimeType, algo) returns true for all mimeType values.
func CompressionAlgorithmIsUniversallySupported(algo compressiontypes.Algorithm) bool {
// Compare the discussion about BaseVariantName in MIMETypeSupportsCompressionAlgorithm().
switch algo.Name() {
case compressiontypes.GzipAlgorithmName:
return true
default:
return false
}
}
// MIMETypeSupportsCompressionAlgorithm returns true if mimeType can represent algo.
func MIMETypeSupportsCompressionAlgorithm(mimeType string, algo compressiontypes.Algorithm) bool {
if CompressionAlgorithmIsUniversallySupported(algo) {
return true
}
// This does not use BaseVariantName: Plausibly a manifest format might support zstd but not have annotation fields.
// The logic might have to be more complex (and more ad-hoc) if more manifest formats, with more capabilities, emerge.
switch algo.Name() {
case compressiontypes.ZstdAlgorithmName, compressiontypes.ZstdChunkedAlgorithmName:
return mimeType == imgspecv1.MediaTypeImageManifest
default: // Includes Bzip2AlgorithmName and XzAlgorithmName, which are defined names but are not supported anywhere
return false
}
}
// ReuseConditions are an input to CandidateCompressionMatchesReuseConditions;
// it is a struct to allow longer and better-documented field names.
type ReuseConditions struct {
PossibleManifestFormats []string // If set, a set of possible manifest formats; at least one should support the reused layer
RequiredCompression *compressiontypes.Algorithm // If set, only reuse layers with a matching algorithm
}
// CandidateCompressionMatchesReuseConditions returns true if a layer with candidateCompression
// (which can be nil to represent uncompressed or unknown) matches reuseConditions.
func CandidateCompressionMatchesReuseConditions(c ReuseConditions, candidateCompression *compressiontypes.Algorithm) bool {
if c.RequiredCompression != nil {
if candidateCompression == nil ||
(c.RequiredCompression.Name() != candidateCompression.Name() && c.RequiredCompression.Name() != candidateCompression.BaseVariantName()) {
return false
}
}
// For candidateCompression == nil, we can't tell the difference between “uncompressed” and “unknown”;
// and “uncompressed” is acceptable in all known formats (well, it seems to work in practice for schema1),
// so don't impose any restrictions if candidateCompression == nil
if c.PossibleManifestFormats != nil && candidateCompression != nil {
if !slices.ContainsFunc(c.PossibleManifestFormats, func(mt string) bool {
return MIMETypeSupportsCompressionAlgorithm(mt, *candidateCompression)
}) {
return false
}
}
return true
}
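// A minimal usage sketch (hypothetical caller; the Algorithm value is assumed to come
// from go.podman.io/image/v5/pkg/compression.AlgorithmByName, not from this package):
//
//	algo, err := compression.AlgorithmByName(compressiontypes.ZstdAlgorithmName)
//	if err != nil { /* handle */ }
//	cond := ReuseConditions{
//		PossibleManifestFormats: []string{imgspecv1.MediaTypeImageManifest},
//	}
//	// true: the OCI image manifest format can represent zstd layers.
//	_ = CandidateCompressionMatchesReuseConditions(cond, &algo)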

View File

@@ -0,0 +1,466 @@
package manifest
import (
"bytes"
"encoding/json"
"fmt"
"maps"
"math"
"runtime"
"slices"
"github.com/opencontainers/go-digest"
imgspec "github.com/opencontainers/image-spec/specs-go"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
platform "go.podman.io/image/v5/internal/pkg/platform"
compression "go.podman.io/image/v5/pkg/compression/types"
"go.podman.io/image/v5/types"
)
const (
// OCI1InstanceAnnotationCompressionZSTD is an annotation name that can be placed on a manifest descriptor in an OCI index.
// The value of the annotation must be the string "true".
// If this annotation is present on a manifest, consuming that image instance requires support for Zstd compression.
// That also suggests that this instance benefits from Zstd compression, so it can be preferred
// by compatible consumers over instances that use gzip, depending on their local policy.
OCI1InstanceAnnotationCompressionZSTD = "io.github.containers.compression.zstd"
OCI1InstanceAnnotationCompressionZSTDValue = "true"
)
// OCI1IndexPublic is just an alias for the OCI index type, but one which we can
// provide methods for.
// This is publicly visible as c/image/manifest.OCI1Index
// Internal users should usually use OCI1Index instead.
type OCI1IndexPublic struct {
imgspecv1.Index
}
// MIMEType returns the MIME type of this particular manifest index.
func (index *OCI1IndexPublic) MIMEType() string {
return imgspecv1.MediaTypeImageIndex
}
// Instances returns a slice of digests of the manifests that this index knows of.
func (index *OCI1IndexPublic) Instances() []digest.Digest {
results := make([]digest.Digest, len(index.Manifests))
for i, m := range index.Manifests {
results[i] = m.Digest
}
return results
}
// Instance returns the ListUpdate of a particular instance in the index.
func (index *OCI1IndexPublic) Instance(instanceDigest digest.Digest) (ListUpdate, error) {
for _, manifest := range index.Manifests {
if manifest.Digest == instanceDigest {
ret := ListUpdate{
Digest: manifest.Digest,
Size: manifest.Size,
MediaType: manifest.MediaType,
}
ret.ReadOnly.Platform = manifest.Platform
ret.ReadOnly.Annotations = manifest.Annotations
ret.ReadOnly.CompressionAlgorithmNames = annotationsToCompressionAlgorithmNames(manifest.Annotations)
ret.ReadOnly.ArtifactType = manifest.ArtifactType
return ret, nil
}
}
return ListUpdate{}, fmt.Errorf("unable to find instance %s in OCI1Index", instanceDigest)
}
// UpdateInstances updates the sizes, digests, and media types of the manifests
// which the list catalogs.
func (index *OCI1IndexPublic) UpdateInstances(updates []ListUpdate) error {
editInstances := []ListEdit{}
for i, instance := range updates {
editInstances = append(editInstances, ListEdit{
UpdateOldDigest: index.Manifests[i].Digest,
UpdateDigest: instance.Digest,
UpdateSize: instance.Size,
UpdateMediaType: instance.MediaType,
ListOperation: ListOpUpdate})
}
return index.editInstances(editInstances)
}
func annotationsToCompressionAlgorithmNames(annotations map[string]string) []string {
result := make([]string, 0, 1)
if annotations[OCI1InstanceAnnotationCompressionZSTD] == OCI1InstanceAnnotationCompressionZSTDValue {
result = append(result, compression.ZstdAlgorithmName)
}
// No compression was detected, hence assume the instance has the default compression `Gzip`
if len(result) == 0 {
result = append(result, compression.GzipAlgorithmName)
}
return result
}
func addCompressionAnnotations(compressionAlgorithms []compression.Algorithm, annotationsMap *map[string]string) {
// TODO: This should also delete the algorithm if the map already contains an algorithm and the
// compressionAlgorithms list has a different one. To do that, we would need to modify the callers to
// always provide a reliable and full compressionAlgorithms list.
if *annotationsMap == nil && len(compressionAlgorithms) > 0 {
*annotationsMap = map[string]string{}
}
for _, algo := range compressionAlgorithms {
switch algo.BaseVariantName() {
case compression.ZstdAlgorithmName:
(*annotationsMap)[OCI1InstanceAnnotationCompressionZSTD] = OCI1InstanceAnnotationCompressionZSTDValue
default:
continue
}
}
}
func (index *OCI1IndexPublic) editInstances(editInstances []ListEdit) error {
addedEntries := []imgspecv1.Descriptor{}
updatedAnnotations := false
for i, editInstance := range editInstances {
switch editInstance.ListOperation {
case ListOpUpdate:
if err := editInstance.UpdateOldDigest.Validate(); err != nil {
return fmt.Errorf("OCI1Index.EditInstances: Attempting to update %s which is an invalid digest: %w", editInstance.UpdateOldDigest, err)
}
if err := editInstance.UpdateDigest.Validate(); err != nil {
return fmt.Errorf("OCI1Index.EditInstances: Modified digest %s is an invalid digest: %w", editInstance.UpdateDigest, err)
}
targetIndex := slices.IndexFunc(index.Manifests, func(m imgspecv1.Descriptor) bool {
return m.Digest == editInstance.UpdateOldDigest
})
if targetIndex == -1 {
return fmt.Errorf("OCI1Index.EditInstances: digest %s not found", editInstance.UpdateOldDigest)
}
index.Manifests[targetIndex].Digest = editInstance.UpdateDigest
if editInstance.UpdateSize < 0 {
return fmt.Errorf("update %d of %d passed to OCI1Index.UpdateInstances had an invalid size (%d)", i+1, len(editInstances), editInstance.UpdateSize)
}
index.Manifests[targetIndex].Size = editInstance.UpdateSize
if editInstance.UpdateMediaType == "" {
return fmt.Errorf("update %d of %d passed to OCI1Index.UpdateInstances had no media type (was %q)", i+1, len(editInstances), index.Manifests[i].MediaType)
}
index.Manifests[targetIndex].MediaType = editInstance.UpdateMediaType
if editInstance.UpdateAnnotations != nil {
updatedAnnotations = true
if editInstance.UpdateAffectAnnotations {
index.Manifests[targetIndex].Annotations = maps.Clone(editInstance.UpdateAnnotations)
} else {
if index.Manifests[targetIndex].Annotations == nil {
index.Manifests[targetIndex].Annotations = map[string]string{}
}
maps.Copy(index.Manifests[targetIndex].Annotations, editInstance.UpdateAnnotations)
}
}
addCompressionAnnotations(editInstance.UpdateCompressionAlgorithms, &index.Manifests[targetIndex].Annotations)
case ListOpAdd:
annotations := map[string]string{}
if editInstance.AddAnnotations != nil {
annotations = maps.Clone(editInstance.AddAnnotations)
}
addCompressionAnnotations(editInstance.AddCompressionAlgorithms, &annotations)
addedEntries = append(addedEntries, imgspecv1.Descriptor{
MediaType: editInstance.AddMediaType,
ArtifactType: editInstance.AddArtifactType,
Size: editInstance.AddSize,
Digest: editInstance.AddDigest,
Platform: editInstance.AddPlatform,
Annotations: annotations,
})
default:
return fmt.Errorf("internal error: invalid operation: %d", editInstance.ListOperation)
}
}
if len(addedEntries) != 0 {
// slices.Clone() here to ensure the slice uses a private backing array;
// an external caller could have manually created OCI1IndexPublic with a slice with extra capacity.
index.Manifests = append(slices.Clone(index.Manifests), addedEntries...)
}
if len(addedEntries) != 0 || updatedAnnotations {
slices.SortStableFunc(index.Manifests, func(a, b imgspecv1.Descriptor) int {
// FIXME? With Go 1.21 and cmp.Compare available, turn instanceIsZstd into an integer score that can be compared, and generalizes
// into more algorithms?
aZstd := instanceIsZstd(a)
bZstd := instanceIsZstd(b)
switch {
case aZstd == bZstd:
return 0
case !aZstd: // Implies bZstd
return -1
default: // aZstd && !bZstd
return 1
}
})
}
return nil
}
func (index *OCI1Index) EditInstances(editInstances []ListEdit) error {
return index.editInstances(editInstances)
}
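// A minimal usage sketch (hypothetical caller; the digest and size values are made up):
//
//	err := index.EditInstances([]ListEdit{{
//		ListOperation: ListOpAdd,
//		AddDigest:     "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
//		AddSize:       1234,
//		AddMediaType:  imgspecv1.MediaTypeImageManifest,
//		AddPlatform:   &imgspecv1.Platform{OS: "linux", Architecture: "amd64"},
//	}})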
// instanceIsZstd returns true if the manifest descriptor refers to a zstd instance, otherwise false.
func instanceIsZstd(manifest imgspecv1.Descriptor) bool {
if value, ok := manifest.Annotations[OCI1InstanceAnnotationCompressionZSTD]; ok && value == "true" {
return true
}
return false
}
type instanceCandidate struct {
platformIndex int // Index of the candidate in platform.WantedPlatforms: lower numbers are preferred; or math.MaxInt if the candidate doesn't have a platform
isZstd bool // tells whether this particular instance is a zstd instance
manifestPosition int // A zero-based index of the instance in the manifest list
digest digest.Digest // Instance digest
}
func (ic instanceCandidate) isPreferredOver(other *instanceCandidate, preferGzip types.OptionalBool) bool {
switch {
case ic.platformIndex != other.platformIndex:
return ic.platformIndex < other.platformIndex
case ic.isZstd != other.isZstd:
if preferGzip != types.OptionalBoolTrue {
return ic.isZstd
} else {
return !ic.isZstd
}
case ic.manifestPosition != other.manifestPosition:
return ic.manifestPosition < other.manifestPosition
}
panic("internal error: invalid comparison between two candidates") // This should not be reachable because in all calls we make, the two candidates differ at least in manifestPosition.
}
// chooseInstance is a private equivalent to ChooseInstanceByCompression,
// shared by ChooseInstance and ChooseInstanceByCompression.
func (index *OCI1IndexPublic) chooseInstance(ctx *types.SystemContext, preferGzip types.OptionalBool) (digest.Digest, error) {
wantedPlatforms := platform.WantedPlatforms(ctx)
var bestMatch *instanceCandidate // nil until a matching candidate is found
for manifestIndex, d := range index.Manifests {
candidate := instanceCandidate{platformIndex: math.MaxInt, manifestPosition: manifestIndex, isZstd: instanceIsZstd(d), digest: d.Digest}
if d.Platform != nil {
imagePlatform := ociPlatformClone(*d.Platform)
platformIndex := slices.IndexFunc(wantedPlatforms, func(wantedPlatform imgspecv1.Platform) bool {
return platform.MatchesPlatform(imagePlatform, wantedPlatform)
})
if platformIndex == -1 {
continue
}
candidate.platformIndex = platformIndex
}
if bestMatch == nil || candidate.isPreferredOver(bestMatch, preferGzip) {
bestMatch = &candidate
}
}
if bestMatch != nil {
return bestMatch.digest, nil
}
return "", fmt.Errorf("no image found in image index for architecture %q, variant %q, OS %q", wantedPlatforms[0].Architecture, wantedPlatforms[0].Variant, wantedPlatforms[0].OS)
}
func (index *OCI1Index) ChooseInstanceByCompression(ctx *types.SystemContext, preferGzip types.OptionalBool) (digest.Digest, error) {
return index.chooseInstance(ctx, preferGzip)
}
// ChooseInstance parses blob as an oci v1 manifest index, and returns the digest
// of the image which is appropriate for the current environment.
func (index *OCI1IndexPublic) ChooseInstance(ctx *types.SystemContext) (digest.Digest, error) {
return index.chooseInstance(ctx, types.OptionalBoolFalse)
}
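// A minimal usage sketch (hypothetical caller): choosing the instance to pull
// for an explicitly requested platform instead of the current one.
//
//	sys := &types.SystemContext{OSChoice: "linux", ArchitectureChoice: "arm64"}
//	instanceDigest, err := index.ChooseInstance(sys)
//	if err != nil { /* no matching instance */ }
//	_ = instanceDigest // pass to the image source to fetch that manifest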
// Serialize returns the index in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
func (index *OCI1IndexPublic) Serialize() ([]byte, error) {
buf, err := json.Marshal(index)
if err != nil {
return nil, fmt.Errorf("marshaling OCI1Index %#v: %w", index, err)
}
return buf, nil
}
// OCI1IndexPublicFromComponents creates an OCI1 image index instance from the
// supplied data.
// This is publicly visible as c/image/manifest.OCI1IndexFromComponents.
func OCI1IndexPublicFromComponents(components []imgspecv1.Descriptor, annotations map[string]string) *OCI1IndexPublic {
index := OCI1IndexPublic{
imgspecv1.Index{
Versioned: imgspec.Versioned{SchemaVersion: 2},
MediaType: imgspecv1.MediaTypeImageIndex,
Manifests: make([]imgspecv1.Descriptor, len(components)),
Annotations: maps.Clone(annotations),
},
}
for i, component := range components {
index.Manifests[i] = oci1DescriptorClone(component)
}
return &index
}
func oci1DescriptorClone(d imgspecv1.Descriptor) imgspecv1.Descriptor {
var platform *imgspecv1.Platform
if d.Platform != nil {
platformCopy := ociPlatformClone(*d.Platform)
platform = &platformCopy
}
return imgspecv1.Descriptor{
MediaType: d.MediaType,
Digest: d.Digest,
Size: d.Size,
URLs: slices.Clone(d.URLs),
Annotations: maps.Clone(d.Annotations),
Data: bytes.Clone(d.Data),
Platform: platform,
ArtifactType: d.ArtifactType,
}
}
// OCI1IndexPublicClone creates a deep copy of the passed-in index.
// This is publicly visible as c/image/manifest.OCI1IndexClone.
func OCI1IndexPublicClone(index *OCI1IndexPublic) *OCI1IndexPublic {
var subject *imgspecv1.Descriptor
if index.Subject != nil {
s := oci1DescriptorClone(*index.Subject)
subject = &s
}
manifests := make([]imgspecv1.Descriptor, len(index.Manifests))
for i, m := range index.Manifests {
manifests[i] = oci1DescriptorClone(m)
}
return &OCI1IndexPublic{
Index: imgspecv1.Index{
Versioned: index.Versioned,
MediaType: index.MediaType,
ArtifactType: index.ArtifactType,
Manifests: manifests,
Subject: subject,
Annotations: maps.Clone(index.Annotations),
},
}
}
// ToOCI1Index returns the index encoded as an OCI1 index.
func (index *OCI1IndexPublic) ToOCI1Index() (*OCI1IndexPublic, error) {
return OCI1IndexPublicClone(index), nil
}
// ToSchema2List returns the index encoded as a Schema2 list.
func (index *OCI1IndexPublic) ToSchema2List() (*Schema2ListPublic, error) {
components := make([]Schema2ManifestDescriptor, 0, len(index.Manifests))
for _, manifest := range index.Manifests {
platform := manifest.Platform
if platform == nil {
platform = &imgspecv1.Platform{
OS: runtime.GOOS,
Architecture: runtime.GOARCH,
}
}
components = append(components, Schema2ManifestDescriptor{
Schema2Descriptor{
MediaType: manifest.MediaType,
Size: manifest.Size,
Digest: manifest.Digest,
URLs: slices.Clone(manifest.URLs),
},
schema2PlatformSpecFromOCIPlatform(*platform),
})
}
s2 := Schema2ListPublicFromComponents(components)
return s2, nil
}
// OCI1IndexPublicFromManifest creates an OCI1 manifest index instance from marshalled
// JSON, presumably generated by encoding an OCI1 manifest index.
// This is publicly visible as c/image/manifest.OCI1IndexFromManifest.
func OCI1IndexPublicFromManifest(manifest []byte) (*OCI1IndexPublic, error) {
index := OCI1IndexPublic{
Index: imgspecv1.Index{
Versioned: imgspec.Versioned{SchemaVersion: 2},
MediaType: imgspecv1.MediaTypeImageIndex,
Manifests: []imgspecv1.Descriptor{},
Annotations: make(map[string]string),
},
}
if err := json.Unmarshal(manifest, &index); err != nil {
return nil, fmt.Errorf("unmarshaling OCI1Index %q: %w", string(manifest), err)
}
if err := ValidateUnambiguousManifestFormat(manifest, imgspecv1.MediaTypeImageIndex,
AllowedFieldManifests); err != nil {
return nil, err
}
return &index, nil
}
// Clone returns a deep copy of this list and its contents.
func (index *OCI1IndexPublic) Clone() ListPublic {
return OCI1IndexPublicClone(index)
}
// ConvertToMIMEType converts the passed-in image index to a manifest list of
// the specified type.
func (index *OCI1IndexPublic) ConvertToMIMEType(manifestMIMEType string) (ListPublic, error) {
switch normalized := NormalizedMIMEType(manifestMIMEType); normalized {
case DockerV2ListMediaType:
return index.ToSchema2List()
case imgspecv1.MediaTypeImageIndex:
return index.Clone(), nil
case DockerV2Schema1MediaType, DockerV2Schema1SignedMediaType, imgspecv1.MediaTypeImageManifest, DockerV2Schema2MediaType:
return nil, fmt.Errorf("Can not convert image index to MIME type %q, which is not a list type", manifestMIMEType)
default:
// Note that this may not be reachable, NormalizedMIMEType has a default for unknown values.
return nil, fmt.Errorf("Unimplemented manifest MIME type %s", manifestMIMEType)
}
}
type OCI1Index struct {
OCI1IndexPublic
}
func oci1IndexFromPublic(public *OCI1IndexPublic) *OCI1Index {
return &OCI1Index{*public}
}
func (index *OCI1Index) CloneInternal() List {
return oci1IndexFromPublic(OCI1IndexPublicClone(&index.OCI1IndexPublic))
}
func (index *OCI1Index) Clone() ListPublic {
return index.CloneInternal()
}
// OCI1IndexFromManifest creates an OCI1 manifest list instance from marshalled
// JSON, presumably generated by encoding an OCI1 manifest list.
func OCI1IndexFromManifest(manifest []byte) (*OCI1Index, error) {
public, err := OCI1IndexPublicFromManifest(manifest)
if err != nil {
return nil, err
}
return oci1IndexFromPublic(public), nil
}
// ociPlatformClone returns an independent copy of p.
func ociPlatformClone(p imgspecv1.Platform) imgspecv1.Platform {
// The only practical way in Go to give read-only access to an array is to copy it.
// The only practical way in Go to copy a deep structure is to either do it manually field by field,
// or to use reflection (incl. a round-trip through JSON, which uses reflection).
//
// The combination of the two is just sad, and leads to code like this, which will
// need to be updated with every new Platform field.
return imgspecv1.Platform{
Architecture: p.Architecture,
OS: p.OS,
OSVersion: p.OSVersion,
OSFeatures: slices.Clone(p.OSFeatures),
Variant: p.Variant,
}
}
// schema2PlatformSpecFromOCIPlatform converts an OCI platform p to the schema2 structure.
func schema2PlatformSpecFromOCIPlatform(p imgspecv1.Platform) Schema2PlatformSpec {
return Schema2PlatformSpec{
Architecture: p.Architecture,
OS: p.OS,
OSVersion: p.OSVersion,
OSFeatures: slices.Clone(p.OSFeatures),
Variant: p.Variant,
Features: nil,
}
}

View File

@@ -0,0 +1,34 @@
package multierr
import (
"fmt"
"strings"
)
// Format creates an error value from the input array (which should not be empty).
// If the input contains a single error value, it is returned as is.
// If there are multiple, they are formatted as a multi-error (with Unwrap() []error) with the provided initial, separator, and ending strings.
//
// Typical usage:
//
// var errs []error
// // …
// errs = append(errs, …)
// // …
// if errs != nil { return multierr.Format("Failures doing $FOO", "\n* ", "", errs)}
func Format(first, middle, last string, errs []error) error {
switch len(errs) {
case 0:
return fmt.Errorf("internal error: multierr.Format called with 0 errors")
case 1:
return errs[0]
default:
// We have to do this — and this function only really exists — because fmt.Errorf(format, errs...) is invalid:
// []error is not a valid parameter to a function expecting []any
anyErrs := make([]any, 0, len(errs))
for _, e := range errs {
anyErrs = append(anyErrs, e)
}
return fmt.Errorf(first+"%w"+strings.Repeat(middle+"%w", len(errs)-1)+last, anyErrs...)
}
}
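// A minimal usage sketch (hypothetical caller): for
//
//	errs := []error{errors.New("a"), errors.New("b")}
//	err := Format("Failures doing FOO:\n* ", "\n* ", "", errs)
//
// err.Error() is "Failures doing FOO:\n* a\n* b", and errors.Is/errors.As
// can reach both wrapped errors via Unwrap() []error.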

View File

@@ -0,0 +1,223 @@
package platform
// Largely based on
// https://github.com/moby/moby/blob/bc846d2e8fe5538220e0c31e9d0e8446f6fbc022/distribution/cpuinfo_unix.go
// Copyright 2012-2017 Docker, Inc.
//
// https://github.com/containerd/containerd/blob/726dcaea50883e51b2ec6db13caff0e7936b711d/platforms/cpuinfo.go
// Copyright The containerd Authors.
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// https://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
import (
"bufio"
"fmt"
"os"
"runtime"
"slices"
"strings"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/sirupsen/logrus"
"go.podman.io/image/v5/types"
)
// For Linux, the kernel has already detected the ABI, ISA and Features,
// so we don't need to access the ARM registers to detect platform information
// ourselves. We can just parse this information from /proc/cpuinfo.
func getCPUInfo(pattern string) (info string, err error) {
if runtime.GOOS != "linux" {
return "", fmt.Errorf("getCPUInfo for OS %s not implemented", runtime.GOOS)
}
cpuinfo, err := os.Open("/proc/cpuinfo")
if err != nil {
return "", err
}
defer cpuinfo.Close()
// Parse the cpuinfo line by line. For SMP SoCs, parsing
// the first core is enough.
scanner := bufio.NewScanner(cpuinfo)
for scanner.Scan() {
newline := scanner.Text()
list := strings.Split(newline, ":")
if len(list) > 1 && strings.EqualFold(strings.TrimSpace(list[0]), pattern) {
return strings.TrimSpace(list[1]), nil
}
}
// Check whether the scanner encountered errors
err = scanner.Err()
if err != nil {
return "", err
}
return "", fmt.Errorf("getCPUInfo for pattern: %s not found", pattern)
}
func getCPUVariantDarwinWindows(arch string) string {
// Darwin and Windows only support v7 for ARM32 and v8 for ARM64 and so we can use
// runtime.GOARCH to determine the variants
var variant string
switch arch {
case "arm64":
variant = "v8"
case "arm":
variant = "v7"
default:
variant = ""
}
return variant
}
func getCPUVariantArm() string {
variant, err := getCPUInfo("Cpu architecture")
if err != nil {
logrus.Errorf("Couldn't get cpu architecture: %v", err)
return ""
}
switch strings.ToLower(variant) {
case "8", "aarch64":
variant = "v8"
case "7m", "?(12)", "?(13)", "?(14)", "?(15)", "?(16)", "?(17)":
variant = "v7"
case "7":
// handle RPi Zero variant mismatch due to wrong variant from kernel
// https://github.com/containerd/containerd/pull/4530
// https://www.raspberrypi.org/forums/viewtopic.php?t=12614
// https://github.com/moby/moby/pull/36121#issuecomment-398328286
model, err := getCPUInfo("model name")
if err != nil {
logrus.Errorf("Couldn't get cpu model name, it may be the corner case where variant is 6: %v", err)
return ""
}
// model name is NOT a value provided by the CPU; it is another outcome of Linux CPU detection,
// https://github.com/torvalds/linux/blob/190bf7b14b0cf3df19c059061be032bd8994a597/arch/arm/mm/proc-v6.S#L178C35-L178C35
// (matching happens based on value + mask at https://github.com/torvalds/linux/blob/190bf7b14b0cf3df19c059061be032bd8994a597/arch/arm/mm/proc-v6.S#L273-L274 )
// ARM CPU ID starts with a “main” ID register https://developer.arm.com/documentation/ddi0406/cb/System-Level-Architecture/System-Control-Registers-in-a-VMSA-implementation/VMSA-System-control-registers-descriptions--in-register-order/MIDR--Main-ID-Register--VMSA?lang=en ,
// but the ARMv6/ARMv7 differences are not a single dimension, https://developer.arm.com/documentation/ddi0406/cb/System-Level-Architecture/The-CPUID-Identification-Scheme?lang=en .
// The Linux "cpu architecture" is determined by a “memory model” feature.
//
// So, the "armv6-compatible" check basically checks for a "v6 or v7 CPU, but not one found listed as a known v7 one in the .proc.info.init tables of
// https://github.com/torvalds/linux/blob/190bf7b14b0cf3df19c059061be032bd8994a597/arch/arm/mm/proc-v7.S .
if strings.HasPrefix(strings.ToLower(model), "armv6-compatible") {
logrus.Debugf("Detected corner case, setting cpu variant to v6")
variant = "v6"
} else {
variant = "v7"
}
case "6", "6tej":
variant = "v6"
case "5", "5t", "5te", "5tej":
variant = "v5"
case "4", "4t":
variant = "v4"
case "3":
variant = "v3"
default:
variant = ""
}
return variant
}
func getCPUVariant(os string, arch string) string {
if os == "darwin" || os == "windows" {
return getCPUVariantDarwinWindows(arch)
}
if arch == "arm" || arch == "arm64" {
return getCPUVariantArm()
}
return ""
}
// compatibility contains, for a specified architecture, a list of known variants, in the
// order from most capable (most restrictive) to least capable (most compatible).
// Architectures that don't have variants should not have an entry here.
var compatibility = map[string][]string{
"arm": {"v8", "v7", "v6", "v5"},
"arm64": {"v8"},
}
// WantedPlatforms returns all compatible platforms, with the platform specifics possibly overridden by the user;
// the most compatible platform is first.
// If some option (arch, os, variant) is not present, a value from the current platform is detected.
func WantedPlatforms(ctx *types.SystemContext) []imgspecv1.Platform {
// Note that this does not use Platform.OSFeatures and Platform.OSVersion at all.
// As of version 1.1 of the OCI specification, these fields are not specified usefully enough
// to be interoperable, anyway.
wantedArch := runtime.GOARCH
wantedVariant := ""
if ctx != nil && ctx.ArchitectureChoice != "" {
wantedArch = ctx.ArchitectureChoice
} else {
// Only auto-detect the variant if we are using the default architecture.
// If the user has specified the ArchitectureChoice, don't autodetect, even if
// ctx.ArchitectureChoice == runtime.GOARCH, because we have no idea whether the runtime.GOARCH
// value is relevant to the use case, and if we do autodetect a variant,
// ctx.VariantChoice can't be used to override it back to "".
wantedVariant = getCPUVariant(runtime.GOOS, runtime.GOARCH)
}
if ctx != nil && ctx.VariantChoice != "" {
wantedVariant = ctx.VariantChoice
}
wantedOS := runtime.GOOS
if ctx != nil && ctx.OSChoice != "" {
wantedOS = ctx.OSChoice
}
var variants []string
if wantedVariant != "" {
// If the user requested a specific variant, we'll walk down
// the list from most to least compatible.
if variantOrder := compatibility[wantedArch]; variantOrder != nil {
if i := slices.Index(variantOrder, wantedVariant); i != -1 {
variants = variantOrder[i:]
}
}
if variants == nil {
// The user wants a variant which we know nothing about - not even its compatibility.
variants = []string{wantedVariant}
}
// Make sure to have a candidate with an empty variant as well.
variants = append(variants, "")
} else {
// Make sure to have a candidate with an empty variant as well.
variants = append(variants, "")
// If available add the entire compatibility matrix for the specific architecture.
if possibleVariants, ok := compatibility[wantedArch]; ok {
variants = append(variants, possibleVariants...)
}
}
res := make([]imgspecv1.Platform, 0, len(variants))
for _, v := range variants {
res = append(res, imgspecv1.Platform{
OS: wantedOS,
Architecture: wantedArch,
Variant: v,
})
}
return res
}
// MatchesPlatform returns true if a platform descriptor from a multi-arch image matches
// an item from the return value of WantedPlatforms.
func MatchesPlatform(image imgspecv1.Platform, wanted imgspecv1.Platform) bool {
return image.Architecture == wanted.Architecture &&
image.OS == wanted.OS &&
image.Variant == wanted.Variant
}
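// A minimal sketch (hypothetical values): on a linux/arm machine whose CPU variant
// is detected as v6, WantedPlatforms(nil) yields, most preferred first:
//
//	[]imgspecv1.Platform{
//		{OS: "linux", Architecture: "arm", Variant: "v6"},
//		{OS: "linux", Architecture: "arm", Variant: "v5"},
//		{OS: "linux", Architecture: "arm", Variant: ""},
//	}
//
// and MatchesPlatform is then used to compare each candidate against the
// Platform field of every manifest list entry.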

View File

@@ -0,0 +1,239 @@
package private
import (
"context"
"io"
"time"
"github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/blobinfocache"
"go.podman.io/image/v5/internal/signature"
compression "go.podman.io/image/v5/pkg/compression/types"
"go.podman.io/image/v5/types"
)
// ImageSourceInternalOnly is the part of private.ImageSource that is not
// a part of types.ImageSource.
type ImageSourceInternalOnly interface {
// SupportsGetBlobAt() returns true if GetBlobAt (BlobChunkAccessor) is supported.
SupportsGetBlobAt() bool
// BlobChunkAccessor.GetBlobAt is available only if SupportsGetBlobAt().
BlobChunkAccessor
// GetSignaturesWithFormat returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
GetSignaturesWithFormat(ctx context.Context, instanceDigest *digest.Digest) ([]signature.Signature, error)
}
// ImageSource is an internal extension to the types.ImageSource interface.
type ImageSource interface {
types.ImageSource
ImageSourceInternalOnly
}
// ImageDestinationInternalOnly is the part of private.ImageDestination that is not
// a part of types.ImageDestination.
type ImageDestinationInternalOnly interface {
// SupportsPutBlobPartial returns true if PutBlobPartial is supported.
SupportsPutBlobPartial() bool
// FIXME: Add SupportsSignaturesWithFormat or something like that, to allow early failures
// on unsupported formats.
// NoteOriginalOCIConfig provides the config of the image, as it exists on the source, BUT converted to OCI format,
// or an error obtaining that value (e.g. if the image is an artifact and not a container image).
// The destination can use it in its TryReusingBlob/PutBlob implementations
// (otherwise it only obtains the final config after all layers are written).
NoteOriginalOCIConfig(ociConfig *imgspecv1.Image, configErr error) error
// PutBlobWithOptions writes contents of stream and returns data representing the result.
// inputInfo.Digest can be optionally provided if known; if provided, and stream is read to the end without error, the digest MUST match the stream contents.
// inputInfo.Size is the expected length of stream, if known.
// inputInfo.MediaType describes the blob format, if known.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlobWithOptions MUST 1) fail, and 2) delete any data stored so far.
PutBlobWithOptions(ctx context.Context, stream io.Reader, inputInfo types.BlobInfo, options PutBlobOptions) (UploadedBlob, error)
// PutBlobPartial attempts to create a blob using the data that is already present
// at the destination. chunkAccessor is accessed in a non-sequential way to retrieve the missing chunks.
// It is available only if SupportsPutBlobPartial().
// Even if SupportsPutBlobPartial() returns true, the call can fail.
// If the call fails with ErrFallbackToOrdinaryLayerDownload, the caller can fall back to PutBlobWithOptions.
// The fallback _must not_ be done otherwise.
PutBlobPartial(ctx context.Context, chunkAccessor BlobChunkAccessor, srcInfo types.BlobInfo, options PutBlobPartialOptions) (UploadedBlob, error)
// TryReusingBlobWithOptions checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
// If the blob has been successfully reused, returns (true, info, nil).
// If the transport can not reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
TryReusingBlobWithOptions(ctx context.Context, info types.BlobInfo, options TryReusingBlobOptions) (bool, ReusedBlob, error)
// PutSignaturesWithFormat writes a set of signatures to the destination.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to write or overwrite the signatures for
// (when the primary manifest is a manifest list); this should always be nil if the primary manifest is not a manifest list.
// MUST be called after PutManifest (signatures may reference manifest contents).
PutSignaturesWithFormat(ctx context.Context, signatures []signature.Signature, instanceDigest *digest.Digest) error
// CommitWithOptions marks the process of storing the image as successful and asks for the image to be persisted.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before CommitWithOptions() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without CommitWithOptions() (i.e. rollback is allowed but not guaranteed)
CommitWithOptions(ctx context.Context, options CommitOptions) error
}
// ImageDestination is an internal extension to the types.ImageDestination
// interface.
type ImageDestination interface {
types.ImageDestination
ImageDestinationInternalOnly
}
// UploadedBlob is information about a blob written to a destination.
// It is the subset of types.BlobInfo fields the transport is responsible for setting; all fields must be provided.
type UploadedBlob struct {
Digest digest.Digest
Size int64
}
// PutBlobOptions are used in PutBlobWithOptions.
type PutBlobOptions struct {
Cache blobinfocache.BlobInfoCache2 // Cache to optionally update with the uploaded blob / look up blob infos.
IsConfig bool // True if the blob is a config
// The following fields are new to internal/private. Users of internal/private MUST fill them in,
// but they also must expect that they will be ignored by types.ImageDestination transports.
// Transports, OTOH, MUST support these fields being zero-valued for types.ImageDestination callers
// if they use internal/imagedestination/impl.Compat;
// in that case, they will all be consistently zero-valued.
EmptyLayer bool // True if the blob is an "empty"/"throwaway" layer, and may not necessarily be physically represented.
LayerIndex *int // If the blob is a layer, a zero-based index of the layer within the image; nil otherwise.
}
// PutBlobPartialOptions are used in PutBlobPartial.
type PutBlobPartialOptions struct {
Cache blobinfocache.BlobInfoCache2 // Cache to use and/or update.
EmptyLayer bool // True if the blob is an "empty"/"throwaway" layer, and may not necessarily be physically represented.
LayerIndex int // A zero-based index of the layer within the image (PutBlobPartial is only called with layer-like blobs, not configs)
}
// TryReusingBlobOptions are used in TryReusingBlobWithOptions.
type TryReusingBlobOptions struct {
Cache blobinfocache.BlobInfoCache2 // Cache to use and/or update.
// If true, it is allowed to use an equivalent of the desired blob;
// in that case the returned info may not match the input.
CanSubstitute bool
// The following fields are new to internal/private. Users of internal/private MUST fill them in,
// but they also must expect that they will be ignored by types.ImageDestination transports.
// Transports, OTOH, MUST support these fields being zero-valued for types.ImageDestination callers
// if they use internal/imagedestination/impl.Compat;
// in that case, they will all be consistently zero-valued.
EmptyLayer bool // True if the blob is an "empty"/"throwaway" layer, and may not necessarily be physically represented.
LayerIndex *int // If the blob is a layer, a zero-based index of the layer within the image; nil otherwise.
SrcRef reference.Named // A reference to the source image that contains the input blob.
PossibleManifestFormats []string // A set of possible manifest formats; at least one should support the reused layer blob.
RequiredCompression *compression.Algorithm // If set, reuse blobs with a matching algorithm as per implementations in internal/imagedestination/impl.helpers.go
OriginalCompression *compression.Algorithm // May be nil to indicate “uncompressed” or “unknown”.
TOCDigest digest.Digest // If specified, the blob can be looked up in the destination also by its TOC digest.
}
// ReusedBlob is information about a blob reused in a destination.
// It is the subset of types.BlobInfo fields the transport is responsible for setting.
type ReusedBlob struct {
Digest digest.Digest // Must be provided
Size int64 // Must be provided
// The following compression fields should be set when the reuse substitutes
// a differently-compressed blob.
// They may be set also to change from a base variant to a specific variant of an algorithm.
CompressionOperation types.LayerCompression // Compress/Decompress, matching the reused blob; PreserveOriginal if N/A
CompressionAlgorithm *compression.Algorithm // Algorithm if compressed, nil if decompressed or N/A
// Annotations that should be added, for CompressionAlgorithm. Note that they might need to be
// added even if the digest doesn't change (if we found the annotations in a cache).
CompressionAnnotations map[string]string
MatchedByTOCDigest bool // Whether the layer was reused/matched by TOC digest. Used only for UI purposes.
}
// CommitOptions are used in CommitWithOptions
type CommitOptions struct {
// UnparsedToplevel contains data about the top-level manifest of the source (which may be a single-arch image or a manifest list
// if PutManifest was only called for the single-arch image with instanceDigest == nil), primarily to allow lookups by the
// original manifest list digest, if desired.
UnparsedToplevel types.UnparsedImage
// ReportResolvedReference, if set, asks the transport to store a “resolved” (more detailed) reference to the created image
// into the value this option points to.
// What “resolved” means is transport-specific.
// Transports which don't support reporting resolved references can ignore the field; the generic copy code writes "nil" into the value.
ReportResolvedReference *types.ImageReference
// Timestamp, if set, will force timestamps of content created in the destination to this value.
// Most transports don't support this.
//
// In oci-archive: destinations, this will set the create/mod/access timestamps in each tar entry
// (but not a timestamp of the created archive file).
Timestamp *time.Time
}
// ImageSourceChunk is a portion of a blob.
// This API is experimental and can be changed without bumping the major version number.
type ImageSourceChunk struct {
// Offset specifies the starting position of the chunk within the source blob.
Offset uint64
// Length specifies the size of the chunk. If it is set to math.MaxUint64,
// then it refers to all the data from Offset to the end of the blob.
Length uint64
}
// BlobChunkAccessor allows fetching discontiguous chunks of a blob.
type BlobChunkAccessor interface {
// GetBlobAt returns a sequential channel of readers that contain data for the requested
// blob chunks, and a channel that might get a single error value.
// The specified chunks must not overlap and must be sorted by their offset.
// The readers must be fully consumed, in the order they are returned, before blocking
// to read the next chunk.
// If the Length for the last chunk is set to math.MaxUint64, then it
// fully fetches the remaining data from the offset to the end of the blob.
GetBlobAt(ctx context.Context, info types.BlobInfo, chunks []ImageSourceChunk) (chan io.ReadCloser, chan error, error)
}
// BadPartialRequestError is returned by BlobChunkAccessor.GetBlobAt on an invalid request.
type BadPartialRequestError struct {
Status string
}
func (e BadPartialRequestError) Error() string {
return e.Status
}
// UnparsedImage is an internal extension to the types.UnparsedImage interface.
type UnparsedImage interface {
types.UnparsedImage
// UntrustedSignatures is like ImageSource.GetSignaturesWithFormat, but the result is cached; it is OK to call this however often you need.
UntrustedSignatures(ctx context.Context) ([]signature.Signature, error)
}
// ErrFallbackToOrdinaryLayerDownload is a custom error type returned by PutBlobPartial.
// It suggests to the caller that a fallback mechanism can be used instead of a hard failure;
// otherwise the caller of PutBlobPartial _must not_ fall back to PutBlob.
type ErrFallbackToOrdinaryLayerDownload struct {
err error
}
func (c ErrFallbackToOrdinaryLayerDownload) Error() string {
return c.err.Error()
}
func (c ErrFallbackToOrdinaryLayerDownload) Unwrap() error {
return c.err
}
func NewErrFallbackToOrdinaryLayerDownload(err error) error {
return ErrFallbackToOrdinaryLayerDownload{err: err}
}

View File

@@ -0,0 +1,57 @@
package putblobdigest
import (
"io"
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/types"
)
// Digester computes a digest of the provided stream, if not known yet.
type Digester struct {
knownDigest digest.Digest // Or ""
digester digest.Digester // Or nil
}
// newDigester initiates computation of a digest.Canonical digest of stream,
// if !validDigest; otherwise it just records knownDigest to be returned later.
// The caller MUST use the returned stream instead of the original value.
func newDigester(stream io.Reader, knownDigest digest.Digest, validDigest bool) (Digester, io.Reader) {
if validDigest {
return Digester{knownDigest: knownDigest}, stream
} else {
res := Digester{
digester: digest.Canonical.Digester(),
}
stream = io.TeeReader(stream, res.digester.Hash())
return res, stream
}
}
// DigestIfUnknown initiates computation of a digest.Canonical digest of stream,
// if no digest is supplied in the provided blobInfo; otherwise blobInfo.Digest will
// be used (accepting any algorithm).
// The caller MUST use the returned stream instead of the original value.
func DigestIfUnknown(stream io.Reader, blobInfo types.BlobInfo) (Digester, io.Reader) {
d := blobInfo.Digest
return newDigester(stream, d, d != "")
}
// DigestIfCanonicalUnknown initiates computation of a digest.Canonical digest of stream,
// if a digest.Canonical digest is not supplied in the provided blobInfo;
// otherwise blobInfo.Digest will be used.
// The caller MUST use the returned stream instead of the original value.
func DigestIfCanonicalUnknown(stream io.Reader, blobInfo types.BlobInfo) (Digester, io.Reader) {
d := blobInfo.Digest
return newDigester(stream, d, d != "" && d.Algorithm() == digest.Canonical)
}
// Digest() returns a digest value possibly computed by Digester.
// This must be called only after all of the stream returned by a Digester constructor
// has been successfully read.
func (d Digester) Digest() digest.Digest {
if d.digester != nil {
return d.digester.Digest()
}
return d.knownDigest
}
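// A minimal usage sketch (hypothetical caller; dest, stream and blobInfo are assumed):
//
//	digester, wrapped := DigestIfCanonicalUnknown(stream, blobInfo)
//	if _, err := io.Copy(dest, wrapped); err != nil { /* handle */ }
//	blobDigest := digester.Digest() // valid only after wrapped was fully read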

View File

@@ -0,0 +1,25 @@
package rootless
import (
"os"
"strconv"
)
// GetRootlessEUID returns the UID of the current user (in the parent userNS, if any)
//
// Podman and similar software, in “rootless” configuration, when run as a non-root
// user, very early switches to a user namespace, where Geteuid() == 0 (but does not
// switch to a limited mount namespace); so, code relying on Geteuid() would use
// system-wide paths in e.g. /var, when the user is actually not privileged to write to
// them, and expects state to be stored in the home directory.
//
// If Podman is setting up such a user namespace, it records the original UID in an
// environment variable, allowing us to make choices based on the actual user's identity.
func GetRootlessEUID() int {
euidEnv := os.Getenv("_CONTAINERS_ROOTLESS_UID")
if euidEnv != "" {
euid, _ := strconv.Atoi(euidEnv)
return euid
}
return os.Geteuid()
}

55
vendor/go.podman.io/image/v5/internal/set/set.go generated vendored Normal file
View File

@@ -0,0 +1,55 @@
package set
import (
"iter"
"maps"
)
// FIXME:
// - Docstrings
// - This should be in a public library somewhere
type Set[E comparable] struct {
m map[E]struct{}
}
func New[E comparable]() *Set[E] {
return &Set[E]{
m: map[E]struct{}{},
}
}
func NewWithValues[E comparable](values ...E) *Set[E] {
s := New[E]()
for _, v := range values {
s.Add(v)
}
return s
}
func (s *Set[E]) Add(v E) {
s.m[v] = struct{}{} // Possibly writing the same struct{}{} presence marker again.
}
func (s *Set[E]) AddSeq(seq iter.Seq[E]) {
for v := range seq {
s.Add(v)
}
}
func (s *Set[E]) Delete(v E) {
delete(s.m, v)
}
func (s *Set[E]) Contains(v E) bool {
_, ok := s.m[v]
return ok
}
func (s *Set[E]) Empty() bool {
return len(s.m) == 0
}
func (s *Set[E]) All() iter.Seq[E] {
return maps.Keys(s.m)
}
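// A minimal usage sketch:
//
//	s := NewWithValues("a", "b")
//	s.Add("c")
//	s.Delete("b")
//	for v := range s.All() { // iteration order is unspecified
//		_ = s.Contains(v) // always true here
//	}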

View File

@@ -0,0 +1,102 @@
package signature
import (
"bytes"
"errors"
"fmt"
)
// FIXME FIXME: MIME type? Int? String?
// An interface with a name, parse methods?
type FormatID string
const (
SimpleSigningFormat FormatID = "simple-signing"
SigstoreFormat FormatID = "sigstore-json"
// Update also UnsupportedFormatError below
)
// Signature is an image signature of some kind.
type Signature interface {
FormatID() FormatID
// blobChunk returns a representation of signature as a []byte, suitable for long-term storage.
// Almost everyone should use signature.Blob() instead.
blobChunk() ([]byte, error)
}
// Blob returns a representation of sig as a []byte, suitable for long-term storage.
func Blob(sig Signature) ([]byte, error) {
chunk, err := sig.blobChunk()
if err != nil {
return nil, err
}
format := sig.FormatID()
switch format {
case SimpleSigningFormat:
// For compatibility with old dir formats:
return chunk, nil
default:
res := []byte{0} // Start with a zero byte to clearly mark this is a binary format, and disambiguate from random text.
res = append(res, []byte(format)...)
res = append(res, '\n')
res = append(res, chunk...)
return res, nil
}
}
// FromBlob returns a signature from parsing a blob created by signature.Blob.
func FromBlob(blob []byte) (Signature, error) {
if len(blob) == 0 {
return nil, errors.New("empty signature blob")
}
// Historically we've just been using GPG with no identification; try to auto-detect that.
switch blob[0] {
// OpenPGP "compressed data" wrapping the message
case 0xA0, 0xA1, 0xA2, 0xA3, // bit 7 = 1; bit 6 = 0 (old packet format); bits 5…2 = 8 (tag: compressed data packet); bits 1…0 = length-type (any)
0xC8, // bit 7 = 1; bit 6 = 1 (new packet format); bits 5…0 = 8 (tag: compressed data packet)
// OpenPGP “one-pass signature” starting a signature
0x90, 0x91, 0x92, 0x3d, // bit 7 = 1; bit 6 = 0 (old packet format); bits 5…2 = 4 (tag: one-pass signature packet); bits 1…0 = length-type (any)
0xC4, // bit 7 = 1; bit 6 = 1 (new packet format); bits 5…0 = 4 (tag: one-pass signature packet)
// OpenPGP signature packet signing the following data
0x88, 0x89, 0x8A, 0x8B, // bit 7 = 1; bit 6 = 0 (old packet format); bits 5…2 = 2 (tag: signature packet); bits 1…0 = length-type (any)
0xC2: // bit 7 = 1; bit 6 = 1 (new packet format); bits 5…0 = 2 (tag: signature packet)
return SimpleSigningFromBlob(blob), nil
// The newer format: binary 0, format name, newline, data
case 0x00:
blob = blob[1:]
formatBytes, blobChunk, foundNewline := bytes.Cut(blob, []byte{'\n'})
if !foundNewline {
return nil, fmt.Errorf("invalid signature format, missing newline")
}
for _, b := range formatBytes {
if b < 32 || b >= 0x7F {
return nil, fmt.Errorf("invalid signature format, non-ASCII byte %#x", b)
}
}
switch {
case bytes.Equal(formatBytes, []byte(SimpleSigningFormat)):
return SimpleSigningFromBlob(blobChunk), nil
case bytes.Equal(formatBytes, []byte(SigstoreFormat)):
return sigstoreFromBlobChunk(blobChunk)
default:
return nil, fmt.Errorf("unrecognized signature format %q", string(formatBytes))
}
default:
return nil, fmt.Errorf("unrecognized signature format, starting with binary %#x", blob[0])
}
}
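// A minimal round-trip sketch (hypothetical caller; sig is assumed to be a
// Signature value created elsewhere in this package):
//
//	blob, err := Blob(sig) // for sigstore: 0x00 + "sigstore-json" + '\n' + JSON chunk
//	if err != nil { /* handle */ }
//	parsed, err := FromBlob(blob)
//	if err != nil { /* handle */ }
//	_ = parsed.FormatID() // matches sig.FormatID()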
// UnsupportedFormatError returns an error complaining about sig having an unsupported format.
func UnsupportedFormatError(sig Signature) error {
formatID := sig.FormatID()
switch formatID {
case SimpleSigningFormat, SigstoreFormat:
return fmt.Errorf("unsupported signature format %s", string(formatID))
default:
return fmt.Errorf("unsupported, and unrecognized, signature format %q", string(formatID))
}
}

View File

@@ -0,0 +1,86 @@
package signature
import (
"bytes"
"encoding/json"
"maps"
)
const (
// from sigstore/cosign/pkg/types.SimpleSigningMediaType
SigstoreSignatureMIMEType = "application/vnd.dev.cosign.simplesigning.v1+json"
// from sigstore/cosign/pkg/oci/static.SignatureAnnotationKey
SigstoreSignatureAnnotationKey = "dev.cosignproject.cosign/signature"
// from sigstore/cosign/pkg/oci/static.BundleAnnotationKey
SigstoreSETAnnotationKey = "dev.sigstore.cosign/bundle"
// from sigstore/cosign/pkg/oci/static.CertificateAnnotationKey
SigstoreCertificateAnnotationKey = "dev.sigstore.cosign/certificate"
// from sigstore/cosign/pkg/oci/static.ChainAnnotationKey
SigstoreIntermediateCertificateChainAnnotationKey = "dev.sigstore.cosign/chain"
)
// Sigstore is a github.com/cosign/cosign signature.
// For the persistent-storage format used for blobChunk(), we want
// a degree of forward compatibility against unexpected field changes
// (as has happened before), which is why this data type
// contains just a payload + annotations (including annotations
// that we don't recognize or support), instead of individual fields
// for the known annotations.
type Sigstore struct {
untrustedMIMEType string
untrustedPayload []byte
untrustedAnnotations map[string]string
}
// sigstoreJSONRepresentation needs the fields to be public, which we don't want for
// the main Sigstore type.
type sigstoreJSONRepresentation struct {
UntrustedMIMEType string `json:"mimeType"`
UntrustedPayload []byte `json:"payload"`
UntrustedAnnotations map[string]string `json:"annotations"`
}
// SigstoreFromComponents returns a Sigstore object from its components.
func SigstoreFromComponents(untrustedMimeType string, untrustedPayload []byte, untrustedAnnotations map[string]string) Sigstore {
return Sigstore{
untrustedMIMEType: untrustedMimeType,
untrustedPayload: bytes.Clone(untrustedPayload),
untrustedAnnotations: maps.Clone(untrustedAnnotations),
}
}
// sigstoreFromBlobChunk converts a Sigstore signature, as returned by Sigstore.blobChunk, into a Sigstore object.
func sigstoreFromBlobChunk(blobChunk []byte) (Sigstore, error) {
var v sigstoreJSONRepresentation
if err := json.Unmarshal(blobChunk, &v); err != nil {
return Sigstore{}, err
}
return SigstoreFromComponents(v.UntrustedMIMEType,
v.UntrustedPayload,
v.UntrustedAnnotations), nil
}
func (s Sigstore) FormatID() FormatID {
return SigstoreFormat
}
// blobChunk returns a representation of signature as a []byte, suitable for long-term storage.
// Almost everyone should use signature.Blob() instead.
func (s Sigstore) blobChunk() ([]byte, error) {
return json.Marshal(sigstoreJSONRepresentation{
UntrustedMIMEType: s.UntrustedMIMEType(),
UntrustedPayload: s.UntrustedPayload(),
UntrustedAnnotations: s.UntrustedAnnotations(),
})
}
func (s Sigstore) UntrustedMIMEType() string {
return s.untrustedMIMEType
}
func (s Sigstore) UntrustedPayload() []byte {
return bytes.Clone(s.untrustedPayload)
}
func (s Sigstore) UntrustedAnnotations() map[string]string {
return maps.Clone(s.untrustedAnnotations)
}
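// A minimal construction sketch (hypothetical caller; payloadJSON and
// base64Sig are made-up inputs):
//
//	sig := SigstoreFromComponents(
//		SigstoreSignatureMIMEType,
//		payloadJSON, // the simple-signing JSON payload bytes
//		map[string]string{SigstoreSignatureAnnotationKey: base64Sig},
//	)
//	_ = sig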

View File

@@ -0,0 +1,29 @@
package signature
import "bytes"
// SimpleSigning is a “simple signing” signature.
type SimpleSigning struct {
untrustedSignature []byte
}
// SimpleSigningFromBlob converts a “simple signing” signature into a SimpleSigning object.
func SimpleSigningFromBlob(blobChunk []byte) SimpleSigning {
return SimpleSigning{
untrustedSignature: bytes.Clone(blobChunk),
}
}
func (s SimpleSigning) FormatID() FormatID {
return SimpleSigningFormat
}
// blobChunk returns a representation of signature as a []byte, suitable for long-term storage.
// Almost everyone should use signature.Blob() instead.
func (s SimpleSigning) blobChunk() ([]byte, error) {
return bytes.Clone(s.untrustedSignature), nil
}
func (s SimpleSigning) UntrustedSignature() []byte {
return bytes.Clone(s.untrustedSignature)
}

View File

@@ -0,0 +1,40 @@
package streamdigest
import (
"fmt"
"io"
"os"
"go.podman.io/image/v5/internal/putblobdigest"
"go.podman.io/image/v5/internal/tmpdir"
"go.podman.io/image/v5/types"
)
// ComputeBlobInfo streams a blob to a temporary file and populates Digest and Size in inputInfo.
// The temporary file is returned as an io.Reader along with a cleanup function.
// It is the caller's responsibility to call the cleanup function, which closes and removes the temporary file.
// If an error occurs, inputInfo is not modified.
func ComputeBlobInfo(sys *types.SystemContext, stream io.Reader, inputInfo *types.BlobInfo) (io.Reader, func(), error) {
diskBlob, err := tmpdir.CreateBigFileTemp(sys, "stream-blob")
if err != nil {
return nil, nil, fmt.Errorf("creating temporary on-disk layer: %w", err)
}
cleanup := func() {
diskBlob.Close()
os.Remove(diskBlob.Name())
}
digester, stream := putblobdigest.DigestIfCanonicalUnknown(stream, *inputInfo)
written, err := io.Copy(diskBlob, stream)
if err != nil {
cleanup()
return nil, nil, fmt.Errorf("writing to temporary on-disk layer: %w", err)
}
_, err = diskBlob.Seek(0, io.SeekStart)
if err != nil {
cleanup()
return nil, nil, fmt.Errorf("rewinding temporary on-disk layer: %w", err)
}
inputInfo.Digest = digester.Digest()
inputInfo.Size = written
return diskBlob, cleanup, nil
}
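// A minimal usage sketch (hypothetical caller; sys, stream and inputInfo are assumed):
//
//	reader, cleanup, err := ComputeBlobInfo(sys, stream, &inputInfo)
//	if err != nil { /* handle */ }
//	defer cleanup() // closes and removes the temporary file
//	// inputInfo.Digest and inputInfo.Size are now set; reader re-reads the blob from the start.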

44
vendor/go.podman.io/image/v5/internal/tmpdir/tmpdir.go generated vendored Normal file
View File

@@ -0,0 +1,44 @@
package tmpdir
import (
"os"
"runtime"
"go.podman.io/image/v5/types"
)
// unixTempDirForBigFiles is the directory path to store big files on non-Windows systems.
// You can override this at build time with
// -ldflags '-X go.podman.io/image/v5/internal/tmpdir.unixTempDirForBigFiles=$your_path'
var unixTempDirForBigFiles = builtinUnixTempDirForBigFiles
// builtinUnixTempDirForBigFiles is the directory path to store big files.
// Do not use the system default of os.TempDir(), usually /tmp, because with systemd it could be a tmpfs.
// DO NOT change this, instead see unixTempDirForBigFiles above.
const builtinUnixTempDirForBigFiles = "/var/tmp"
const prefix = "container_images_"
// temporaryDirectoryForBigFiles returns a directory for temporary (big) files.
// On non-Windows systems it avoids the use of os.TempDir(), because the default temporary directory usually falls under /tmp,
// which on systemd-based systems could be the unsuitable tmpfs filesystem.
func temporaryDirectoryForBigFiles(sys *types.SystemContext) string {
if sys != nil && sys.BigFilesTemporaryDir != "" {
return sys.BigFilesTemporaryDir
}
var temporaryDirectoryForBigFiles string
if runtime.GOOS == "windows" {
temporaryDirectoryForBigFiles = os.TempDir()
} else {
temporaryDirectoryForBigFiles = unixTempDirForBigFiles
}
return temporaryDirectoryForBigFiles
}
func CreateBigFileTemp(sys *types.SystemContext, name string) (*os.File, error) {
return os.CreateTemp(temporaryDirectoryForBigFiles(sys), prefix+name)
}
func MkDirBigFileTemp(sys *types.SystemContext, name string) (string, error) {
return os.MkdirTemp(temporaryDirectoryForBigFiles(sys), prefix+name)
}
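// A minimal usage sketch (hypothetical caller):
//
//	f, err := CreateBigFileTemp(sys, "layer")
//	if err != nil { /* handle */ }
//	defer func() { f.Close(); os.Remove(f.Name()) }()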

View File

@@ -0,0 +1,61 @@
package uploadreader
import (
"io"
"sync"
)
// UploadReader is a pass-through reader for use in sending non-trivial data using the net/http
// package (http.NewRequest, http.Post and the like).
//
// The net/http package uses a separate goroutine to upload data to an HTTP connection,
// and it is possible for the server to return a response (typically an error) before consuming
// the full body of the request. In that case http.Client.Do can return with an error while
// the body is still being read — regardless of the cancellation, if any, of http.Request.Context().
//
// As a result, any data used/updated by the io.Reader() provided as the request body may be
// used/updated even after http.Client.Do returns, causing races.
//
// To fix this, UploadReader provides a synchronized Terminate() method, which can block for
// a not-completely-negligible time (for a duration of the underlying Read()), but guarantees that
// after Terminate() returns, the underlying reader is never used any more (unlike calling
// the cancellation callback of context.WithCancel, which returns before any recipients may have
// reacted to the cancellation).
type UploadReader struct {
mutex sync.Mutex
// The following members can only be used with mutex held
reader io.Reader
terminationError error // nil if not terminated yet
}
// NewUploadReader returns an UploadReader for an "underlying" reader.
func NewUploadReader(underlying io.Reader) *UploadReader {
return &UploadReader{
reader: underlying,
terminationError: nil,
}
}
// Read returns the error set by Terminate, if any, or calls the underlying reader.
// It is safe to call this from a different goroutine than Terminate.
func (ur *UploadReader) Read(p []byte) (int, error) {
ur.mutex.Lock()
defer ur.mutex.Unlock()
if ur.terminationError != nil {
return 0, ur.terminationError
}
return ur.reader.Read(p)
}
// Terminate waits for in-progress Read calls, if any, to finish, and ensures that after
// this function returns, any Read calls will fail with the provided error, and the underlying
// reader will never be used any more.
//
// It is safe to call this from a different goroutine than Read.
func (ur *UploadReader) Terminate(err error) {
ur.mutex.Lock() // May block for some time if ur.reader.Read() is in progress
defer ur.mutex.Unlock()
ur.terminationError = err
}
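// A minimal usage sketch (hypothetical caller; client, url and layerStream are assumed):
//
//	body := NewUploadReader(layerStream)
//	req, err := http.NewRequest(http.MethodPut, url, body)
//	if err != nil { /* handle */ }
//	resp, err := client.Do(req)
//	// Regardless of the outcome, quiesce the reader before layerStream is
//	// reused or closed by anyone else:
//	body.Terminate(errors.New("upload finished"))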

View File

@@ -0,0 +1,6 @@
package useragent
import "go.podman.io/image/v5/version"
// DefaultUserAgent is a value that should be used by User-Agent headers, unless the user specifically instructs us otherwise.
var DefaultUserAgent = "containers/" + version.Version + " (github.com/containers/image)"

152
vendor/go.podman.io/image/v5/manifest/common.go generated vendored Normal file
View File

@@ -0,0 +1,152 @@
package manifest
import (
"fmt"
"github.com/sirupsen/logrus"
compressiontypes "go.podman.io/image/v5/pkg/compression/types"
"go.podman.io/image/v5/types"
)
// layerInfosToStrings converts a list of layer infos, presumably obtained from a Manifest.LayerInfos()
// method call, into a format suitable for inclusion in a types.ImageInspectInfo structure.
func layerInfosToStrings(infos []LayerInfo) []string {
layers := make([]string, len(infos))
for i, info := range infos {
layers[i] = info.Digest.String()
}
return layers
}
// compressionMIMETypeSet describes a set of MIME type “variants” that represent differently-compressed
// versions of “the same kind of content”.
// The map key is the return value of compressiontypes.Algorithm.Name(), or mtsUncompressed;
// the map value is a MIME type, or mtsUnsupportedMIMEType to mean "recognized but unsupported".
type compressionMIMETypeSet map[string]string
const mtsUncompressed = "" // A key in compressionMIMETypeSet for the uncompressed variant
const mtsUnsupportedMIMEType = "" // A value in compressionMIMETypeSet that means “recognized but unsupported”
// findCompressionMIMETypeSet returns the compressionMIMETypeSet in variantTable that contains a value of mimeType, or nil if not found
func findCompressionMIMETypeSet(variantTable []compressionMIMETypeSet, mimeType string) compressionMIMETypeSet {
for _, variants := range variantTable {
for _, mt := range variants {
if mt == mimeType {
return variants
}
}
}
return nil
}
// compressionVariantMIMEType returns a variant of mimeType for the specified algorithm (which may be nil
// to mean "no compression"), based on variantTable.
// The returned error will be a ManifestLayerCompressionIncompatibilityError if mimeType has variants
// that differ only in what type of compression is applied, but it can't be combined with this
// algorithm to produce an updated MIME type that complies with the standard that defines mimeType.
// If the compression algorithm is unrecognized, or mimeType is not known to have variants that
// differ from it only in what type of compression has been applied, the returned error will not be
// a ManifestLayerCompressionIncompatibilityError.
func compressionVariantMIMEType(variantTable []compressionMIMETypeSet, mimeType string, algorithm *compressiontypes.Algorithm) (string, error) {
if mimeType == mtsUnsupportedMIMEType { // Prevent matching against the {algo:mtsUnsupportedMIMEType} entries
return "", fmt.Errorf("cannot update unknown MIME type")
}
variants := findCompressionMIMETypeSet(variantTable, mimeType)
if variants != nil {
name := mtsUncompressed
if algorithm != nil {
name = algorithm.BaseVariantName()
}
if res, ok := variants[name]; ok {
if res != mtsUnsupportedMIMEType {
return res, nil
}
if name != mtsUncompressed {
return "", ManifestLayerCompressionIncompatibilityError{fmt.Sprintf("%s compression is not supported for type %q", name, mimeType)}
}
return "", ManifestLayerCompressionIncompatibilityError{fmt.Sprintf("uncompressed variant is not supported for type %q", mimeType)}
}
if name != mtsUncompressed {
return "", ManifestLayerCompressionIncompatibilityError{fmt.Sprintf("unknown compressed with algorithm %s variant for type %q", name, mimeType)}
}
// We can't very well say “the idea of no compression is unknown”
return "", ManifestLayerCompressionIncompatibilityError{fmt.Sprintf("uncompressed variant is not supported for type %q", mimeType)}
}
if algorithm != nil {
return "", fmt.Errorf("unsupported MIME type for compression: %q", mimeType)
}
return "", fmt.Errorf("unsupported MIME type for decompression: %q", mimeType)
}
// updatedMIMEType returns the result of applying edits in updated (MediaType, CompressionOperation) to
// mimeType, based on variantTable. It may use updated.Digest for error messages.
// The returned error will be a ManifestLayerCompressionIncompatibilityError if mimeType has variants
// that differ only in what type of compression is applied, but applying updated.CompressionOperation
// and updated.CompressionAlgorithm to it won't produce an updated MIME type that complies with the
// standard that defines mimeType.
func updatedMIMEType(variantTable []compressionMIMETypeSet, mimeType string, updated types.BlobInfo) (string, error) {
// Note that manifests in containers-storage might be reporting the
// wrong media type since the original manifests are stored while layers
// are decompressed in storage. Hence, we need to consider the case
// that an already {de}compressed layer should be {de}compressed;
// compressionVariantMIMEType does that by not caring whether the original is
// {de}compressed.
switch updated.CompressionOperation {
case types.PreserveOriginal:
// Force a change to the media type if we're being told to use a particular compressor,
// since it might be different from the one associated with the media type. Otherwise,
// try to keep the original media type.
if updated.CompressionAlgorithm != nil {
return compressionVariantMIMEType(variantTable, mimeType, updated.CompressionAlgorithm)
}
// Keep the original media type.
return mimeType, nil
case types.Decompress:
return compressionVariantMIMEType(variantTable, mimeType, nil)
case types.Compress:
if updated.CompressionAlgorithm == nil {
logrus.Debugf("Error preparing updated manifest: blob %q was compressed but does not specify by which algorithm: falling back to use the original blob", updated.Digest)
return mimeType, nil
}
return compressionVariantMIMEType(variantTable, mimeType, updated.CompressionAlgorithm)
default:
return "", fmt.Errorf("unknown compression operation (%d)", updated.CompressionOperation)
}
}
// ManifestLayerCompressionIncompatibilityError indicates that a specified compression algorithm
// could not be applied to a layer MIME type. A caller that receives this should either retry
// the call with a different compression algorithm, or attempt to use a different manifest type.
type ManifestLayerCompressionIncompatibilityError struct {
text string
}
func (m ManifestLayerCompressionIncompatibilityError) Error() string {
return m.text
}
// compressionVariantsRecognizeMIMEType returns true if variantTable contains data about compressing/decompressing layers with mimeType
// Note that the caller still needs to worry about a specific algorithm not being supported.
func compressionVariantsRecognizeMIMEType(variantTable []compressionMIMETypeSet, mimeType string) bool {
if mimeType == mtsUnsupportedMIMEType { // Prevent matching against the {algo:mtsUnsupportedMIMEType} entries
return false
}
variants := findCompressionMIMETypeSet(variantTable, mimeType)
return variants != nil // Alternatively, this could be len(variants) > 1, but really the caller should ask about a specific algorithm.
}
// imgInspectLayersFromLayerInfos converts a list of layer infos, presumably obtained from a Manifest.LayerInfos()
// method call, into a format suitable for inclusion in a types.ImageInspectInfo structure.
func imgInspectLayersFromLayerInfos(infos []LayerInfo) []types.ImageInspectLayer {
layers := make([]types.ImageInspectLayer, len(infos))
for i, info := range infos {
layers[i].MIMEType = info.MediaType
layers[i].Digest = info.Digest
layers[i].Size = info.Size
layers[i].Annotations = info.Annotations
}
return layers
}
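To make the variant-table behavior concrete, a hedged in-package sketch (the helpers are unexported, so this could only live in package manifest, e.g. in a test; it uses schema2CompressionMIMETypeSets from docker_schema2.go below, plus assumed imports of fmt and go.podman.io/image/v5/pkg/compression):

func exampleVariantLookups() {
	// types.Decompress maps the gzipped schema2 layer type to its uncompressed variant.
	mt, err := updatedMIMEType(schema2CompressionMIMETypeSets,
		DockerV2Schema2LayerMediaType,
		types.BlobInfo{CompressionOperation: types.Decompress})
	fmt.Println(mt, err) // application/vnd.docker.image.rootfs.diff.tar <nil>

	// zstd maps to mtsUnsupportedMIMEType in the schema2 table, so this fails
	// with a ManifestLayerCompressionIncompatibilityError.
	_, err = updatedMIMEType(schema2CompressionMIMETypeSets,
		DockerV2Schema2LayerMediaType,
		types.BlobInfo{
			CompressionOperation: types.Compress,
			CompressionAlgorithm: &compression.Zstd,
		})
	fmt.Println(err) // zstd compression is not supported for type ...
}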

vendor/go.podman.io/image/v5/manifest/docker_schema1.go generated vendored Normal file

@@ -0,0 +1,346 @@
package manifest
import (
"encoding/json"
"errors"
"fmt"
"slices"
"strings"
"time"
"github.com/docker/docker/api/types/versions"
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/manifest"
"go.podman.io/image/v5/internal/set"
compressiontypes "go.podman.io/image/v5/pkg/compression/types"
"go.podman.io/image/v5/types"
"go.podman.io/storage/pkg/regexp"
)
// Schema1FSLayers is an entry of the "fsLayers" array in docker/distribution schema 1.
type Schema1FSLayers struct {
BlobSum digest.Digest `json:"blobSum"`
}
// Schema1History is an entry of the "history" array in docker/distribution schema 1.
type Schema1History struct {
V1Compatibility string `json:"v1Compatibility"`
}
// Schema1 is a manifest in docker/distribution schema 1.
type Schema1 struct {
Name string `json:"name"`
Tag string `json:"tag"`
Architecture string `json:"architecture"`
FSLayers []Schema1FSLayers `json:"fsLayers"`
History []Schema1History `json:"history"` // Keep this in sync with ExtractedV1Compatibility!
ExtractedV1Compatibility []Schema1V1Compatibility `json:"-"` // Keep this in sync with History! Does not contain the full config (Schema2V1Image)
SchemaVersion int `json:"schemaVersion"`
}
type schema1V1CompatibilityContainerConfig struct {
Cmd []string
}
// Schema1V1Compatibility is a v1Compatibility in docker/distribution schema 1.
type Schema1V1Compatibility struct {
ID string `json:"id"`
Parent string `json:"parent,omitempty"`
Comment string `json:"comment,omitempty"`
Created time.Time `json:"created"`
ContainerConfig schema1V1CompatibilityContainerConfig `json:"container_config,omitempty"`
Author string `json:"author,omitempty"`
ThrowAway bool `json:"throwaway,omitempty"`
}
// Schema1FromManifest creates a Schema1 manifest instance from a manifest blob.
// (NOTE: The instance is not necessarily a literal representation of the original blob;
// layers with duplicate IDs are eliminated.)
func Schema1FromManifest(manifestBlob []byte) (*Schema1, error) {
s1 := Schema1{}
if err := json.Unmarshal(manifestBlob, &s1); err != nil {
return nil, err
}
if s1.SchemaVersion != 1 {
return nil, fmt.Errorf("unsupported schema version %d", s1.SchemaVersion)
}
if err := manifest.ValidateUnambiguousManifestFormat(manifestBlob, DockerV2Schema1SignedMediaType,
manifest.AllowedFieldFSLayers|manifest.AllowedFieldHistory); err != nil {
return nil, err
}
if err := s1.initialize(); err != nil {
return nil, err
}
if err := s1.fixManifestLayers(); err != nil {
return nil, err
}
return &s1, nil
}
// Schema1FromComponents creates a Schema1 manifest instance from the supplied data.
func Schema1FromComponents(ref reference.Named, fsLayers []Schema1FSLayers, history []Schema1History, architecture string) (*Schema1, error) {
var name, tag string
if ref != nil { // Well, what to do if it _is_ nil? Most consumers actually don't use these fields nowadays, so we might as well try not supplying them.
name = reference.Path(ref)
if tagged, ok := ref.(reference.NamedTagged); ok {
tag = tagged.Tag()
}
}
s1 := Schema1{
Name: name,
Tag: tag,
Architecture: architecture,
FSLayers: fsLayers,
History: history,
SchemaVersion: 1,
}
if err := s1.initialize(); err != nil {
return nil, err
}
return &s1, nil
}
// Schema1Clone creates a copy of the supplied Schema1 manifest.
func Schema1Clone(src *Schema1) *Schema1 {
copy := *src
return &copy
}
// initialize initializes ExtractedV1Compatibility and verifies invariants, so that the rest of this code can assume a minimally healthy manifest.
func (m *Schema1) initialize() error {
if len(m.FSLayers) != len(m.History) {
return errors.New("length of history not equal to number of layers")
}
if len(m.FSLayers) == 0 {
return errors.New("no FSLayers in manifest")
}
m.ExtractedV1Compatibility = make([]Schema1V1Compatibility, len(m.History))
for i, h := range m.History {
if err := json.Unmarshal([]byte(h.V1Compatibility), &m.ExtractedV1Compatibility[i]); err != nil {
return fmt.Errorf("parsing v2s1 history entry %d: %w", i, err)
}
}
return nil
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *Schema1) ConfigInfo() types.BlobInfo {
return types.BlobInfo{}
}
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *Schema1) LayerInfos() []LayerInfo {
layers := make([]LayerInfo, 0, len(m.FSLayers))
for i, layer := range slices.Backward(m.FSLayers) { // NOTE: This includes empty layers (where m.History.V1Compatibility->ThrowAway)
layers = append(layers, LayerInfo{
BlobInfo: types.BlobInfo{Digest: layer.BlobSum, Size: -1},
EmptyLayer: m.ExtractedV1Compatibility[i].ThrowAway,
})
}
return layers
}
const fakeSchema1MIMEType = DockerV2Schema2LayerMediaType // Used only in schema1CompressionMIMETypeSets
var schema1CompressionMIMETypeSets = []compressionMIMETypeSet{
{
mtsUncompressed: fakeSchema1MIMEType,
compressiontypes.GzipAlgorithmName: fakeSchema1MIMEType,
compressiontypes.ZstdAlgorithmName: mtsUnsupportedMIMEType,
},
}
// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
func (m *Schema1) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
// Our LayerInfos includes empty layers (where m.ExtractedV1Compatibility[].ThrowAway), so expect them to be included here as well.
if len(m.FSLayers) != len(layerInfos) {
return fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(m.FSLayers), len(layerInfos))
}
m.FSLayers = make([]Schema1FSLayers, len(layerInfos))
for i, info := range layerInfos {
// There are no MIME types in schema1, but we do a “conversion” here to reject unsupported compression algorithms,
// in a way that is consistent with the other schema implementations.
if _, err := updatedMIMEType(schema1CompressionMIMETypeSets, fakeSchema1MIMEType, info); err != nil {
return fmt.Errorf("preparing updated manifest, layer %q: %w", info.Digest, err)
}
// (docker push) sets up m.ExtractedV1Compatibility[].{Id,Parent} based on values of info.Digest,
// but (docker pull) ignores them in favor of computing DiffIDs from uncompressed data, except verifying the child->parent links and uniqueness.
// So, we don't bother recomputing the IDs in m.History.V1Compatibility.
m.FSLayers[(len(layerInfos)-1)-i].BlobSum = info.Digest
if info.CryptoOperation != types.PreserveOriginalCrypto {
return fmt.Errorf("encryption change (for layer %q) is not supported in schema1 manifests", info.Digest)
}
}
return nil
}
// Serialize returns the manifest in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
func (m *Schema1) Serialize() ([]byte, error) {
// docker/distribution requires a signature even if the incoming data uses the nominally unsigned DockerV2Schema1MediaType.
unsigned, err := json.Marshal(*m)
if err != nil {
return nil, err
}
return AddDummyV2S1Signature(unsigned)
}
// fixManifestLayers, after validating the supplied manifest
// (to use correctly-formatted IDs, and to not have non-consecutive ID collisions in m.History),
// modifies manifest to only have one entry for each layer ID in m.History (deleting the older duplicates,
// both from m.History and m.FSLayers).
// Note that even after this succeeds, m.FSLayers may contain duplicate entries
// (for Dockerfile operations which change the configuration but not the filesystem).
func (m *Schema1) fixManifestLayers() error {
// m.initialize() has verified that len(m.FSLayers) == len(m.History)
for _, compat := range m.ExtractedV1Compatibility {
if err := validateV1ID(compat.ID); err != nil {
return err
}
}
if m.ExtractedV1Compatibility[len(m.ExtractedV1Compatibility)-1].Parent != "" {
return errors.New("Invalid parent ID in the base layer of the image")
}
// check for general duplicates first, so we error out instead of deadlocking later
idmap := set.New[string]()
var lastID string
for _, img := range m.ExtractedV1Compatibility {
// skip IDs that appear after each other, we handle those later
if img.ID != lastID && idmap.Contains(img.ID) {
return fmt.Errorf("ID %+v appears multiple times in manifest", img.ID)
}
lastID = img.ID
idmap.Add(lastID)
}
// backwards loop so that we keep the remaining indexes after removing items
for i := len(m.ExtractedV1Compatibility) - 2; i >= 0; i-- {
if m.ExtractedV1Compatibility[i].ID == m.ExtractedV1Compatibility[i+1].ID { // repeated ID. remove and continue
m.FSLayers = slices.Delete(m.FSLayers, i, i+1)
m.History = slices.Delete(m.History, i, i+1)
m.ExtractedV1Compatibility = slices.Delete(m.ExtractedV1Compatibility, i, i+1)
} else if m.ExtractedV1Compatibility[i].Parent != m.ExtractedV1Compatibility[i+1].ID {
return fmt.Errorf("Invalid parent ID. Expected %v, got %q", m.ExtractedV1Compatibility[i+1].ID, m.ExtractedV1Compatibility[i].Parent)
}
}
return nil
}
var validHex = regexp.Delayed(`^([a-f0-9]{64})$`)
func validateV1ID(id string) error {
if ok := validHex.MatchString(id); !ok {
return fmt.Errorf("image ID %q is invalid", id)
}
return nil
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *Schema1) Inspect(_ func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error) {
s1 := &Schema2V1Image{}
if err := json.Unmarshal([]byte(m.History[0].V1Compatibility), s1); err != nil {
return nil, err
}
layerInfos := m.LayerInfos()
i := &types.ImageInspectInfo{
Tag: m.Tag,
Created: &s1.Created,
DockerVersion: s1.DockerVersion,
Architecture: s1.Architecture,
Variant: s1.Variant,
Os: s1.OS,
Layers: layerInfosToStrings(layerInfos),
LayersData: imgInspectLayersFromLayerInfos(layerInfos),
Author: s1.Author,
}
if s1.Config != nil {
i.Labels = s1.Config.Labels
i.Env = s1.Config.Env
}
return i, nil
}
// ToSchema2Config builds a schema2-style configuration blob using the supplied diffIDs.
func (m *Schema1) ToSchema2Config(diffIDs []digest.Digest) ([]byte, error) {
// Convert the schema 1 compat info into a schema 2 config, constructing some of the fields
// that aren't directly comparable using info from the manifest.
if len(m.History) == 0 {
return nil, errors.New("image has no layers")
}
s1 := Schema2V1Image{}
config := []byte(m.History[0].V1Compatibility)
err := json.Unmarshal(config, &s1)
if err != nil {
return nil, fmt.Errorf("decoding configuration: %w", err)
}
// Images created with versions prior to 1.8.3 require us to re-encode the encoded object,
// adding some fields that aren't "omitempty".
if s1.DockerVersion != "" && versions.LessThan(s1.DockerVersion, "1.8.3") {
config, err = json.Marshal(&s1)
if err != nil {
return nil, fmt.Errorf("re-encoding compat image config %#v: %w", s1, err)
}
}
// Build the history.
convertedHistory := []Schema2History{}
for _, compat := range slices.Backward(m.ExtractedV1Compatibility) {
hitem := Schema2History{
Created: compat.Created,
CreatedBy: strings.Join(compat.ContainerConfig.Cmd, " "),
Author: compat.Author,
Comment: compat.Comment,
EmptyLayer: compat.ThrowAway,
}
convertedHistory = append(convertedHistory, hitem)
}
// Build the rootfs information. We need the decompressed sums that we've been
// calculating to fill in the DiffIDs. It's expected (but not enforced by us)
// that the number of diffIDs corresponds to the number of non-EmptyLayer
// entries in the history.
rootFS := &Schema2RootFS{
Type: "layers",
DiffIDs: diffIDs,
}
// And now for some raw manipulation.
raw := make(map[string]*json.RawMessage)
err = json.Unmarshal(config, &raw)
if err != nil {
return nil, fmt.Errorf("re-decoding compat image config %#v: %w", s1, err)
}
// Drop some fields.
delete(raw, "id")
delete(raw, "parent")
delete(raw, "parent_id")
delete(raw, "layer_id")
delete(raw, "throwaway")
delete(raw, "Size")
// Add the history and rootfs information.
rootfs, err := json.Marshal(rootFS)
if err != nil {
return nil, fmt.Errorf("error encoding rootfs information %#v: %w", rootFS, err)
}
rawRootfs := json.RawMessage(rootfs)
raw["rootfs"] = &rawRootfs
history, err := json.Marshal(convertedHistory)
if err != nil {
return nil, fmt.Errorf("error encoding history information %#v: %w", convertedHistory, err)
}
rawHistory := json.RawMessage(history)
raw["history"] = &rawHistory
// Encode the result.
config, err = json.Marshal(raw)
if err != nil {
return nil, fmt.Errorf("error re-encoding compat image config %#v: %w", s1, err)
}
return config, nil
}
// ImageID computes an ID which can uniquely identify this image by its contents.
func (m *Schema1) ImageID(diffIDs []digest.Digest) (string, error) {
image, err := m.ToSchema2Config(diffIDs)
if err != nil {
return "", err
}
return digest.FromBytes(image).Encoded(), nil
}
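Tying the pieces together, a hedged usage sketch from outside the package (blob and diffIDs are hypothetical inputs):

package schema1demo

import (
	"fmt"

	"github.com/opencontainers/go-digest"
	"go.podman.io/image/v5/manifest"
)

// describe parses a schema1 blob, lists its layers (root first, with
// ThrowAway entries flagged as EmptyLayer), and derives the content-based
// image ID from the synthesized schema2-style config.
func describe(blob []byte, diffIDs []digest.Digest) error {
	m, err := manifest.Schema1FromManifest(blob)
	if err != nil {
		return err
	}
	for _, layer := range m.LayerInfos() {
		fmt.Printf("layer %s empty=%v\n", layer.Digest, layer.EmptyLayer)
	}
	id, err := m.ImageID(diffIDs)
	if err != nil {
		return err
	}
	fmt.Println("image ID:", id)
	return nil
}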

vendor/go.podman.io/image/v5/manifest/docker_schema2.go generated vendored Normal file

@@ -0,0 +1,307 @@
package manifest
import (
"encoding/json"
"fmt"
"time"
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/internal/manifest"
compressiontypes "go.podman.io/image/v5/pkg/compression/types"
"go.podman.io/image/v5/pkg/strslice"
"go.podman.io/image/v5/types"
)
// Schema2Descriptor is a “descriptor” in docker/distribution schema 2.
type Schema2Descriptor = manifest.Schema2Descriptor
// BlobInfoFromSchema2Descriptor returns a types.BlobInfo based on the input schema 2 descriptor.
func BlobInfoFromSchema2Descriptor(desc Schema2Descriptor) types.BlobInfo {
return types.BlobInfo{
Digest: desc.Digest,
Size: desc.Size,
URLs: desc.URLs,
MediaType: desc.MediaType,
}
}
// Schema2 is a manifest in docker/distribution schema 2.
type Schema2 struct {
SchemaVersion int `json:"schemaVersion"`
MediaType string `json:"mediaType"`
ConfigDescriptor Schema2Descriptor `json:"config"`
LayersDescriptors []Schema2Descriptor `json:"layers"`
}
// Schema2Port is a Port, a string containing port number and protocol in the
// format "80/tcp", from docker/go-connections/nat.
type Schema2Port string
// Schema2PortSet is a PortSet, a collection of structs indexed by Port, from
// docker/go-connections/nat.
type Schema2PortSet map[Schema2Port]struct{}
// Schema2HealthConfig is a HealthConfig, which holds configuration settings
// for the HEALTHCHECK feature, from docker/docker/api/types/container.
type Schema2HealthConfig struct {
// Test is the test to perform to check that the container is healthy.
// An empty slice means to inherit the default.
// The options are:
// {} : inherit healthcheck
// {"NONE"} : disable healthcheck
// {"CMD", args...} : exec arguments directly
// {"CMD-SHELL", command} : run command with system's default shell
Test []string `json:",omitempty"`
// Zero means to inherit. Durations are expressed as integer nanoseconds.
StartPeriod time.Duration `json:",omitempty"` // StartPeriod is the time to wait after starting before running the first check.
StartInterval time.Duration `json:",omitempty"` // StartInterval is the time to wait between checks during the start period.
Interval time.Duration `json:",omitempty"` // Interval is the time to wait between checks.
Timeout time.Duration `json:",omitempty"` // Timeout is the time to wait before considering the check to have hung.
// Retries is the number of consecutive failures needed to consider a container as unhealthy.
// Zero means inherit.
Retries int `json:",omitempty"`
}
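// For illustration, a hedged sketch of a value equivalent to
// HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -f http://localhost/ || exit 1
// (the command and timings are hypothetical; since the durations are integer
// nanoseconds, time.Second multiples are the natural way to write them):
//
//	hc := Schema2HealthConfig{
//		Test:     []string{"CMD-SHELL", "curl -f http://localhost/ || exit 1"},
//		Interval: 30 * time.Second,
//		Timeout:  5 * time.Second,
//		Retries:  3,
//	}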
// Schema2Config is a Config in docker/docker/api/types/container.
type Schema2Config struct {
Hostname string // Hostname
Domainname string // Domainname
User string // User that will run the command(s) inside the container; also supports user:group
AttachStdin bool // Attach the standard input, makes possible user interaction
AttachStdout bool // Attach the standard output
AttachStderr bool // Attach the standard error
ExposedPorts Schema2PortSet `json:",omitempty"` // List of exposed ports
Tty bool // Attach standard streams to a tty, including stdin if it is not closed.
OpenStdin bool // Open stdin
StdinOnce bool // If true, close stdin after the first attached client disconnects.
Env []string // List of environment variable to set in the container
Cmd strslice.StrSlice // Command to run when starting the container
Healthcheck *Schema2HealthConfig `json:",omitempty"` // Healthcheck describes how to check the container is healthy
ArgsEscaped bool `json:",omitempty"` // True if command is already escaped (Windows specific)
Image string // Name of the image as it was passed by the operator (e.g. could be symbolic)
Volumes map[string]struct{} // List of volumes (mounts) used for the container
WorkingDir string // Current directory (PWD) in which the command will be launched
Entrypoint strslice.StrSlice // Entrypoint to run when starting the container
NetworkDisabled bool `json:",omitempty"` // Is network disabled
MacAddress string `json:",omitempty"` // Mac Address of the container
OnBuild []string // ONBUILD metadata that were defined on the image Dockerfile
Labels map[string]string // List of labels set to this container
StopSignal string `json:",omitempty"` // Signal to stop a container
StopTimeout *int `json:",omitempty"` // Timeout (in seconds) to stop a container
Shell strslice.StrSlice `json:",omitempty"` // Shell for shell-form of RUN, CMD, ENTRYPOINT
}
// Schema2V1Image is a V1Image in docker/docker/image.
type Schema2V1Image struct {
// ID is a unique 64 character identifier of the image
ID string `json:"id,omitempty"`
// Parent is the ID of the parent image
Parent string `json:"parent,omitempty"`
// Comment is the commit message that was set when committing the image
Comment string `json:"comment,omitempty"`
// Created is the timestamp at which the image was created
Created time.Time `json:"created"`
// Container is the id of the container used to commit
Container string `json:"container,omitempty"`
// ContainerConfig is the configuration of the container that is committed into the image
ContainerConfig Schema2Config `json:"container_config,omitempty"`
// DockerVersion specifies the version of Docker that was used to build the image
DockerVersion string `json:"docker_version,omitempty"`
// Author is the name of the author that was specified when committing the image
Author string `json:"author,omitempty"`
// Config is the configuration of the container received from the client
Config *Schema2Config `json:"config,omitempty"`
// Architecture is the hardware that the image is built and runs on
Architecture string `json:"architecture,omitempty"`
// Variant is a variant of the CPU that the image is built and runs on
Variant string `json:"variant,omitempty"`
// OS is the operating system used to build and run the image
OS string `json:"os,omitempty"`
// Size is the total size of the image including all layers it is composed of
Size int64 `json:",omitempty"`
}
// Schema2RootFS is a description of how to build up an image's root filesystem, from docker/docker/image.
type Schema2RootFS struct {
Type string `json:"type"`
DiffIDs []digest.Digest `json:"diff_ids,omitempty"`
}
// Schema2History stores build commands that were used to create an image, from docker/docker/image.
type Schema2History struct {
// Created is the timestamp at which the image was created
Created time.Time `json:"created"`
// Author is the name of the author that was specified when committing the image
Author string `json:"author,omitempty"`
// CreatedBy keeps the Dockerfile command used while building the image
CreatedBy string `json:"created_by,omitempty"`
// Comment is the commit message that was set when committing the image
Comment string `json:"comment,omitempty"`
// EmptyLayer is set to true if this history item did not generate a
// layer. Otherwise, the history item is associated with the next
// layer in the RootFS section.
EmptyLayer bool `json:"empty_layer,omitempty"`
}
// Schema2Image is an Image in docker/docker/image.
type Schema2Image struct {
Schema2V1Image
Parent digest.Digest `json:"parent,omitempty"`
RootFS *Schema2RootFS `json:"rootfs,omitempty"`
History []Schema2History `json:"history,omitempty"`
OSVersion string `json:"os.version,omitempty"`
OSFeatures []string `json:"os.features,omitempty"`
}
// Schema2FromManifest creates a Schema2 manifest instance from a manifest blob.
func Schema2FromManifest(manifestBlob []byte) (*Schema2, error) {
s2 := Schema2{}
if err := json.Unmarshal(manifestBlob, &s2); err != nil {
return nil, err
}
if err := manifest.ValidateUnambiguousManifestFormat(manifestBlob, DockerV2Schema2MediaType,
manifest.AllowedFieldConfig|manifest.AllowedFieldLayers); err != nil {
return nil, err
}
// Check manifest's and layers' media types.
if err := SupportedSchema2MediaType(s2.MediaType); err != nil {
return nil, err
}
for _, layer := range s2.LayersDescriptors {
if err := SupportedSchema2MediaType(layer.MediaType); err != nil {
return nil, err
}
}
return &s2, nil
}
// Schema2FromComponents creates a Schema2 manifest instance from the supplied data.
func Schema2FromComponents(config Schema2Descriptor, layers []Schema2Descriptor) *Schema2 {
return &Schema2{
SchemaVersion: 2,
MediaType: DockerV2Schema2MediaType,
ConfigDescriptor: config,
LayersDescriptors: layers,
}
}
// Schema2Clone creates a copy of the supplied Schema2 manifest.
func Schema2Clone(src *Schema2) *Schema2 {
copy := *src
return &copy
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *Schema2) ConfigInfo() types.BlobInfo {
return BlobInfoFromSchema2Descriptor(m.ConfigDescriptor)
}
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *Schema2) LayerInfos() []LayerInfo {
blobs := make([]LayerInfo, 0, len(m.LayersDescriptors))
for _, layer := range m.LayersDescriptors {
blobs = append(blobs, LayerInfo{
BlobInfo: BlobInfoFromSchema2Descriptor(layer),
EmptyLayer: false,
})
}
return blobs
}
var schema2CompressionMIMETypeSets = []compressionMIMETypeSet{
{
mtsUncompressed: DockerV2Schema2ForeignLayerMediaType,
compressiontypes.GzipAlgorithmName: DockerV2Schema2ForeignLayerMediaTypeGzip,
compressiontypes.ZstdAlgorithmName: mtsUnsupportedMIMEType,
},
{
mtsUncompressed: DockerV2SchemaLayerMediaTypeUncompressed,
compressiontypes.GzipAlgorithmName: DockerV2Schema2LayerMediaType,
compressiontypes.ZstdAlgorithmName: mtsUnsupportedMIMEType,
},
}
// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
// The returned error will be a manifest.ManifestLayerCompressionIncompatibilityError if any of the layerInfos includes a combination of CompressionOperation and
// CompressionAlgorithm that would result in anything other than gzip compression.
func (m *Schema2) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
if len(m.LayersDescriptors) != len(layerInfos) {
return fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(m.LayersDescriptors), len(layerInfos))
}
original := m.LayersDescriptors
m.LayersDescriptors = make([]Schema2Descriptor, len(layerInfos))
for i, info := range layerInfos {
mimeType := original[i].MediaType
// First make sure we support the media type of the original layer.
if err := SupportedSchema2MediaType(mimeType); err != nil {
return fmt.Errorf("Error preparing updated manifest: unknown media type of original layer %q: %q", info.Digest, mimeType)
}
mimeType, err := updatedMIMEType(schema2CompressionMIMETypeSets, mimeType, info)
if err != nil {
return fmt.Errorf("preparing updated manifest, layer %q: %w", info.Digest, err)
}
m.LayersDescriptors[i].MediaType = mimeType
m.LayersDescriptors[i].Digest = info.Digest
m.LayersDescriptors[i].Size = info.Size
m.LayersDescriptors[i].URLs = info.URLs
if info.CryptoOperation != types.PreserveOriginalCrypto {
return fmt.Errorf("encryption change (for layer %q) is not supported in schema2 manifests", info.Digest)
}
}
return nil
}
// Serialize returns the manifest in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
func (m *Schema2) Serialize() ([]byte, error) {
return json.Marshal(*m)
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *Schema2) Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error) {
config, err := configGetter(m.ConfigInfo())
if err != nil {
return nil, err
}
s2 := &Schema2Image{}
if err := json.Unmarshal(config, s2); err != nil {
return nil, err
}
layerInfos := m.LayerInfos()
i := &types.ImageInspectInfo{
Tag: "",
Created: &s2.Created,
DockerVersion: s2.DockerVersion,
Architecture: s2.Architecture,
Variant: s2.Variant,
Os: s2.OS,
Layers: layerInfosToStrings(layerInfos),
LayersData: imgInspectLayersFromLayerInfos(layerInfos),
Author: s2.Author,
}
if s2.Config != nil {
i.Labels = s2.Config.Labels
i.Env = s2.Config.Env
}
return i, nil
}
// ImageID computes an ID which can uniquely identify this image by its contents.
func (m *Schema2) ImageID([]digest.Digest) (string, error) {
if err := m.ConfigDescriptor.Digest.Validate(); err != nil {
return "", err
}
return m.ConfigDescriptor.Digest.Encoded(), nil
}
// CanChangeLayerCompression returns true if we can compress/decompress layers with mimeType in the current image
// (and the code can handle that).
// NOTE: Even if this returns true, the relevant format might not accept all compression algorithms; the set of accepted
// algorithms depends not on the current format, but possibly on the target of a conversion.
func (m *Schema2) CanChangeLayerCompression(mimeType string) bool {
return compressionVariantsRecognizeMIMEType(schema2CompressionMIMETypeSets, mimeType)
}
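How UpdateLayerInfos composes with the compression tables above, as a hedged sketch (a real caller would also fill in the post-decompression digests and sizes after actually recompressing the blobs; here only the requested operation changes):

package schema2demo

import (
	"go.podman.io/image/v5/manifest"
	"go.podman.io/image/v5/types"
)

// markDecompressed asks the manifest to switch every layer to its
// uncompressed media-type variant; it fails with a
// ManifestLayerCompressionIncompatibilityError if a layer has no such variant.
func markDecompressed(m *manifest.Schema2) error {
	infos := m.LayerInfos()
	updated := make([]types.BlobInfo, 0, len(infos))
	for _, info := range infos {
		bi := info.BlobInfo // CryptoOperation stays PreserveOriginalCrypto
		bi.CompressionOperation = types.Decompress
		updated = append(updated, bi)
	}
	return m.UpdateLayerInfos(updated)
}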


@@ -0,0 +1,32 @@
package manifest
import (
"go.podman.io/image/v5/internal/manifest"
)
// Schema2PlatformSpec describes the platform which a particular manifest is
// specialized for.
type Schema2PlatformSpec = manifest.Schema2PlatformSpec
// Schema2ManifestDescriptor references a platform-specific manifest.
type Schema2ManifestDescriptor = manifest.Schema2ManifestDescriptor
// Schema2List is a list of platform-specific manifests.
type Schema2List = manifest.Schema2ListPublic
// Schema2ListFromComponents creates a Schema2 manifest list instance from the
// supplied data.
func Schema2ListFromComponents(components []Schema2ManifestDescriptor) *Schema2List {
return manifest.Schema2ListPublicFromComponents(components)
}
// Schema2ListClone creates a deep copy of the passed-in list.
func Schema2ListClone(list *Schema2List) *Schema2List {
return manifest.Schema2ListPublicClone(list)
}
// Schema2ListFromManifest creates a Schema2 manifest list instance from marshalled
// JSON, presumably generated by encoding a Schema2 manifest list.
func Schema2ListFromManifest(manifestBlob []byte) (*Schema2List, error) {
return manifest.Schema2ListPublicFromManifest(manifestBlob)
}

vendor/go.podman.io/image/v5/manifest/list.go generated vendored Normal file

@@ -0,0 +1,35 @@
package manifest
import (
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"go.podman.io/image/v5/internal/manifest"
)
var (
// SupportedListMIMETypes is a list of the manifest list types that we know how to
// read/manipulate/write.
SupportedListMIMETypes = []string{
DockerV2ListMediaType,
imgspecv1.MediaTypeImageIndex,
}
)
// List is an interface for parsing, modifying lists of image manifests.
// Callers can either use this abstract interface without understanding the details of the formats,
// or instantiate a specific implementation (e.g. manifest.OCI1Index) and access the public members
// directly.
type List = manifest.ListPublic
// ListUpdate includes the fields which a List's UpdateInstances() method will modify.
type ListUpdate = manifest.ListUpdate
// ListFromBlob parses a list of manifests.
func ListFromBlob(manifestBlob []byte, manifestMIMEType string) (List, error) {
return manifest.ListPublicFromBlob(manifestBlob, manifestMIMEType)
}
// ConvertListToMIMEType converts the passed-in manifest list to a manifest
// list of the specified type.
func ConvertListToMIMEType(list List, manifestMIMEType string) (List, error) {
return list.ConvertToMIMEType(manifestMIMEType)
}
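A hedged round-trip sketch (blob is a hypothetical input; GuessMIMEType is defined in manifest.go below):

package listdemo

import (
	imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
	"go.podman.io/image/v5/manifest"
)

// toOCIIndex parses a manifest list of any supported type and converts it
// to an OCI image index.
func toOCIIndex(blob []byte) (manifest.List, error) {
	list, err := manifest.ListFromBlob(blob, manifest.GuessMIMEType(blob))
	if err != nil {
		return nil, err
	}
	return manifest.ConvertListToMIMEType(list, imgspecv1.MediaTypeImageIndex)
}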

vendor/go.podman.io/image/v5/manifest/manifest.go generated vendored Normal file

@@ -0,0 +1,170 @@
package manifest
import (
"fmt"
"github.com/containers/libtrust"
digest "github.com/opencontainers/go-digest"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"go.podman.io/image/v5/internal/manifest"
"go.podman.io/image/v5/types"
)
// FIXME: Should we just use docker/distribution and docker/docker implementations directly?
// FIXME(runcom, mitr): should we have a mediatype pkg??
const (
// DockerV2Schema1MediaType MIME type represents Docker manifest schema 1
DockerV2Schema1MediaType = manifest.DockerV2Schema1MediaType
// DockerV2Schema1SignedMediaType MIME type represents Docker manifest schema 1 with a JWS signature
DockerV2Schema1SignedMediaType = manifest.DockerV2Schema1SignedMediaType
// DockerV2Schema2MediaType MIME type represents Docker manifest schema 2
DockerV2Schema2MediaType = manifest.DockerV2Schema2MediaType
// DockerV2Schema2ConfigMediaType is the MIME type used for schema 2 config blobs.
DockerV2Schema2ConfigMediaType = manifest.DockerV2Schema2ConfigMediaType
// DockerV2Schema2LayerMediaType is the MIME type used for schema 2 layers.
DockerV2Schema2LayerMediaType = manifest.DockerV2Schema2LayerMediaType
// DockerV2SchemaLayerMediaTypeUncompressed is the mediaType used for uncompressed layers.
DockerV2SchemaLayerMediaTypeUncompressed = manifest.DockerV2SchemaLayerMediaTypeUncompressed
// DockerV2ListMediaType MIME type represents Docker manifest schema 2 list
DockerV2ListMediaType = manifest.DockerV2ListMediaType
// DockerV2Schema2ForeignLayerMediaType is the MIME type used for schema 2 foreign layers.
DockerV2Schema2ForeignLayerMediaType = manifest.DockerV2Schema2ForeignLayerMediaType
// DockerV2Schema2ForeignLayerMediaTypeGzip is the MIME type used for gzipped schema 2 foreign layers.
DockerV2Schema2ForeignLayerMediaTypeGzip = manifest.DockerV2Schema2ForeignLayerMediaTypeGzip
)
// NonImageArtifactError (detected via errors.As) is used when asking for an image-specific operation
// on an object which is not a “container image” in the standard sense (e.g. an OCI artifact)
type NonImageArtifactError = manifest.NonImageArtifactError
// SupportedSchema2MediaType checks if the specified string is a supported Docker v2s2 media type.
func SupportedSchema2MediaType(m string) error {
switch m {
case DockerV2ListMediaType, DockerV2Schema1MediaType, DockerV2Schema1SignedMediaType, DockerV2Schema2ConfigMediaType, DockerV2Schema2ForeignLayerMediaType, DockerV2Schema2ForeignLayerMediaTypeGzip, DockerV2Schema2LayerMediaType, DockerV2Schema2MediaType, DockerV2SchemaLayerMediaTypeUncompressed:
return nil
default:
return fmt.Errorf("unsupported docker v2s2 media type: %q", m)
}
}
// DefaultRequestedManifestMIMETypes is a list of MIME types a types.ImageSource
// should request from the backend unless directed otherwise.
var DefaultRequestedManifestMIMETypes = []string{
imgspecv1.MediaTypeImageManifest,
DockerV2Schema2MediaType,
DockerV2Schema1SignedMediaType,
DockerV2Schema1MediaType,
DockerV2ListMediaType,
imgspecv1.MediaTypeImageIndex,
}
// Manifest is an interface for parsing, modifying image manifests in isolation.
// Callers can either use this abstract interface without understanding the details of the formats,
// or instantiate a specific implementation (e.g. manifest.OCI1) and access the public members
// directly.
//
// See types.Image for functionality not limited to manifests, including format conversions and config parsing.
// This interface is similar to, but not strictly equivalent to, the equivalent methods in types.Image.
type Manifest interface {
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
ConfigInfo() types.BlobInfo
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []LayerInfo
// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls), in order (the root layer first, and then successive layered layers)
UpdateLayerInfos(layerInfos []types.BlobInfo) error
// ImageID computes an ID which can uniquely identify this image by its contents, irrespective
// of which (of possibly more than one simultaneously valid) reference was used to locate the
// image, and unchanged by whether or how the layers are compressed. The result takes the form
// of the hexadecimal portion of a digest.Digest.
ImageID(diffIDs []digest.Digest) (string, error)
// Inspect returns various information for (skopeo inspect) parsed from the manifest,
// incorporating information from a configuration blob returned by configGetter, if
// the underlying image format is expected to include a configuration blob.
Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error)
// Serialize returns the manifest in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
Serialize() ([]byte, error)
}
// LayerInfo is an extended version of types.BlobInfo for low-level users of Manifest.LayerInfos.
type LayerInfo struct {
types.BlobInfo
EmptyLayer bool // The layer is an “empty”/“throwaway” one, and may or may not be physically represented in various transport / storage systems. false if the manifest type does not have the concept.
}
// GuessMIMEType guesses MIME type of a manifest and returns it _if it is recognized_, or "" if unknown or unrecognized.
// FIXME? We should, in general, prefer out-of-band MIME type instead of blindly parsing the manifest,
// but we may not have such metadata available (e.g. when the manifest is a local file).
func GuessMIMEType(manifestBlob []byte) string {
return manifest.GuessMIMEType(manifestBlob)
}
// Digest returns a digest of a docker manifest, with any necessary implied transformations like stripping v1s1 signatures.
func Digest(manifestBlob []byte) (digest.Digest, error) {
return manifest.Digest(manifestBlob)
}
// MatchesDigest returns true iff the manifest matches expectedDigest.
// Error may be set if this returns false.
// Note that this is not doing ConstantTimeCompare; by the time we get here, the cryptographic signature must already have been verified,
// or we are not using a cryptographic channel and the attacker can modify the digest along with the manifest blob.
func MatchesDigest(manifestBlob []byte, expectedDigest digest.Digest) (bool, error) {
return manifest.MatchesDigest(manifestBlob, expectedDigest)
}
// AddDummyV2S1Signature adds a JWS signature with a temporary key (i.e. useless) to a v2s1 manifest.
// This is useful to make the manifest acceptable to a docker/distribution registry (even though nothing needs or wants the JWS signature).
func AddDummyV2S1Signature(manifest []byte) ([]byte, error) {
key, err := libtrust.GenerateECP256PrivateKey()
if err != nil {
return nil, err // Coverage: This can fail only if rand.Reader fails.
}
js, err := libtrust.NewJSONSignature(manifest)
if err != nil {
return nil, err
}
if err := js.Sign(key); err != nil { // Coverage: This can fail basically only if rand.Reader fails.
return nil, err
}
return js.PrettySignature("signatures")
}
// MIMETypeIsMultiImage returns true if mimeType is a list of images
func MIMETypeIsMultiImage(mimeType string) bool {
return mimeType == DockerV2ListMediaType || mimeType == imgspecv1.MediaTypeImageIndex
}
// MIMETypeSupportsEncryption returns true if the mimeType supports encryption
func MIMETypeSupportsEncryption(mimeType string) bool {
return mimeType == imgspecv1.MediaTypeImageManifest
}
// NormalizedMIMEType returns the effective MIME type of a manifest MIME type returned by a server,
// centralizing various workarounds.
func NormalizedMIMEType(input string) string {
return manifest.NormalizedMIMEType(input)
}
// FromBlob returns a Manifest instance for the specified manifest blob and the corresponding MIME type
func FromBlob(manblob []byte, mt string) (Manifest, error) {
nmt := NormalizedMIMEType(mt)
switch nmt {
case DockerV2Schema1MediaType, DockerV2Schema1SignedMediaType:
return Schema1FromManifest(manblob)
case imgspecv1.MediaTypeImageManifest:
return OCI1FromManifest(manblob)
case DockerV2Schema2MediaType:
return Schema2FromManifest(manblob)
case DockerV2ListMediaType, imgspecv1.MediaTypeImageIndex:
return nil, fmt.Errorf("Treating manifest lists as individual manifests is not implemented")
}
// Note that this may not be reachable, NormalizedMIMEType has a default for unknown values.
return nil, fmt.Errorf("Unimplemented manifest MIME type %q (normalized as %q)", mt, nmt)
}
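Putting these helpers together, a hedged dispatch sketch (blob is a hypothetical input):

package manifestdemo

import (
	"fmt"

	"go.podman.io/image/v5/manifest"
)

// parseSingle guesses the MIME type of blob and parses it as a single-image
// manifest, directing manifest lists to ListFromBlob instead.
func parseSingle(blob []byte) (manifest.Manifest, error) {
	mt := manifest.GuessMIMEType(blob) // "" if unrecognized
	if manifest.MIMETypeIsMultiImage(mt) {
		return nil, fmt.Errorf("%q is a manifest list; use ListFromBlob", mt)
	}
	return manifest.FromBlob(blob, mt)
}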

vendor/go.podman.io/image/v5/manifest/oci.go generated vendored Normal file

@@ -0,0 +1,276 @@
package manifest
import (
"encoding/json"
"fmt"
"slices"
"strings"
ociencspec "github.com/containers/ocicrypt/spec"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"go.podman.io/image/v5/internal/manifest"
compressiontypes "go.podman.io/image/v5/pkg/compression/types"
"go.podman.io/image/v5/types"
)
// BlobInfoFromOCI1Descriptor returns a types.BlobInfo based on the input OCI1 descriptor.
func BlobInfoFromOCI1Descriptor(desc imgspecv1.Descriptor) types.BlobInfo {
return types.BlobInfo{
Digest: desc.Digest,
Size: desc.Size,
URLs: desc.URLs,
Annotations: desc.Annotations,
MediaType: desc.MediaType,
}
}
// OCI1 is a manifest.Manifest implementation for OCI images.
// The underlying data from imgspecv1.Manifest is also available.
type OCI1 struct {
imgspecv1.Manifest
}
// SupportedOCI1MediaType checks if the specified string is a supported OCI1
// media type.
//
// Deprecated: blindly rejecting unknown MIME types when the consumer does not
// need to process the input just reduces interoperability (and violates the
// standard) with no benefit, and this function does not check that the
// media type is appropriate for any specific purpose, so it's not all that
// useful for validation anyway.
func SupportedOCI1MediaType(m string) error {
switch m {
case imgspecv1.MediaTypeDescriptor, imgspecv1.MediaTypeImageConfig,
imgspecv1.MediaTypeImageLayer, imgspecv1.MediaTypeImageLayerGzip, imgspecv1.MediaTypeImageLayerZstd,
imgspecv1.MediaTypeImageLayerNonDistributable, imgspecv1.MediaTypeImageLayerNonDistributableGzip, imgspecv1.MediaTypeImageLayerNonDistributableZstd, //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
imgspecv1.MediaTypeImageManifest,
imgspecv1.MediaTypeLayoutHeader,
ociencspec.MediaTypeLayerEnc, ociencspec.MediaTypeLayerGzipEnc:
return nil
default:
return fmt.Errorf("unsupported OCIv1 media type: %q", m)
}
}
// OCI1FromManifest creates an OCI1 manifest instance from a manifest blob.
func OCI1FromManifest(manifestBlob []byte) (*OCI1, error) {
oci1 := OCI1{}
if err := json.Unmarshal(manifestBlob, &oci1); err != nil {
return nil, err
}
if err := manifest.ValidateUnambiguousManifestFormat(manifestBlob, imgspecv1.MediaTypeImageManifest,
manifest.AllowedFieldConfig|manifest.AllowedFieldLayers); err != nil {
return nil, err
}
return &oci1, nil
}
// OCI1FromComponents creates an OCI1 manifest instance from the supplied data.
func OCI1FromComponents(config imgspecv1.Descriptor, layers []imgspecv1.Descriptor) *OCI1 {
return &OCI1{
imgspecv1.Manifest{
Versioned: specs.Versioned{SchemaVersion: 2},
MediaType: imgspecv1.MediaTypeImageManifest,
Config: config,
Layers: layers,
},
}
}
// OCI1Clone creates a copy of the supplied OCI1 manifest.
func OCI1Clone(src *OCI1) *OCI1 {
return &OCI1{
Manifest: src.Manifest,
}
}
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
func (m *OCI1) ConfigInfo() types.BlobInfo {
return BlobInfoFromOCI1Descriptor(m.Config)
}
// LayerInfos returns a list of LayerInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (m *OCI1) LayerInfos() []LayerInfo {
blobs := make([]LayerInfo, 0, len(m.Layers))
for _, layer := range m.Layers {
blobs = append(blobs, LayerInfo{
BlobInfo: BlobInfoFromOCI1Descriptor(layer),
EmptyLayer: false,
})
}
return blobs
}
var oci1CompressionMIMETypeSets = []compressionMIMETypeSet{
{
mtsUncompressed: imgspecv1.MediaTypeImageLayerNonDistributable, //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
compressiontypes.GzipAlgorithmName: imgspecv1.MediaTypeImageLayerNonDistributableGzip, //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
compressiontypes.ZstdAlgorithmName: imgspecv1.MediaTypeImageLayerNonDistributableZstd, //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
},
{
mtsUncompressed: imgspecv1.MediaTypeImageLayer,
compressiontypes.GzipAlgorithmName: imgspecv1.MediaTypeImageLayerGzip,
compressiontypes.ZstdAlgorithmName: imgspecv1.MediaTypeImageLayerZstd,
},
}
// UpdateLayerInfos replaces the original layers with the specified BlobInfos (size+digest+urls+mediatype), in order (the root layer first, and then successive layered layers)
// The returned error will be a manifest.ManifestLayerCompressionIncompatibilityError if any of the layerInfos includes a combination of CompressionOperation and
// CompressionAlgorithm that isn't supported by OCI.
//
// It's generally the caller's responsibility to determine whether a particular edit is acceptable, rather than relying on
// failures of this function, because the layer is typically created _before_ UpdateLayerInfos is called (because UpdateLayerInfos needs
// to know the final digest). See OCI1.CanChangeLayerCompression for some help in determining this; other aspects like compression
// algorithms that might not be supported by a format, or the limited set of MIME types accepted for encryption, are not currently
// handled — that logic should eventually also be provided as OCI1 methods, not hard-coded in callers.
func (m *OCI1) UpdateLayerInfos(layerInfos []types.BlobInfo) error {
if len(m.Layers) != len(layerInfos) {
return fmt.Errorf("Error preparing updated manifest: layer count changed from %d to %d", len(m.Layers), len(layerInfos))
}
original := m.Layers
m.Layers = make([]imgspecv1.Descriptor, len(layerInfos))
for i, info := range layerInfos {
mimeType := original[i].MediaType
if info.CryptoOperation == types.Decrypt {
decMimeType, err := getDecryptedMediaType(mimeType)
if err != nil {
return fmt.Errorf("error preparing updated manifest: decryption specified but original mediatype is not encrypted: %q", mimeType)
}
mimeType = decMimeType
}
mimeType, err := updatedMIMEType(oci1CompressionMIMETypeSets, mimeType, info)
if err != nil {
return fmt.Errorf("preparing updated manifest, layer %q: %w", info.Digest, err)
}
if info.CryptoOperation == types.Encrypt {
encMediaType, err := getEncryptedMediaType(mimeType)
if err != nil {
return fmt.Errorf("error preparing updated manifest: encryption specified but no counterpart for mediatype: %q", mimeType)
}
mimeType = encMediaType
}
m.Layers[i].MediaType = mimeType
m.Layers[i].Digest = info.Digest
m.Layers[i].Size = info.Size
m.Layers[i].Annotations = info.Annotations
m.Layers[i].URLs = info.URLs
}
return nil
}
// getEncryptedMediaType returns the encrypted counterpart of mediatype, or
// an error if the mediatype does not support encryption
func getEncryptedMediaType(mediatype string) (string, error) {
parts := strings.Split(mediatype, "+")
if slices.Contains(parts[1:], "encrypted") {
return "", fmt.Errorf("unsupported mediaType: %q already encrypted", mediatype)
}
unsuffixedMediatype := parts[0]
switch unsuffixedMediatype {
case DockerV2Schema2LayerMediaType, imgspecv1.MediaTypeImageLayer,
imgspecv1.MediaTypeImageLayerNonDistributable: //nolint:staticcheck // NonDistributable layers are deprecated, but we want to continue to support manipulating pre-existing images.
return mediatype + "+encrypted", nil
}
return "", fmt.Errorf("unsupported mediaType to encrypt: %q", mediatype)
}
// getDecryptedMediaType returns the decrypted counterpart of mediatype, or
// an error if the mediatype does not support decryption
func getDecryptedMediaType(mediatype string) (string, error) {
res, ok := strings.CutSuffix(mediatype, "+encrypted")
if !ok {
return "", fmt.Errorf("unsupported mediaType to decrypt: %q", mediatype)
}
return res, nil
}
// Serialize returns the manifest in a blob format.
// NOTE: Serialize() does not in general reproduce the original blob if this object was loaded from one, even if no modifications were made!
func (m *OCI1) Serialize() ([]byte, error) {
return json.Marshal(*m)
}
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
func (m *OCI1) Inspect(configGetter func(types.BlobInfo) ([]byte, error)) (*types.ImageInspectInfo, error) {
if m.Config.MediaType != imgspecv1.MediaTypeImageConfig {
// We could return at least the layers, but that's already available in a better format via types.Image.LayerInfos.
// Most software calling this without human intervention is going to expect the values to be realistic and relevant,
// and is probably better served by failing; we can always re-visit that later if we fail now, but
// if we started returning some data for OCI artifacts now, we couldn't start failing in this function later.
return nil, manifest.NewNonImageArtifactError(&m.Manifest)
}
config, err := configGetter(m.ConfigInfo())
if err != nil {
return nil, err
}
v1 := &imgspecv1.Image{}
if err := json.Unmarshal(config, v1); err != nil {
return nil, err
}
d1 := &Schema2V1Image{}
if err := json.Unmarshal(config, d1); err != nil {
return nil, err
}
layerInfos := m.LayerInfos()
i := &types.ImageInspectInfo{
Tag: "",
Created: v1.Created,
DockerVersion: d1.DockerVersion,
Labels: v1.Config.Labels,
Architecture: v1.Architecture,
Variant: v1.Variant,
Os: v1.OS,
Layers: layerInfosToStrings(layerInfos),
LayersData: imgInspectLayersFromLayerInfos(layerInfos),
Env: v1.Config.Env,
Author: v1.Author,
}
return i, nil
}
// ImageID computes an ID which can uniquely identify this image by its contents.
func (m *OCI1) ImageID(diffIDs []digest.Digest) (string, error) {
// The way m.Config.Digest “uniquely identifies” an image is
// by containing RootFS.DiffIDs, which identify the layers of the image.
// For non-image artifacts, we can't expect the config to change
// any time the other layers (semantically) change, so this approach of
// distinguishing objects only by m.Config.Digest doesn't work in general.
//
// Any caller of this method presumably wants to disambiguate the same
// images with a different representation, but doesn't want to disambiguate
// representations (by using a manifest digest). So, submitting a non-image
// artifact to such a caller indicates an expectation mismatch.
// So, we just fail here instead of inventing some other ID value (e.g.
// by combining the config and blob layer digests). That still
// gives us the option to not fail, and return some value, in the future,
// without committing to that approach now.
// (The only known caller of ImageID is storage/storageImageDestination.computeID,
// which can't work with non-image artifacts.)
if m.Config.MediaType != imgspecv1.MediaTypeImageConfig {
return "", manifest.NewNonImageArtifactError(&m.Manifest)
}
if err := m.Config.Digest.Validate(); err != nil {
return "", err
}
return m.Config.Digest.Encoded(), nil
}
// CanChangeLayerCompression returns true if we can compress/decompress layers with mimeType in the current image
// (and the code can handle that).
// NOTE: Even if this returns true, the relevant format might not accept all compression algorithms; the set of accepted
// algorithms depends not on the current format, but possibly on the target of a conversion.
func (m *OCI1) CanChangeLayerCompression(mimeType string) bool {
if m.Config.MediaType != imgspecv1.MediaTypeImageConfig {
return false
}
return compressionVariantsRecognizeMIMEType(oci1CompressionMIMETypeSets, mimeType)
}
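How the encryption suffix rules compose with UpdateLayerInfos, as a hedged sketch (a real caller encrypts the blobs first and supplies the new digests, sizes, and ocicrypt annotations; here only the media types change):

package ocidemo

import (
	"go.podman.io/image/v5/manifest"
	"go.podman.io/image/v5/types"
)

// markEncrypted rewrites every layer media type to its "+encrypted"
// counterpart, e.g. application/vnd.oci.image.layer.v1.tar+gzip becomes
// application/vnd.oci.image.layer.v1.tar+gzip+encrypted.
func markEncrypted(m *manifest.OCI1) error {
	infos := m.LayerInfos()
	updated := make([]types.BlobInfo, 0, len(infos))
	for _, info := range infos {
		bi := info.BlobInfo
		bi.CryptoOperation = types.Encrypt // compression is preserved by default
		updated = append(updated, bi)
	}
	return m.UpdateLayerInfos(updated)
}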

vendor/go.podman.io/image/v5/manifest/oci_index.go generated vendored Normal file

@@ -0,0 +1,27 @@
package manifest
import (
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"go.podman.io/image/v5/internal/manifest"
)
// OCI1Index is just an alias for the OCI index type, but one which we can
// provide methods for.
type OCI1Index = manifest.OCI1IndexPublic
// OCI1IndexFromComponents creates an OCI1 image index instance from the
// supplied data.
func OCI1IndexFromComponents(components []imgspecv1.Descriptor, annotations map[string]string) *OCI1Index {
return manifest.OCI1IndexPublicFromComponents(components, annotations)
}
// OCI1IndexClone creates a deep copy of the passed-in index.
func OCI1IndexClone(index *OCI1Index) *OCI1Index {
return manifest.OCI1IndexPublicClone(index)
}
// OCI1IndexFromManifest creates an OCI1 manifest index instance from marshalled
// JSON, presumably generated by encoding an OCI1 manifest index.
func OCI1IndexFromManifest(manifestBlob []byte) (*OCI1Index, error) {
return manifest.OCI1IndexPublicFromManifest(manifestBlob)
}
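// A minimal composition sketch from a hypothetical caller (the digest and size values are made up):
//
//	desc := imgspecv1.Descriptor{
//		MediaType: imgspecv1.MediaTypeImageManifest,
//		Digest:    "sha256:4355a46b19d348dc2f57c046f8ef63d4538ebb936000f3c9ee954a27460dd865",
//		Size:      1234,
//		Platform:  &imgspecv1.Platform{OS: "linux", Architecture: "amd64"},
//	}
//	index := manifest.OCI1IndexFromComponents([]imgspecv1.Descriptor{desc}, nil)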

View File

@@ -0,0 +1,63 @@
// Package none implements a dummy BlobInfoCache which records no data.
package none
import (
"github.com/opencontainers/go-digest"
"go.podman.io/image/v5/internal/blobinfocache"
"go.podman.io/image/v5/types"
)
// noCache implements a dummy BlobInfoCache which records no data.
type noCache struct {
}
// NoCache implements BlobInfoCache by not recording any data.
//
// This exists primarily for implementations of configGetter for
// Manifest.Inspect, because configs only have one representation.
// Any use of BlobInfoCache with blobs should usually use at least a
// short-lived cache, ideally blobinfocache.DefaultCache.
var NoCache blobinfocache.BlobInfoCache2 = blobinfocache.FromBlobInfoCache(&noCache{})
// UncompressedDigest returns an uncompressed digest corresponding to anyDigest.
// May return anyDigest if it is known to be uncompressed.
// Returns "" if nothing is known about the digest (it may be compressed or uncompressed).
func (noCache) UncompressedDigest(anyDigest digest.Digest) digest.Digest {
return ""
}
// RecordDigestUncompressedPair records that the uncompressed version of anyDigest is uncompressed.
// It's allowed for anyDigest == uncompressed.
// WARNING: Only call this for LOCALLY VERIFIED data; don't record a digest pair just because some remote author claims so (e.g.
// because a manifest/config pair exists); otherwise the cache could be poisoned and allow substituting unexpected blobs.
// (Eventually, the DiffIDs in image config could detect the substitution, but that may be too late, and not all image formats contain that data.)
func (noCache) RecordDigestUncompressedPair(anyDigest digest.Digest, uncompressed digest.Digest) {
}
// UncompressedDigestForTOC returns an uncompressed digest corresponding to tocDigest.
// Returns "" if the uncompressed digest is unknown.
func (noCache) UncompressedDigestForTOC(tocDigest digest.Digest) digest.Digest {
return ""
}
// RecordTOCUncompressedPair records that the tocDigest corresponds to uncompressed.
// WARNING: Only call this for LOCALLY VERIFIED data; don't record a digest pair just because some remote author claims so (e.g.
// because a manifest/config pair exists); otherwise the cache could be poisoned and allow substituting unexpected blobs.
// (Eventually, the DiffIDs in image config could detect the substitution, but that may be too late, and not all image formats contain that data.)
func (noCache) RecordTOCUncompressedPair(tocDigest digest.Digest, uncompressed digest.Digest) {
}
// RecordKnownLocation records that a blob with the specified digest exists within the specified (transport, scope) scope,
// and can be reused given the opaque location data.
func (noCache) RecordKnownLocation(transport types.ImageTransport, scope types.BICTransportScope, blobDigest digest.Digest, location types.BICLocationReference) {
}
// CandidateLocations returns a prioritized, limited number of blobs and their locations that could possibly be reused
// within the specified (transport, scope) scope (if they still exist, which is not guaranteed).
//
// If !canSubstitute, the returned candidates will match the submitted digest exactly; if canSubstitute,
// data from previous RecordDigestUncompressedPair calls is used to also look up variants of the blob which have the same
// uncompressed digest.
func (noCache) CandidateLocations(transport types.ImageTransport, scope types.BICTransportScope, digest digest.Digest, canSubstitute bool) []types.BICReplacementCandidate {
return nil
}
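// A quick behavioral sketch (d, tr, and scope are hypothetical values of the right types):
//
//	_ = none.NoCache.UncompressedDigest(d)                  // always ""
//	none.NoCache.RecordDigestUncompressedPair(d, d)         // a no-op
//	_ = none.NoCache.CandidateLocations(tr, scope, d, true) // always nil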

View File

@@ -0,0 +1,80 @@
package internal
import "io"
// CompressorFunc writes the compressed stream to the given writer using the specified compression level.
//
// Compressing a stream may create integrity data that allows consuming the compressed byte stream
// while only using subsets of the compressed data (if the compressed data is seekable and most
// of the uncompressed data is already present via other means), while still protecting integrity
// of the compressed stream against unwanted modification. (In OCI container images, this metadata
// is usually carried in manifest annotations.)
//
// If the compression generates such metadata, it is written to the provided metadata map.
//
// The caller must call Close() on the stream (even if the input stream does not need closing!).
type CompressorFunc func(io.Writer, map[string]string, *int) (io.WriteCloser, error)
// DecompressorFunc returns the decompressed stream, given a compressed stream.
// The caller must call Close() on the decompressed stream (even if the compressed input stream does not need closing!).
type DecompressorFunc func(io.Reader) (io.ReadCloser, error)
// Algorithm is a compression algorithm that can be used for CompressStream.
type Algorithm struct {
name string
baseVariantName string
prefix []byte // Initial bytes of a stream compressed using this algorithm, or empty to disable detection.
decompressor DecompressorFunc
compressor CompressorFunc
}
// NewAlgorithm creates an Algorithm instance.
// nontrivialBaseVariantName is typically "".
// This function exists so that Algorithm instances can only be created by code that
// is allowed to import this internal subpackage.
func NewAlgorithm(name, nontrivialBaseVariantName string, prefix []byte, decompressor DecompressorFunc, compressor CompressorFunc) Algorithm {
baseVariantName := name
if nontrivialBaseVariantName != "" {
baseVariantName = nontrivialBaseVariantName
}
return Algorithm{
name: name,
baseVariantName: baseVariantName,
prefix: prefix,
decompressor: decompressor,
compressor: compressor,
}
}
// Name returns the name for the compression algorithm.
func (c Algorithm) Name() string {
return c.name
}
// BaseVariantName returns the name of the “base variant” of the compression algorithm.
// It is either equal to Name() of the same algorithm, or equal to Name() of some other Algorithm (the “base variant”).
// This supports a single level of “is-a” relationship between compression algorithms, e.g. where "zstd:chunked" data is valid "zstd" data.
func (c Algorithm) BaseVariantName() string {
return c.baseVariantName
}
// AlgorithmCompressor returns the compressor field of algo.
// This is a function instead of a public method so that it is only callable by code
// that is allowed to import this internal subpackage.
func AlgorithmCompressor(algo Algorithm) CompressorFunc {
return algo.compressor
}
// AlgorithmDecompressor returns the decompressor field of algo.
// This is a function instead of a public method so that it is only callable by code
// that is allowed to import this internal subpackage.
func AlgorithmDecompressor(algo Algorithm) DecompressorFunc {
return algo.decompressor
}
// AlgorithmPrefix returns the prefix field of algo.
// This is a function instead of a public method so that it is only callable by code
// that is allowed to import this internal subpackage.
func AlgorithmPrefix(algo Algorithm) []byte {
return algo.prefix
}
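// A minimal sketch of prefix-based format detection built on these accessors
// (in-package; algos is a hypothetical slice of registered Algorithms, and bytes is imported):
//
//	func detect(start []byte, algos []Algorithm) (Algorithm, bool) {
//		for _, a := range algos {
//			if p := AlgorithmPrefix(a); len(p) > 0 && bytes.HasPrefix(start, p) {
//				return a, true
//			}
//		}
//		return Algorithm{}, false // unknown, or detection disabled (empty prefix)
//	}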

View File

@@ -0,0 +1,41 @@
package types
import (
"go.podman.io/image/v5/pkg/compression/internal"
)
// DecompressorFunc returns the decompressed stream, given a compressed stream.
// The caller must call Close() on the decompressed stream (even if the compressed input stream does not need closing!).
type DecompressorFunc = internal.DecompressorFunc
// Algorithm is a compression algorithm provided and supported by pkg/compression.
// It can't be supplied from the outside.
type Algorithm = internal.Algorithm
const (
// GzipAlgorithmName is the name used by pkg/compression.Gzip.
// NOTE: Importing only this /types package does not inherently guarantee a Gzip algorithm
// will actually be available. (In fact it is intended for this types package not to depend
// on any of the implementations.)
GzipAlgorithmName = "gzip"
// Bzip2AlgorithmName is the name used by pkg/compression.Bzip2.
// NOTE: Importing only this /types package does not inherently guarantee a Bzip2 algorithm
// will actually be available. (In fact it is intended for this types package not to depend
// on any of the implementations.)
Bzip2AlgorithmName = "bzip2"
// XzAlgorithmName is the name used by pkg/compression.Xz.
// NOTE: Importing only this /types package does not inherently guarantee an Xz algorithm
// will actually be available. (In fact it is intended for this types package not to depend
// on any of the implementations.)
XzAlgorithmName = "Xz"
// ZstdAlgorithmName is the name used by pkg/compression.Zstd.
// NOTE: Importing only this /types package does not inherently guarantee a Zstd algorithm
// will actually be available. (In fact it is intended for this types package not to depend
// on any of the implementations.)
ZstdAlgorithmName = "zstd"
// ZstdChunkedAlgorithmName is the name used by pkg/compression.ZstdChunked.
// NOTE: Importing only this /types package does not inherently guarantee a ZstdChunked algorithm
// will actually be available. (In fact it is intended for this types package not to depend
// on any of the implementations.)
ZstdChunkedAlgorithmName = "zstd:chunked"
)

View File

@@ -0,0 +1,950 @@
package config
import (
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io/fs"
"iter"
"maps"
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
helperclient "github.com/docker/docker-credential-helpers/client"
"github.com/docker/docker-credential-helpers/credentials"
"github.com/sirupsen/logrus"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/multierr"
"go.podman.io/image/v5/internal/set"
"go.podman.io/image/v5/pkg/sysregistriesv2"
"go.podman.io/image/v5/types"
"go.podman.io/storage/pkg/fileutils"
"go.podman.io/storage/pkg/homedir"
"go.podman.io/storage/pkg/ioutils"
)
type dockerAuthConfig struct {
Auth string `json:"auth,omitempty"`
IdentityToken string `json:"identitytoken,omitempty"`
}
type dockerConfigFile struct {
AuthConfigs map[string]dockerAuthConfig `json:"auths"`
CredHelpers map[string]string `json:"credHelpers,omitempty"`
}
var (
defaultPerUIDPathFormat = filepath.FromSlash("/run/containers/%d/auth.json")
xdgConfigHomePath = filepath.FromSlash("containers/auth.json")
xdgRuntimeDirPath = filepath.FromSlash("containers/auth.json")
dockerHomePath = filepath.FromSlash(".docker/config.json")
dockerLegacyHomePath = ".dockercfg"
nonLinuxAuthFilePath = filepath.FromSlash(".config/containers/auth.json")
// ErrNotLoggedIn is returned for users not logged into a registry
// that they are trying to logout of
ErrNotLoggedIn = errors.New("not logged in")
// ErrNotSupported is returned for unsupported methods
ErrNotSupported = errors.New("not supported")
)
// authPath combines a path to a file with container registry credentials,
// along with expected properties of that path (currently just whether it's
// legacy format or not).
type authPath struct {
path string
legacyFormat bool
}
// newAuthPathDefault constructs an authPath in non-legacy format.
func newAuthPathDefault(path string) authPath {
return authPath{path: path, legacyFormat: false}
}
// GetAllCredentials returns the registry credentials for all registries stored
// in any of the configured credential helpers.
func GetAllCredentials(sys *types.SystemContext) (map[string]types.DockerAuthConfig, error) {
// To keep things simple, let's first extract all registries from all
// possible sources, and then call `GetCredentials` on them. That
// prevents us from having to reverse engineer the logic in
// `GetCredentials`.
allKeys := set.New[string]()
// To use GetCredentials, we must at least convert the URL forms into host names.
// While we're at it, we'll also canonicalize docker.io to the standard format.
normalizedDockerIORegistry := normalizeRegistry("docker.io")
helpers, err := sysregistriesv2.CredentialHelpers(sys)
if err != nil {
return nil, err
}
for _, helper := range helpers {
switch helper {
// Special-case the built-in helper for auth files.
case sysregistriesv2.AuthenticationFileHelper:
for _, path := range getAuthFilePaths(sys, homedir.Get()) {
// parse returns an empty map in case the path doesn't exist.
fileContents, err := path.parse()
if err != nil {
return nil, fmt.Errorf("reading JSON file %q: %w", path.path, err)
}
// Credential helpers in the auth file have a
// direct mapping to a registry, so we can just
// walk the map.
allKeys.AddSeq(maps.Keys(fileContents.CredHelpers))
for key := range fileContents.AuthConfigs {
key := normalizeAuthFileKey(key, path.legacyFormat)
if key == normalizedDockerIORegistry {
key = "docker.io"
}
allKeys.Add(key)
}
}
// External helpers.
default:
creds, err := listCredsInCredHelper(helper)
if err != nil {
logrus.Debugf("Error listing credentials stored in credential helper %s: %v", helper, err)
if errors.Is(err, exec.ErrNotFound) {
creds = nil // It's okay if the helper doesn't exist.
} else {
return nil, err
}
}
allKeys.AddSeq(maps.Keys(creds))
}
}
// Now use `GetCredentials` to get the specific auth configs for each
// previously listed registry.
allCreds := make(map[string]types.DockerAuthConfig)
for key := range allKeys.All() {
creds, err := GetCredentials(sys, key)
if err != nil {
// Note: we rely on the logging in `GetCredentials`.
return nil, err
}
if creds != (types.DockerAuthConfig{}) {
allCreds[key] = creds
}
}
return allCreds, nil
}
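// A minimal usage sketch (a nil SystemContext selects the defaults):
//
//	allCreds, err := config.GetAllCredentials(nil)
//	if err != nil {
//		return err
//	}
//	for registry := range allCreds {
//		fmt.Println("credentials configured for", registry)
//	}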
// getAuthFilePaths returns a slice of authPaths based on the system context
// in the order they should be searched. Note that some paths may not exist.
// The homeDir parameter should always be homedir.Get(), and is only intended to be overridden
// by tests.
func getAuthFilePaths(sys *types.SystemContext, homeDir string) []authPath {
paths := []authPath{}
pathToAuth, userSpecifiedPath, err := getPathToAuth(sys)
if err == nil {
paths = append(paths, pathToAuth)
} else {
// An error means that the path set for XDG_RUNTIME_DIR does not exist,
// but we don't want to completely fail in the case that the user is pulling a public image.
// Log the error as a warning instead and move on to pulling the image.
logrus.Warnf("%v: Trying to pull image in the event that it is a public image.", err)
}
if !userSpecifiedPath {
xdgCfgHome := os.Getenv("XDG_CONFIG_HOME")
if xdgCfgHome == "" {
xdgCfgHome = filepath.Join(homeDir, ".config")
}
paths = append(paths, newAuthPathDefault(filepath.Join(xdgCfgHome, xdgConfigHomePath)))
if dockerConfig := os.Getenv("DOCKER_CONFIG"); dockerConfig != "" {
paths = append(paths, newAuthPathDefault(filepath.Join(dockerConfig, "config.json")))
} else {
paths = append(paths,
newAuthPathDefault(filepath.Join(homeDir, dockerHomePath)),
)
}
paths = append(paths,
authPath{path: filepath.Join(homeDir, dockerLegacyHomePath), legacyFormat: true},
)
}
return paths
}
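// With no user-specified path, the effective search order assembled above is:
//  1. the getPathToAuth result (on Linux, $XDG_RUNTIME_DIR/containers/auth.json or /run/containers/$UID/auth.json)
//  2. $XDG_CONFIG_HOME/containers/auth.json (defaulting to ~/.config/containers/auth.json)
//  3. $DOCKER_CONFIG/config.json, or ~/.docker/config.json if $DOCKER_CONFIG is unset
//  4. ~/.dockercfg (legacy format)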
// GetCredentials returns the registry credentials matching key, appropriate for
// sys and the user's configuration.
// If an entry is not found, an empty struct is returned.
// A valid key is a repository, a namespace within a registry, or a registry hostname.
//
// GetCredentialsForRef should almost always be used in favor of this API.
func GetCredentials(sys *types.SystemContext, key string) (types.DockerAuthConfig, error) {
return getCredentialsWithHomeDir(sys, key, homedir.Get())
}
// GetCredentialsForRef returns the registry credentials necessary for
// accessing ref on the registry ref points to,
// appropriate for sys and the user's configuration.
// If an entry is not found, an empty struct is returned.
func GetCredentialsForRef(sys *types.SystemContext, ref reference.Named) (types.DockerAuthConfig, error) {
return getCredentialsWithHomeDir(sys, ref.Name(), homedir.Get())
}
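// A minimal usage sketch (the registry and repository are hypothetical):
//
//	ref, err := reference.ParseNormalizedNamed("quay.io/ns/image")
//	if err != nil {
//		return err
//	}
//	creds, err := config.GetCredentialsForRef(nil, ref)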
// getCredentialsWithHomeDir is an internal implementation detail of
// GetCredentialsForRef and GetCredentials. It exists only to allow testing it
// with an artificial home directory.
func getCredentialsWithHomeDir(sys *types.SystemContext, key, homeDir string) (types.DockerAuthConfig, error) {
_, err := validateKey(key)
if err != nil {
return types.DockerAuthConfig{}, err
}
if sys != nil && sys.DockerAuthConfig != nil {
logrus.Debugf("Returning credentials for %s from DockerAuthConfig", key)
return *sys.DockerAuthConfig, nil
}
var registry string // We compute this once because it is used in several places.
if firstSlash := strings.IndexRune(key, '/'); firstSlash != -1 {
registry = key[:firstSlash]
} else {
registry = key
}
// Anonymous function to query credentials from auth files.
getCredentialsFromAuthFiles := func() (types.DockerAuthConfig, string, error) {
for _, path := range getAuthFilePaths(sys, homeDir) {
creds, err := findCredentialsInFile(key, registry, path)
if err != nil {
return types.DockerAuthConfig{}, "", err
}
if creds != (types.DockerAuthConfig{}) {
return creds, path.path, nil
}
}
return types.DockerAuthConfig{}, "", nil
}
helpers, err := sysregistriesv2.CredentialHelpers(sys)
if err != nil {
return types.DockerAuthConfig{}, err
}
var multiErr []error
for _, helper := range helpers {
var (
creds types.DockerAuthConfig
helperKey string
credHelperPath string
err error
)
switch helper {
// Special-case the built-in helper for auth files.
case sysregistriesv2.AuthenticationFileHelper:
helperKey = key
creds, credHelperPath, err = getCredentialsFromAuthFiles()
// External helpers.
default:
// This intentionally uses "registry", not "key"; we don't support namespaced
// credentials in helpers, but a "registry" is a valid parent of "key".
helperKey = registry
creds, err = getCredsFromCredHelper(helper, registry)
}
if err != nil {
logrus.Debugf("Error looking up credentials for %s in credential helper %s: %v", helperKey, helper, err)
multiErr = append(multiErr, err)
continue
}
if creds != (types.DockerAuthConfig{}) {
msg := fmt.Sprintf("Found credentials for %s in credential helper %s", helperKey, helper)
if credHelperPath != "" {
msg = fmt.Sprintf("%s in file %s", msg, credHelperPath)
}
logrus.Debug(msg)
return creds, nil
}
}
if multiErr != nil {
return types.DockerAuthConfig{}, multierr.Format("errors looking up credentials:\n\t* ", "\n\t* ", "\n", multiErr)
}
logrus.Debugf("No credentials for %s found", key)
return types.DockerAuthConfig{}, nil
}
// GetAuthentication returns the registry credentials matching key, appropriate for
// sys and the user's configuration.
// If an entry is not found, an empty struct is returned.
// A valid key is a repository, a namespace within a registry, or a registry hostname.
//
// Deprecated: This API only has support for username and password. To get the
// support for oauth2 in container registry authentication, we added the new
// GetCredentialsForRef and GetCredentials API. The new API should be used and this API is kept to
// maintain backward compatibility.
func GetAuthentication(sys *types.SystemContext, key string) (string, string, error) {
return getAuthenticationWithHomeDir(sys, key, homedir.Get())
}
// getAuthenticationWithHomeDir is an internal implementation detail of GetAuthentication,
// it exists only to allow testing it with an artificial home directory.
func getAuthenticationWithHomeDir(sys *types.SystemContext, key, homeDir string) (string, string, error) {
creds, err := getCredentialsWithHomeDir(sys, key, homeDir)
if err != nil {
return "", "", err
}
if creds.IdentityToken != "" {
return "", "", fmt.Errorf("non-empty identity token found and this API doesn't support it: %w", ErrNotSupported)
}
return creds.Username, creds.Password, nil
}
// SetCredentials stores the username and password in a location
// appropriate for sys and the user's configuration.
// A valid key is a repository, a namespace within a registry, or a registry hostname;
// using forms other than just a registry may fail depending on configuration.
// Returns a human-readable description of the location that was updated.
// NOTE: The return value is only intended to be read by humans; its form is not an API,
// it may change (or new forms can be added) any time.
func SetCredentials(sys *types.SystemContext, key, username, password string) (string, error) {
helpers, jsonEditor, key, isNamespaced, err := prepareForEdit(sys, key, true)
if err != nil {
return "", err
}
// Make sure to collect all errors.
var multiErr []error
for _, helper := range helpers {
var desc string
var err error
switch helper {
// Special-case the built-in helpers for auth files.
case sysregistriesv2.AuthenticationFileHelper:
desc, err = jsonEditor(sys, func(fileContents *dockerConfigFile) (bool, string, error) {
if ch, exists := fileContents.CredHelpers[key]; exists {
if isNamespaced {
return false, "", unsupportedNamespaceErr(ch)
}
desc, err := setCredsInCredHelper(ch, key, username, password)
if err != nil {
return false, "", err
}
return false, desc, nil
}
creds := base64.StdEncoding.EncodeToString([]byte(username + ":" + password))
newCreds := dockerAuthConfig{Auth: creds}
fileContents.AuthConfigs[key] = newCreds
return true, "", nil
})
// External helpers.
default:
if isNamespaced {
err = unsupportedNamespaceErr(helper)
} else {
desc, err = setCredsInCredHelper(helper, key, username, password)
}
}
if err != nil {
multiErr = append(multiErr, err)
logrus.Debugf("Error storing credentials for %s in credential helper %s: %v", key, helper, err)
continue
}
logrus.Debugf("Stored credentials for %s in credential helper %s", key, helper)
return desc, nil
}
return "", multierr.Format("Errors storing credentials\n\t* ", "\n\t* ", "\n", multiErr)
}
func unsupportedNamespaceErr(helper string) error {
return fmt.Errorf("namespaced key is not supported for credential helper %s", helper)
}
// SetAuthentication stores the username and password in the credential helper or file
// See the documentation of SetCredentials for format of "key"
func SetAuthentication(sys *types.SystemContext, key, username, password string) error {
_, err := SetCredentials(sys, key, username, password)
return err
}
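// A minimal login/logout round trip (the registry and credentials are hypothetical):
//
//	desc, err := config.SetCredentials(nil, "registry.example.com", "user", "secret")
//	if err != nil {
//		return err
//	}
//	fmt.Println("stored in", desc) // an auth.json path or "credential helper: <name>"
//	if err := config.RemoveAuthentication(nil, "registry.example.com"); err != nil {
//		return err // config.ErrNotLoggedIn if nothing was stored for that key
//	}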
// RemoveAuthentication removes credentials for `key` from all possible
// sources such as credential helpers and auth files.
// A valid key is a repository, a namespace within a registry, or a registry hostname;
// using forms other than just a registry may fail depending on configuration.
func RemoveAuthentication(sys *types.SystemContext, key string) error {
helpers, jsonEditor, key, isNamespaced, err := prepareForEdit(sys, key, true)
if err != nil {
return err
}
isLoggedIn := false
removeFromCredHelper := func(helper string) error {
if isNamespaced {
logrus.Debugf("Not removing credentials because namespaced keys are not supported for the credential helper: %s", helper)
return nil
}
err := deleteCredsFromCredHelper(helper, key)
if err == nil {
logrus.Debugf("Credentials for %q were deleted from credential helper %s", key, helper)
isLoggedIn = true
return nil
}
if credentials.IsErrCredentialsNotFoundMessage(err.Error()) {
logrus.Debugf("Not logged in to %s with credential helper %s", key, helper)
return nil
}
return fmt.Errorf("removing credentials for %s from credential helper %s: %w", key, helper, err)
}
var multiErr []error
for _, helper := range helpers {
var err error
switch helper {
// Special-case the built-in helper for auth files.
case sysregistriesv2.AuthenticationFileHelper:
_, err = jsonEditor(sys, func(fileContents *dockerConfigFile) (bool, string, error) {
var helperErr error
if innerHelper, exists := fileContents.CredHelpers[key]; exists {
helperErr = removeFromCredHelper(innerHelper)
}
if _, ok := fileContents.AuthConfigs[key]; ok {
isLoggedIn = true
delete(fileContents.AuthConfigs, key)
}
return true, "", helperErr
})
if err != nil {
multiErr = append(multiErr, err)
}
// External helpers.
default:
if err := removeFromCredHelper(helper); err != nil {
multiErr = append(multiErr, err)
}
}
}
if multiErr != nil {
return multierr.Format("errors removing credentials\n\t* ", "\n\t* ", "\n", multiErr)
}
if !isLoggedIn {
return ErrNotLoggedIn
}
return nil
}
// RemoveAllAuthentication deletes all the credentials stored in credential
// helpers and auth files.
func RemoveAllAuthentication(sys *types.SystemContext) error {
helpers, jsonEditor, _, _, err := prepareForEdit(sys, "", false)
if err != nil {
return err
}
var multiErr []error
for _, helper := range helpers {
var err error
switch helper {
// Special-case the built-in helper for auth files.
case sysregistriesv2.AuthenticationFileHelper:
_, err = jsonEditor(sys, func(fileContents *dockerConfigFile) (bool, string, error) {
for registry, helper := range fileContents.CredHelpers {
// Helpers in auth files are expected
// to exist, so no special treatment
// for them.
if err := deleteCredsFromCredHelper(helper, registry); err != nil {
return false, "", err
}
}
fileContents.CredHelpers = make(map[string]string)
fileContents.AuthConfigs = make(map[string]dockerAuthConfig)
return true, "", nil
})
// External helpers.
default:
var creds map[string]string
creds, err = listCredsInCredHelper(helper)
if err != nil {
if errors.Is(err, exec.ErrNotFound) {
// It's okay if the helper doesn't exist.
continue
} else {
break
}
}
for registry := range creds {
err = deleteCredsFromCredHelper(helper, registry)
if err != nil {
break
}
}
}
if err != nil {
logrus.Debugf("Error removing credentials from credential helper %s: %v", helper, err)
multiErr = append(multiErr, err)
continue
}
logrus.Debugf("All credentials removed from credential helper %s", helper)
}
if multiErr != nil {
return multierr.Format("errors removing all credentials:\n\t* ", "\n\t* ", "\n", multiErr)
}
return nil
}
// prepareForEdit processes sys and key (if keyRelevant) to return:
// - a list of credential helpers
// - a function which can be used to edit the JSON file
// - the key value to actually use in credential helpers / JSON
// - a boolean which is true if key is namespaced (and should not be used with credential helpers).
func prepareForEdit(sys *types.SystemContext, key string, keyRelevant bool) ([]string, func(*types.SystemContext, func(*dockerConfigFile) (bool, string, error)) (string, error), string, bool, error) {
var isNamespaced bool
if keyRelevant {
ns, err := validateKey(key)
if err != nil {
return nil, nil, "", false, err
}
isNamespaced = ns
}
if sys != nil && sys.DockerCompatAuthFilePath != "" {
if sys.AuthFilePath != "" {
return nil, nil, "", false, errors.New("AuthFilePath and DockerCompatAuthFilePath can not be set simultaneously")
}
if keyRelevant {
if isNamespaced {
return nil, nil, "", false, fmt.Errorf("Credentials cannot be recorded in Docker-compatible format with namespaced key %q", key)
}
if key == "docker.io" {
key = "https://index.docker.io/v1/"
}
}
// Do not use helpers defined in sysregistriesv2 because Docker isn't aware of them.
return []string{sysregistriesv2.AuthenticationFileHelper}, modifyDockerConfigJSON, key, false, nil
}
helpers, err := sysregistriesv2.CredentialHelpers(sys)
if err != nil {
return nil, nil, "", false, err
}
return helpers, modifyJSON, key, isNamespaced, nil
}
func listCredsInCredHelper(credHelper string) (map[string]string, error) {
helperName := fmt.Sprintf("docker-credential-%s", credHelper)
p := helperclient.NewShellProgramFunc(helperName)
return helperclient.List(p)
}
// getPathToAuth gets the path of the auth.json file used for reading and writing credentials,
// and a boolean indicating whether the return value came from an explicit user choice (i.e. not defaults)
func getPathToAuth(sys *types.SystemContext) (authPath, bool, error) {
return getPathToAuthWithOS(sys, runtime.GOOS)
}
// getPathToAuthWithOS is an internal implementation detail of getPathToAuth,
// it exists only to allow testing it with an artificial runtime.GOOS.
func getPathToAuthWithOS(sys *types.SystemContext, goOS string) (authPath, bool, error) {
if sys != nil {
if sys.AuthFilePath != "" && sys.DockerCompatAuthFilePath != "" {
return authPath{}, false, errors.New("AuthFilePath and DockerCompatAuthFilePath can not be set simultaneously")
}
if sys.AuthFilePath != "" {
return newAuthPathDefault(sys.AuthFilePath), true, nil
}
// When reading, we can process auth.json and Docker's config.json with the same code.
// When writing, prepareForEdit chooses an appropriate jsonEditor implementation.
if sys.DockerCompatAuthFilePath != "" {
return newAuthPathDefault(sys.DockerCompatAuthFilePath), true, nil
}
if sys.LegacyFormatAuthFilePath != "" {
return authPath{path: sys.LegacyFormatAuthFilePath, legacyFormat: true}, true, nil
}
// Note: RootForImplicitAbsolutePaths should not affect paths starting with $HOME
if sys.RootForImplicitAbsolutePaths != "" && goOS == "linux" {
return newAuthPathDefault(filepath.Join(sys.RootForImplicitAbsolutePaths, fmt.Sprintf(defaultPerUIDPathFormat, os.Getuid()))), false, nil
}
}
if goOS != "linux" {
return newAuthPathDefault(filepath.Join(homedir.Get(), nonLinuxAuthFilePath)), false, nil
}
runtimeDir := os.Getenv("XDG_RUNTIME_DIR")
if runtimeDir != "" {
// This function does not in general need to separately check that the returned path exists; that's racy, and callers will fail accessing the file anyway.
// We are checking for fs.ErrNotExist here only to give the user better guidance what to do in this special case.
err := fileutils.Exists(runtimeDir)
if errors.Is(err, fs.ErrNotExist) {
// This means the user set the XDG_RUNTIME_DIR variable and either forgot to create the directory
// or made a typo while setting the environment variable,
// so return an error referring to $XDG_RUNTIME_DIR instead of xdgRuntimeDirPath inside.
return authPath{}, false, fmt.Errorf("%q directory set by $XDG_RUNTIME_DIR does not exist. Either create the directory or unset $XDG_RUNTIME_DIR.: %w", runtimeDir, err)
} // else ignore err and let the caller fail accessing xdgRuntimeDirPath.
return newAuthPathDefault(filepath.Join(runtimeDir, xdgRuntimeDirPath)), false, nil
}
return newAuthPathDefault(fmt.Sprintf(defaultPerUIDPathFormat, os.Getuid())), false, nil
}
// parse unmarshals the credentials stored in the auth.json file and returns it,
// or returns an empty dockerConfigFile data structure if auth.json does not exist.
// If the file exists and is empty, this function returns an error.
func (path authPath) parse() (dockerConfigFile, error) {
var fileContents dockerConfigFile
raw, err := os.ReadFile(path.path)
if err != nil {
if os.IsNotExist(err) {
fileContents.AuthConfigs = map[string]dockerAuthConfig{}
return fileContents, nil
}
return dockerConfigFile{}, err
}
if path.legacyFormat {
if err = json.Unmarshal(raw, &fileContents.AuthConfigs); err != nil {
return dockerConfigFile{}, fmt.Errorf("unmarshaling JSON at %q: %w", path.path, err)
}
return fileContents, nil
}
if err = json.Unmarshal(raw, &fileContents); err != nil {
return dockerConfigFile{}, fmt.Errorf("unmarshaling JSON at %q: %w", path.path, err)
}
if fileContents.AuthConfigs == nil {
fileContents.AuthConfigs = map[string]dockerAuthConfig{}
}
if fileContents.CredHelpers == nil {
fileContents.CredHelpers = make(map[string]string)
}
return fileContents, nil
}
// modifyJSON finds an auth.json file, calls editor on the contents, and
// writes it back if editor returns true.
// Returns a human-readable description of the file, to be returned by SetCredentials.
//
// The editor may also return a human-readable description of the updated location; if it is "",
// the file itself is used.
func modifyJSON(sys *types.SystemContext, editor func(fileContents *dockerConfigFile) (bool, string, error)) (string, error) {
path, _, err := getPathToAuth(sys)
if err != nil {
return "", err
}
if path.legacyFormat {
return "", fmt.Errorf("writes to %s using legacy format are not supported", path.path)
}
dir := filepath.Dir(path.path)
if err = os.MkdirAll(dir, 0700); err != nil {
return "", err
}
fileContents, err := path.parse()
if err != nil {
return "", fmt.Errorf("reading JSON file %q: %w", path.path, err)
}
updated, description, err := editor(&fileContents)
if err != nil {
return "", fmt.Errorf("updating %q: %w", path.path, err)
}
if updated {
newData, err := json.MarshalIndent(fileContents, "", "\t")
if err != nil {
return "", fmt.Errorf("marshaling JSON %q: %w", path.path, err)
}
if err = ioutils.AtomicWriteFile(path.path, newData, 0600); err != nil {
return "", fmt.Errorf("writing to file %q: %w", path.path, err)
}
}
if description == "" {
description = path.path
}
return description, nil
}
// modifyDockerConfigJSON finds a docker config.json file, calls editor on the contents, and
// writes it back if editor returns true.
// Returns a human-readable description of the file, to be returned by SetCredentials.
//
// The editor may also return a human-readable description of the updated location; if it is "",
// the file itself is used.
func modifyDockerConfigJSON(sys *types.SystemContext, editor func(fileContents *dockerConfigFile) (bool, string, error)) (string, error) {
if sys == nil || sys.DockerCompatAuthFilePath == "" {
return "", errors.New("internal error: modifyDockerConfigJSON called with DockerCompatAuthFilePath not set")
}
path := sys.DockerCompatAuthFilePath
dir := filepath.Dir(path)
if err := os.MkdirAll(dir, 0700); err != nil {
return "", err
}
// Try hard not to clobber fields we don't understand, even fields which may be added in future Docker versions.
var rawContents map[string]json.RawMessage
originalBytes, err := os.ReadFile(path)
switch {
case err == nil:
if err := json.Unmarshal(originalBytes, &rawContents); err != nil {
return "", fmt.Errorf("unmarshaling JSON at %q: %w", path, err)
}
case errors.Is(err, fs.ErrNotExist):
rawContents = map[string]json.RawMessage{}
default: // err != nil
return "", err
}
syntheticContents := dockerConfigFile{
AuthConfigs: map[string]dockerAuthConfig{},
CredHelpers: map[string]string{},
}
// json.Unmarshal also falls back to case-insensitive field matching; this code does not do that. Presumably
// config.json is mostly maintained by machines doing `docker login`, so the files should, hopefully, not contain field names with
// unexpected case.
if rawAuths, ok := rawContents["auths"]; ok {
// This conversion will lose fields we don't know about; when updating an entry, we can't tell whether an unknown field
// should be preserved or discarded (because it is made obsolete/unwanted with the new credentials).
// It might make sense to track which entries of "auths" we actually modified, and to not touch any others.
if err := json.Unmarshal(rawAuths, &syntheticContents.AuthConfigs); err != nil {
return "", fmt.Errorf(`unmarshaling "auths" in JSON at %q: %w`, path, err)
}
}
if rawCH, ok := rawContents["credHelpers"]; ok {
if err := json.Unmarshal(rawCH, &syntheticContents.CredHelpers); err != nil {
return "", fmt.Errorf(`unmarshaling "credHelpers" in JSON at %q: %w`, path, err)
}
}
updated, description, err := editor(&syntheticContents)
if err != nil {
return "", fmt.Errorf("updating %q: %w", path, err)
}
if updated {
rawAuths, err := json.MarshalIndent(syntheticContents.AuthConfigs, "", "\t")
if err != nil {
return "", fmt.Errorf("marshaling JSON %q: %w", path, err)
}
rawContents["auths"] = rawAuths
// We never modify syntheticContents.CredHelpers, so we don't need to update it.
newData, err := json.MarshalIndent(rawContents, "", "\t")
if err != nil {
return "", fmt.Errorf("marshaling JSON %q: %w", path, err)
}
if err = ioutils.AtomicWriteFile(path, newData, 0600); err != nil {
return "", fmt.Errorf("writing to file %q: %w", path, err)
}
}
if description == "" {
description = path
}
return description, nil
}
func getCredsFromCredHelper(credHelper, registry string) (types.DockerAuthConfig, error) {
helperName := fmt.Sprintf("docker-credential-%s", credHelper)
p := helperclient.NewShellProgramFunc(helperName)
creds, err := helperclient.Get(p, registry)
if err != nil {
if credentials.IsErrCredentialsNotFoundMessage(err.Error()) {
logrus.Debugf("Not logged in to %s with credential helper %s", registry, credHelper)
err = nil
}
return types.DockerAuthConfig{}, err
}
switch creds.Username {
case "<token>":
return types.DockerAuthConfig{
IdentityToken: creds.Secret,
}, nil
default:
return types.DockerAuthConfig{
Username: creds.Username,
Password: creds.Secret,
}, nil
}
}
// setCredsInCredHelper stores (username, password) for registry in credHelper.
// Returns a human-readable description of the destination, to be returned by SetCredentials.
func setCredsInCredHelper(credHelper, registry, username, password string) (string, error) {
helperName := fmt.Sprintf("docker-credential-%s", credHelper)
p := helperclient.NewShellProgramFunc(helperName)
creds := &credentials.Credentials{
ServerURL: registry,
Username: username,
Secret: password,
}
if err := helperclient.Store(p, creds); err != nil {
return "", err
}
return fmt.Sprintf("credential helper: %s", credHelper), nil
}
func deleteCredsFromCredHelper(credHelper, registry string) error {
helperName := fmt.Sprintf("docker-credential-%s", credHelper)
p := helperclient.NewShellProgramFunc(helperName)
return helperclient.Erase(p, registry)
}
// findCredentialsInFile looks for credentials matching "key"
// (which is "registry" or a namespace in "registry") in "path".
func findCredentialsInFile(key, registry string, path authPath) (types.DockerAuthConfig, error) {
fileContents, err := path.parse()
if err != nil {
return types.DockerAuthConfig{}, fmt.Errorf("reading JSON file %q: %w", path.path, err)
}
// First try cred helpers. They should always be normalized.
// This intentionally uses "registry", not "key"; we don't support namespaced
// credentials in helpers.
if ch, exists := fileContents.CredHelpers[registry]; exists {
logrus.Debugf("Looking up in credential helper %s based on credHelpers entry in %s", ch, path.path)
return getCredsFromCredHelper(ch, registry)
}
// Support sub-registry namespaces in auth.
// (This is not a feature of ~/.docker/config.json; we support it even for
// those files as an extension.)
//
// Repo or namespace keys are only supported as exact matches. For registry
// keys we prefer exact matches as well.
for key := range authKeyLookupOrder(key, registry, path.legacyFormat) {
if val, exists := fileContents.AuthConfigs[key]; exists {
return decodeDockerAuth(path.path, key, val)
}
}
// bad luck; let's normalize the entries first
// This primarily happens for legacyFormat, which for a time used API URLs
// (http[s]://…/v1/) as keys.
// Secondarily, (docker login) accepted URLs with no normalization for
// several years, and matched registry hostnames against that, so support
// those entries even in non-legacyFormat ~/.docker/config.json.
// The docker.io registry still uses the /v1/ key with a special host name,
// so account for that as well.
registry = normalizeRegistry(registry)
for k, v := range fileContents.AuthConfigs {
if normalizeAuthFileKey(k, path.legacyFormat) == registry {
return decodeDockerAuth(path.path, k, v)
}
}
// Only log this if we found nothing; getCredentialsWithHomeDir logs the
// source of found data.
logrus.Debugf("No credentials matching %s found in %s", key, path.path)
return types.DockerAuthConfig{}, nil
}
// authKeyLookupOrder returns a sequence for lookup keys matching (key or registry)
// in file with legacyFormat, in order from the best match to worst.
// For example, in a non-legacy file,
// when given a repository key "quay.io/repo/ns/image", it returns
// - quay.io/repo/ns/image
// - quay.io/repo/ns
// - quay.io/repo
// - quay.io
func authKeyLookupOrder(key, registry string, legacyFormat bool) iter.Seq[string] {
return func(yield func(string) bool) {
if legacyFormat {
_ = yield(registry) // We stop in any case
return
}
for {
if !yield(key) {
return
}
lastSlash := strings.LastIndex(key, "/")
if lastSlash == -1 {
break
}
key = key[:lastSlash]
}
}
}
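// A minimal consumption sketch (Go 1.23 range-over-func, matching the iter import above):
//
//	for candidate := range authKeyLookupOrder("quay.io/repo/ns/image", "quay.io", false) {
//		fmt.Println(candidate)
//	}
//	// Prints: quay.io/repo/ns/image, quay.io/repo/ns, quay.io/repo, quay.io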
// decodeDockerAuth decodes the username and password from conf,
// which is entry key in path.
func decodeDockerAuth(path, key string, conf dockerAuthConfig) (types.DockerAuthConfig, error) {
decoded, err := base64.StdEncoding.DecodeString(conf.Auth)
if err != nil {
return types.DockerAuthConfig{}, err
}
user, passwordPart, valid := strings.Cut(string(decoded), ":")
if !valid {
// if it's invalid just skip, as docker does
if len(decoded) > 0 { // Docker writes "auths": { "$host": {} } entries if a credential helper is used, don't warn about those
logrus.Warnf(`Error parsing the "auth" field of a credential entry %q in %q, missing colon`, key, path) // Don't include the text of decoded, because that might put secrets into a log.
} else {
logrus.Debugf("Found an empty credential entry %q in %q (an unhandled credential helper marker?), moving on", key, path)
}
return types.DockerAuthConfig{}, nil
}
password := strings.Trim(passwordPart, "\x00")
return types.DockerAuthConfig{
Username: user,
Password: password,
IdentityToken: conf.IdentityToken,
}, nil
}
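// A sketch of the "auth" field encoding that this function reverses
// (hypothetical credentials; never log real values):
//
//	auth := base64.StdEncoding.EncodeToString([]byte("user:secret"))
//	// auth == "dXNlcjpzZWNyZXQ=", stored as {"auths": {"registry.example.com": {"auth": "dXNlcjpzZWNyZXQ="}}}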
// normalizeAuthFileKey takes a key, converts it to a host name and normalizes
// the resulting registry.
func normalizeAuthFileKey(key string, legacyFormat bool) string {
stripped := strings.TrimPrefix(key, "http://")
stripped = strings.TrimPrefix(stripped, "https://")
if legacyFormat || stripped != key {
stripped, _, _ = strings.Cut(stripped, "/")
}
return normalizeRegistry(stripped)
}
// normalizeRegistry converts the provided registry if a known docker.io host
// is provided.
func normalizeRegistry(registry string) string {
switch registry {
case "registry-1.docker.io", "docker.io":
return "index.docker.io"
}
return registry
}
// validateKey verifies that the input key does not have a prefix that is not
// allowed and returns an indicator if the key is namespaced.
func validateKey(key string) (bool, error) {
if strings.HasPrefix(key, "http://") || strings.HasPrefix(key, "https://") {
return false, fmt.Errorf("key %s contains http[s]:// prefix", key)
}
// Ideally this should only accept explicitly valid keys, compare
// validateIdentityRemappingPrefix. For now, just reject values that look
// like tagged or digested values.
if strings.ContainsRune(key, '@') {
return false, fmt.Errorf(`key %s contains a '@' character`, key)
}
firstSlash := strings.IndexRune(key, '/')
isNamespaced := firstSlash != -1
// Reject host/repo:tag, but allow localhost:5000 and localhost:5000/foo.
if isNamespaced && strings.ContainsRune(key[firstSlash+1:], ':') {
return false, fmt.Errorf(`key %s contains a ':' character after host[:port]`, key)
}
// check if the provided key contains one or more subpaths.
return isNamespaced, nil
}
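// Examples of keys accepted and rejected by these rules (hypothetical values):
//
//	validateKey("quay.io")             // ok, not namespaced
//	validateKey("localhost:5000")      // ok, a port on the host is fine
//	validateKey("quay.io/ns/repo")     // ok, namespaced
//	validateKey("https://quay.io")     // error: http[s]:// prefix
//	validateKey("quay.io/ns/repo:tag") // error: ':' after host[:port]
//	validateKey("quay.io/repo@digest") // error: '@' character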

1
vendor/go.podman.io/image/v5/pkg/strslice/README.md generated vendored Normal file
View File

@@ -0,0 +1 @@
This package was replicated from [github.com/docker/docker v17.04.0-ce](https://github.com/docker/docker/tree/v17.04.0-ce/api/types/strslice).

30
vendor/go.podman.io/image/v5/pkg/strslice/strslice.go generated vendored Normal file
View File

@@ -0,0 +1,30 @@
package strslice
import "encoding/json"
// StrSlice represents a string or an array of strings.
// We need to override the json decoder to accept both options.
type StrSlice []string
// UnmarshalJSON decodes the byte slice whether it's a string or an array of
// strings. This method is needed to implement json.Unmarshaler.
func (e *StrSlice) UnmarshalJSON(b []byte) error {
if len(b) == 0 {
// With no input, we preserve the existing value by returning nil and
// leaving the target alone. This allows defining default values for
// the type.
return nil
}
p := make([]string, 0, 1)
if err := json.Unmarshal(b, &p); err != nil {
var s string
if err := json.Unmarshal(b, &s); err != nil {
return err
}
p = append(p, s)
}
*e = p
return nil
}
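// Both encodings decode to the same shape (a sketch):
//
//	var cmd StrSlice
//	_ = json.Unmarshal([]byte(`"/bin/sh"`), &cmd)        // cmd == StrSlice{"/bin/sh"}
//	_ = json.Unmarshal([]byte(`["/bin/sh","-c"]`), &cmd) // cmd == StrSlice{"/bin/sh", "-c"}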

View File

@@ -0,0 +1,11 @@
//go:build !freebsd
package sysregistriesv2
// builtinRegistriesConfPath is the path to the registry configuration file.
// DO NOT change this, instead see systemRegistriesConfPath above.
const builtinRegistriesConfPath = "/etc/containers/registries.conf"
// builtinRegistriesConfDirPath is the path to the registry configuration directory.
// DO NOT change this, instead see systemRegistriesConfDirectoryPath above.
const builtinRegistriesConfDirPath = "/etc/containers/registries.conf.d"

View File

@@ -0,0 +1,11 @@
//go:build freebsd
package sysregistriesv2
// builtinRegistriesConfPath is the path to the registry configuration file.
// DO NOT change this, instead see systemRegistriesConfPath above.
const builtinRegistriesConfPath = "/usr/local/etc/containers/registries.conf"
// builtinRegistriesConfDirPath is the path to the registry configuration directory.
// DO NOT change this, instead see systemRegistriesConfDirectoryPath above.
const builtinRegistriesConfDirPath = "/usr/local/etc/containers/registries.conf.d"

View File

@@ -0,0 +1,353 @@
package sysregistriesv2
import (
"fmt"
"maps"
"os"
"path/filepath"
"reflect"
"strings"
"github.com/BurntSushi/toml"
"github.com/sirupsen/logrus"
"go.podman.io/image/v5/docker/reference"
"go.podman.io/image/v5/internal/multierr"
"go.podman.io/image/v5/internal/rootless"
"go.podman.io/image/v5/types"
"go.podman.io/storage/pkg/homedir"
"go.podman.io/storage/pkg/lockfile"
)
// defaultShortNameMode is the default mode of registries.conf files if the
// corresponding field is left empty.
const defaultShortNameMode = types.ShortNameModePermissive
// userShortNamesFile is the user-specific config file to store aliases.
var userShortNamesFile = filepath.FromSlash("containers/short-name-aliases.conf")
// shortNameAliasesConfPath returns the path to the machine-generated
// short-name-aliases.conf file.
func shortNameAliasesConfPath(ctx *types.SystemContext) (string, error) {
if ctx != nil && len(ctx.UserShortNameAliasConfPath) > 0 {
return ctx.UserShortNameAliasConfPath, nil
}
if rootless.GetRootlessEUID() == 0 {
// Root user or in a non-conforming user NS
return filepath.Join("/var/cache", userShortNamesFile), nil
}
// Rootless user
cacheRoot, err := homedir.GetCacheHome()
if err != nil {
return "", err
}
return filepath.Join(cacheRoot, userShortNamesFile), nil
}
// shortNameAliasConf is a subset of the `V2RegistriesConf` format. It's used in the
// software-maintained `userShortNamesFile`.
type shortNameAliasConf struct {
// A map for aliasing short names to their fully-qualified image
// reference counterparts.
// Note that Aliases is niled after being loaded from a file.
Aliases map[string]string `toml:"aliases"`
// If you add any field, make sure to update nonempty() below.
}
// nonempty returns true if config contains at least one configuration entry.
func (c *shortNameAliasConf) nonempty() bool {
copy := *c // A shallow copy
if copy.Aliases != nil && len(copy.Aliases) == 0 {
copy.Aliases = nil
}
return !reflect.DeepEqual(copy, shortNameAliasConf{})
}
// alias combines the parsed value of an alias with the config file it has been
// specified in. The config file is crucial for an improved user experience
// such that users are able to resolve potential pull errors.
type alias struct {
// The parsed value of an alias. May be nil if set to "" in a config.
value reference.Named
// The config file the alias originates from.
configOrigin string
}
// shortNameAliasCache is the result of parsing shortNameAliasConf,
// pre-processed for faster usage.
type shortNameAliasCache struct {
// Note that an alias value may be nil iff it's set as an empty string
// in the config.
namedAliases map[string]alias
}
// ResolveShortNameAlias performs an alias resolution of the specified name.
// The user-specific short-name-aliases.conf has precedence over aliases in the
// assembled registries.conf. It returns the possibly resolved alias or nil, a
// human-readable description of the config where the alias is specified, and
// an error. The origin of the config file is crucial for an improved user
// experience such that users are able to resolve potential pull errors.
// Almost all callers should use pkg/shortnames instead.
//
// Note that it's the caller's responsibility to pass only a repository
// (reference.IsNameOnly) as the short name.
func ResolveShortNameAlias(ctx *types.SystemContext, name string) (reference.Named, string, error) {
if err := validateShortName(name); err != nil {
return nil, "", err
}
confPath, lock, err := shortNameAliasesConfPathAndLock(ctx)
if err != nil {
return nil, "", err
}
// Acquire the lock as a reader to allow for multiple routines in the
// same process space to read simultaneously.
lock.RLock()
defer lock.Unlock()
_, aliasCache, err := loadShortNameAliasConf(confPath)
if err != nil {
return nil, "", err
}
// First look up the short-name-aliases.conf. Note that a value may be
// nil iff it's set as an empty string in the config.
alias, resolved := aliasCache.namedAliases[name]
if resolved {
return alias.value, alias.configOrigin, nil
}
config, err := getConfig(ctx)
if err != nil {
return nil, "", err
}
alias, resolved = config.aliasCache.namedAliases[name]
if resolved {
return alias.value, alias.configOrigin, nil
}
return nil, "", nil
}
// editShortNameAlias loads the aliases.conf file and changes it. If value is
// set, it adds the name-value pair as a new alias. Otherwise, it will remove
// name from the config.
func editShortNameAlias(ctx *types.SystemContext, name string, value *string) (retErr error) {
if err := validateShortName(name); err != nil {
return err
}
if value != nil {
if _, err := parseShortNameValue(*value); err != nil {
return err
}
}
confPath, lock, err := shortNameAliasesConfPathAndLock(ctx)
if err != nil {
return err
}
// Acquire the lock as a writer to prevent data corruption.
lock.Lock()
defer lock.Unlock()
// Load the short-name-alias.conf, add the specified name-value pair,
// and write it back to the file.
conf, _, err := loadShortNameAliasConf(confPath)
if err != nil {
return err
}
if conf.Aliases == nil { // Ensure we have a map to update.
conf.Aliases = make(map[string]string)
}
if value != nil {
conf.Aliases[name] = *value
} else {
// If the name does not exist, throw an error.
if _, exists := conf.Aliases[name]; !exists {
return fmt.Errorf("short-name alias %q not found in %q: please check registries.conf files", name, confPath)
}
delete(conf.Aliases, name)
}
f, err := os.OpenFile(confPath, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0600)
if err != nil {
return err
}
// since we are writing to this file, make sure we handle err on Close()
defer func() {
closeErr := f.Close()
if retErr == nil {
retErr = closeErr
}
}()
encoder := toml.NewEncoder(f)
return encoder.Encode(conf)
}
// AddShortNameAlias adds the specified name-value pair as a new alias to the
// user-specific aliases.conf. It may override an existing alias for `name`.
//
// Note that it's the caller's responsibility to pass only a repository
// (reference.IsNameOnly) as the short name.
func AddShortNameAlias(ctx *types.SystemContext, name string, value string) error {
return editShortNameAlias(ctx, name, &value)
}
// RemoveShortNameAlias clears the alias for the specified name. It throws an
// error in case name does not exist in the machine-generated
// short-name-alias.conf. In such case, the alias must be specified in one of
// the registries.conf files, which is the users' responsibility.
//
// Note that it's the caller's responsibility to pass only a repository
// (reference.IsNameOnly) as the short name.
func RemoveShortNameAlias(ctx *types.SystemContext, name string) error {
return editShortNameAlias(ctx, name, nil)
}
// parseShortNameValue parses the specified alias into a reference.Named. The alias is
// expected to not be tagged or carry a digest and *must* include a
// domain/registry.
//
// Note that the returned reference is always normalized.
func parseShortNameValue(alias string) (reference.Named, error) {
ref, err := reference.Parse(alias)
if err != nil {
return nil, fmt.Errorf("parsing alias %q: %w", alias, err)
}
if _, ok := ref.(reference.Digested); ok {
return nil, fmt.Errorf("invalid alias %q: must not contain digest", alias)
}
if _, ok := ref.(reference.Tagged); ok {
return nil, fmt.Errorf("invalid alias %q: must not contain tag", alias)
}
named, ok := ref.(reference.Named)
if !ok {
return nil, fmt.Errorf("invalid alias %q: must contain registry and repository", alias)
}
registry := reference.Domain(named)
if !strings.ContainsAny(registry, ".:") && registry != "localhost" {
return nil, fmt.Errorf("invalid alias %q: must contain registry and repository", alias)
}
// A final parse to make sure that docker.io references are correctly
// normalized (e.g., docker.io/alpine to docker.io/library/alpine).
named, err = reference.ParseNormalizedNamed(alias)
return named, err
}
// validateShortName parses the specified `name` of an alias (i.e., the left-hand
// side) and checks if it's a short name and does not include a tag or digest.
func validateShortName(name string) error {
repo, err := reference.Parse(name)
if err != nil {
return fmt.Errorf("cannot parse short name: %q: %w", name, err)
}
if _, ok := repo.(reference.Digested); ok {
return fmt.Errorf("invalid short name %q: must not contain digest", name)
}
if _, ok := repo.(reference.Tagged); ok {
return fmt.Errorf("invalid short name %q: must not contain tag", name)
}
named, ok := repo.(reference.Named)
if !ok {
return fmt.Errorf("invalid short name %q: no name", name)
}
registry := reference.Domain(named)
if strings.ContainsAny(registry, ".:") || registry == "localhost" {
return fmt.Errorf("invalid short name %q: must not contain registry", name)
}
return nil
}
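// Contrasting the two sides of an alias entry (hypothetical names):
//
//	validateShortName("alpine")                     // ok: a short name must not include a registry
//	validateShortName("docker.io/library/alpine")   // error: must not contain registry
//	parseShortNameValue("docker.io/library/alpine") // ok: a value must include a registry
//	parseShortNameValue("alpine")                   // error: must contain registry and repository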
// newShortNameAliasCache parses shortNameAliasConf and returns the corresponding internal
// representation.
func newShortNameAliasCache(path string, conf *shortNameAliasConf) (*shortNameAliasCache, error) {
res := shortNameAliasCache{
namedAliases: make(map[string]alias),
}
errs := []error{}
for name, value := range conf.Aliases {
if err := validateShortName(name); err != nil {
errs = append(errs, err)
}
// Empty right-hand side values in config files allow resetting
// an alias set in a previously loaded config. This way, drop-in
// config files from registries.conf.d can reset potentially
// misconfigured aliases.
if value == "" {
res.namedAliases[name] = alias{nil, path}
continue
}
named, err := parseShortNameValue(value)
if err != nil {
// We want to report *all* malformed entries to avoid a
// whack-a-mole for the user.
errs = append(errs, err)
} else {
res.namedAliases[name] = alias{named, path}
}
}
if len(errs) > 0 {
return nil, multierr.Format("", "\n", "", errs)
}
return &res, nil
}
// updateWithConfigurationFrom updates c with configuration from updates.
// In case of conflict, updates is preferred.
func (c *shortNameAliasCache) updateWithConfigurationFrom(updates *shortNameAliasCache) {
maps.Copy(c.namedAliases, updates.namedAliases)
}
func loadShortNameAliasConf(confPath string) (*shortNameAliasConf, *shortNameAliasCache, error) {
conf := shortNameAliasConf{}
meta, err := toml.DecodeFile(confPath, &conf)
if err != nil && !os.IsNotExist(err) {
// It's okay if the config doesn't exist. Other errors are not.
return nil, nil, fmt.Errorf("loading short-name aliases config file %q: %w", confPath, err)
}
if keys := meta.Undecoded(); len(keys) > 0 {
logrus.Debugf("Failed to decode keys %q from %q", keys, confPath)
}
// Even if we don't always need the cache, building it here validates the machine-generated config. The
// file could still be corrupted by another process or user.
cache, err := newShortNameAliasCache(confPath, &conf)
if err != nil {
return nil, nil, fmt.Errorf("loading short-name aliases config file %q: %w", confPath, err)
}
return &conf, cache, nil
}
func shortNameAliasesConfPathAndLock(ctx *types.SystemContext) (string, *lockfile.LockFile, error) {
shortNameAliasesConfPath, err := shortNameAliasesConfPath(ctx)
if err != nil {
return "", nil, err
}
// Make sure the path to file exists.
if err := os.MkdirAll(filepath.Dir(shortNameAliasesConfPath), 0700); err != nil {
return "", nil, err
}
lockPath := shortNameAliasesConfPath + ".lock"
locker, err := lockfile.GetLockFile(lockPath)
return shortNameAliasesConfPath, locker, err
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,101 @@
package tlsclientconfig
import (
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"net"
"net/http"
"os"
"path/filepath"
"slices"
"strings"
"time"
"github.com/sirupsen/logrus"
)
// SetupCertificates opens all .crt, .cert, and .key files in dir and appends / loads certs and key pairs as appropriate to tlsc
func SetupCertificates(dir string, tlsc *tls.Config) error {
logrus.Debugf("Looking for TLS certificates and private keys in %s", dir)
fs, err := os.ReadDir(dir)
if err != nil {
if os.IsNotExist(err) {
return nil
}
if os.IsPermission(err) {
logrus.Debugf("Skipping scan of %s due to permission error: %v", dir, err)
return nil
}
return err
}
for _, f := range fs {
fullPath := filepath.Join(dir, f.Name())
if strings.HasSuffix(f.Name(), ".crt") {
logrus.Debugf(" crt: %s", fullPath)
data, err := os.ReadFile(fullPath)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
// file must have been removed between the directory listing
// and the open call; ignore that, as it is an expected race
continue
}
return err
}
if tlsc.RootCAs == nil {
systemPool, err := x509.SystemCertPool()
if err != nil {
return fmt.Errorf("unable to get system cert pool: %w", err)
}
tlsc.RootCAs = systemPool
}
tlsc.RootCAs.AppendCertsFromPEM(data)
}
if base, ok := strings.CutSuffix(f.Name(), ".cert"); ok {
certName := f.Name()
keyName := base + ".key"
logrus.Debugf(" cert: %s", fullPath)
if !hasFile(fs, keyName) {
return fmt.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
}
cert, err := tls.LoadX509KeyPair(filepath.Join(dir, certName), filepath.Join(dir, keyName))
if err != nil {
return err
}
tlsc.Certificates = append(slices.Clone(tlsc.Certificates), cert)
}
if base, ok := strings.CutSuffix(f.Name(), ".key"); ok {
keyName := f.Name()
certName := base + ".cert"
logrus.Debugf(" key: %s", fullPath)
if !hasFile(fs, certName) {
return fmt.Errorf("missing client certificate %s for key %s", certName, keyName)
}
}
}
return nil
}
func hasFile(files []os.DirEntry, name string) bool {
return slices.ContainsFunc(files, func(f os.DirEntry) bool {
return f.Name() == name
})
}
// NewTransport creates a default transport.
func NewTransport() *http.Transport {
direct := &net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}
tr := &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: direct.DialContext,
TLSHandshakeTimeout: 10 * time.Second,
IdleConnTimeout: 90 * time.Second,
MaxIdleConns: 100,
}
return tr
}
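Together, the two exported functions above cover the usual client setup: load per-registry certificates into a tls.Config, then attach that config to a fresh transport. A minimal sketch; the certificate directory path is hypothetical:

package main

import (
	"crypto/tls"
	"net/http"

	"go.podman.io/image/v5/pkg/tlsclientconfig"
)

func main() {
	tlsc := &tls.Config{}
	if err := tlsclientconfig.SetupCertificates("/etc/containers/certs.d/registry.example.com", tlsc); err != nil {
		panic(err)
	}
	tr := tlsclientconfig.NewTransport()
	tr.TLSClientConfig = tlsc
	client := &http.Client{Transport: tr}
	_ = client // ready for registry requests
}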

36
vendor/go.podman.io/image/v5/transports/stub.go generated vendored Normal file
View File

@@ -0,0 +1,36 @@
package transports
import (
"fmt"
"go.podman.io/image/v5/types"
)
// stubTransport is an implementation of types.ImageTransport which has a name, but rejects any references with “the transport $name: is not supported in this build”.
type stubTransport string
// NewStubTransport returns an implementation of types.ImageTransport which has a name, but rejects any references with “the transport $name: is not supported in this build”.
func NewStubTransport(name string) types.ImageTransport {
return stubTransport(name)
}
// Name returns the name of the transport, which must be unique among other transports.
func (s stubTransport) Name() string {
return string(s)
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (s stubTransport) ParseReference(reference string) (types.ImageReference, error) {
return nil, fmt.Errorf(`The transport "%s:" is not supported in this build`, string(s))
}
// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes key
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched; it can "only" cause user confusion.
// The scope passed to this function will not be ""; that value is always allowed.
func (s stubTransport) ValidatePolicyConfigurationScope(scope string) error {
// Allowing any reference in here allows tools with some transports stubbed-out to still
// use signature verification policies which refer to these stubbed-out transports.
// See also the treatment of unknown transports in policyTransportScopesWithTransport.UnmarshalJSON .
return nil
}
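A build that compiles out a transport can still recognize its name, e.g. in signature policies, by registering a stub under that name. A minimal sketch, assuming the stub is registered before any lookup:

package main

import (
	"fmt"

	"go.podman.io/image/v5/transports"
)

func main() {
	transports.Register(transports.NewStubTransport("ostree"))
	_, err := transports.Get("ostree").ParseReference("repo@/some/path")
	fmt.Println(err) // The transport "ostree:" is not supported in this build
}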

90
vendor/go.podman.io/image/v5/transports/transports.go generated vendored Normal file
View File

@@ -0,0 +1,90 @@
package transports
import (
"fmt"
"sort"
"sync"
"go.podman.io/image/v5/internal/set"
"go.podman.io/image/v5/types"
)
// knownTransports is a registry of known ImageTransport instances.
type knownTransports struct {
transports map[string]types.ImageTransport
mu sync.Mutex
}
func (kt *knownTransports) Get(k string) types.ImageTransport {
kt.mu.Lock()
t := kt.transports[k]
kt.mu.Unlock()
return t
}
func (kt *knownTransports) Remove(k string) {
kt.mu.Lock()
delete(kt.transports, k)
kt.mu.Unlock()
}
func (kt *knownTransports) Add(t types.ImageTransport) {
kt.mu.Lock()
defer kt.mu.Unlock()
name := t.Name()
	if existing := kt.transports[name]; existing != nil {
panic(fmt.Sprintf("Duplicate image transport name %s", name))
}
kt.transports[name] = t
}
var kt *knownTransports
func init() {
kt = &knownTransports{
transports: make(map[string]types.ImageTransport),
}
}
// Get returns the transport specified by name or nil when unavailable.
func Get(name string) types.ImageTransport {
return kt.Get(name)
}
// Delete deletes a transport from the registered transports.
func Delete(name string) {
kt.Remove(name)
}
// Register registers a transport.
func Register(t types.ImageTransport) {
kt.Add(t)
}
// ImageName converts a types.ImageReference into a URL-like image name, which MUST be such that
// ParseImageName(ImageName(reference)) returns an equivalent reference.
//
// This is the generally recommended way to refer to images in the UI.
//
// NOTE: The returned string is not promised to be equal to the original input to ParseImageName;
// e.g. default attribute values omitted by the user may be filled in the return value, or vice versa.
func ImageName(ref types.ImageReference) string {
return ref.Transport().Name() + ":" + ref.StringWithinTransport()
}
var deprecatedTransports = set.NewWithValues("atomic", "ostree")
// ListNames returns a list of non-deprecated transport names.
// Deprecated transports can be used, but are not presented to users.
func ListNames() []string {
kt.mu.Lock()
defer kt.mu.Unlock()
var names []string
for _, transport := range kt.transports {
if !deprecatedTransports.Contains(transport.Name()) {
names = append(names, transport.Name())
}
}
sort.Strings(names)
return names
}
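The registry above is filled by the transport packages' init() functions; linking in the alltransports helper package registers the standard set. A minimal sketch of the ImageName round-trip and ListNames, assuming the docker transport is linked in:

package main

import (
	"fmt"

	"go.podman.io/image/v5/transports"
	"go.podman.io/image/v5/transports/alltransports"
)

func main() {
	ref, err := alltransports.ParseImageName("docker://docker.io/library/alpine:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println(transports.ImageName(ref)) // round-trips to a parseable name
	fmt.Println(transports.ListNames())    // non-deprecated transport names
}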

731
vendor/go.podman.io/image/v5/types/types.go generated vendored Normal file
View File

@@ -0,0 +1,731 @@
package types
import (
"context"
"io"
"net/url"
"time"
digest "github.com/opencontainers/go-digest"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"go.podman.io/image/v5/docker/reference"
compression "go.podman.io/image/v5/pkg/compression/types"
)
// ImageTransport is a top-level namespace for ways to store/load an image.
// It should generally correspond to ImageSource/ImageDestination implementations.
//
// Note that ImageTransport is based on "ways the users refer to image storage", not necessarily on the underlying physical transport.
// For example, all Docker References would be used within a single "docker" transport, regardless of whether the images are pulled over HTTP or HTTPS
// (or, even, IPv4 or IPv6).
//
// OTOH, all images using the same transport should (apart from versions of the image format) be interoperable.
// For example, several different ImageTransport implementations may be based on local filesystem paths,
// but using completely different formats for the contents of that path (a single tar file, a directory containing tarballs, a fully expanded container filesystem, ...)
//
// See also transports.KnownTransports.
type ImageTransport interface {
// Name returns the name of the transport, which must be unique among other transports.
Name() string
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
ParseReference(reference string) (ImageReference, error)
	// ValidatePolicyConfigurationScope checks that scope is a valid name for a signature.PolicyTransportScopes key
	// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
	// It is acceptable to allow an invalid value which will never be matched; it can "only" cause user confusion.
	// The scope passed to this function will not be ""; that value is always allowed.
ValidatePolicyConfigurationScope(scope string) error
}
// ImageReference is an abstracted way to refer to an image location, namespaced within an ImageTransport.
//
// The object should preferably be immutable after creation, with any parsing/state-dependent resolving happening
// within an ImageTransport.ParseReference() or equivalent API creating the reference object.
// That's also why the various identification/formatting methods of this type do not support returning errors.
//
// WARNING: While this design freezes the content of the reference within this process, it can not freeze the outside
// world: paths may be replaced by symlinks elsewhere, HTTP APIs may start returning different results, and so on.
type ImageReference interface {
Transport() ImageTransport
// StringWithinTransport returns a string representation of the reference, which MUST be such that
// reference.Transport().ParseReference(reference.StringWithinTransport()) returns an equivalent reference.
// NOTE: The returned string is not promised to be equal to the original input to ParseReference;
// e.g. default attribute values omitted by the user may be filled in the return value, or vice versa.
// WARNING: Do not use the return value in the UI to describe an image, it does not contain the Transport().Name() prefix;
// instead, see transports.ImageName().
StringWithinTransport() string
// DockerReference returns a Docker reference associated with this reference
// (fully explicit, i.e. !reference.IsNameOnly, but reflecting user intent,
// not e.g. after redirect or alias processing), or nil if unknown/not applicable.
DockerReference() reference.Named
// PolicyConfigurationIdentity returns a string representation of the reference, suitable for policy lookup.
// This MUST reflect user intent, not e.g. after processing of third-party redirects or aliases;
// The value SHOULD be fully explicit about its semantics, with no hidden defaults, AND canonical
// (i.e. various references with exactly the same semantics should return the same configuration identity)
// It is fine for the return value to be equal to StringWithinTransport(), and it is desirable but
// not required/guaranteed that it will be a valid input to Transport().ParseReference().
// Returns "" if configuration identities for these references are not supported.
PolicyConfigurationIdentity() string
// PolicyConfigurationNamespaces returns a list of other policy configuration namespaces to search
// for if explicit configuration for PolicyConfigurationIdentity() is not set. The list will be processed
// in order, terminating on first match, and an implicit "" is always checked at the end.
// It is STRONGLY recommended for the first element, if any, to be a prefix of PolicyConfigurationIdentity(),
// and each following element to be a prefix of the element preceding it.
PolicyConfigurationNamespaces() []string
// NewImage returns a types.ImageCloser for this reference, possibly specialized for this ImageTransport.
// The caller must call .Close() on the returned ImageCloser.
// NOTE: If any kind of signature verification should happen, build an UnparsedImage from the value returned by NewImageSource,
// verify that UnparsedImage, and convert it into a real Image via image.FromUnparsedImage.
// WARNING: This may not do the right thing for a manifest list, see image.FromSource for details.
NewImage(ctx context.Context, sys *SystemContext) (ImageCloser, error)
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
NewImageSource(ctx context.Context, sys *SystemContext) (ImageSource, error)
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
NewImageDestination(ctx context.Context, sys *SystemContext) (ImageDestination, error)
// DeleteImage deletes the named image from the registry, if supported.
DeleteImage(ctx context.Context, sys *SystemContext) error
}
// LayerCompression indicates if layers must be compressed, decompressed or preserved
type LayerCompression int
const (
	// PreserveOriginal indicates the layer must be preserved, i.e.
	// no compression or decompression.
PreserveOriginal LayerCompression = iota
// Decompress indicates the layer must be decompressed
Decompress
// Compress indicates the layer must be compressed
Compress
)
// LayerCrypto indicates if layers have been encrypted or decrypted or none
type LayerCrypto int
const (
	// PreserveOriginalCrypto indicates the layer must be preserved, i.e.
	// no encryption/decryption.
PreserveOriginalCrypto LayerCrypto = iota
// Encrypt indicates the layer is encrypted
Encrypt
// Decrypt indicates the layer is decrypted
Decrypt
)
// BlobInfo collects known information about a blob (layer/config).
// In some situations, some fields may be unknown, in others they may be mandatory; documenting an “unknown” value here does not override that.
type BlobInfo struct {
Digest digest.Digest // "" if unknown.
Size int64 // -1 if unknown
URLs []string
Annotations map[string]string
MediaType string
// NOTE: The following fields contain desired _edits_ to blob infos.
	// Conceptually they don't belong in the BlobInfo object at all;
	// the edits should be provided specifically as parameters to the edit implementation.
	// We can't remove the fields without breaking compatibility, but don't
	// add any more.
// CompressionOperation is used in Image.UpdateLayerInfos to instruct
// whether the original layer's "compressed or not" should be preserved,
// possibly while changing the compression algorithm from one to another,
// or if it should be changed to compressed or decompressed.
	// The field defaults to preserving the original layer's compression state.
	// TODO: Remove together with CryptoOperation in a re-design that moves
	// these fields out of BlobInfo.
CompressionOperation LayerCompression
// CompressionAlgorithm is used in Image.UpdateLayerInfos to set the correct
// MIME type for compressed layers (e.g., gzip or zstd). This field MUST be
// set when `CompressionOperation == Compress` and MAY be set when
// `CompressionOperation == PreserveOriginal` and the compression type is
// being changed for an already-compressed layer.
CompressionAlgorithm *compression.Algorithm
	// CryptoOperation is used in Image.UpdateLayerInfos to instruct
	// whether the original layer was encrypted/decrypted.
	// TODO: Remove together with CompressionOperation in a re-design that
	// moves these fields out of BlobInfo.
CryptoOperation LayerCrypto
// Before adding any fields to this struct, read the NOTE above.
}
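The edit fields above are consumed by Image.UpdatedImage via ManifestUpdateOptions.LayerInfos. A minimal sketch requesting zstd recompression of every layer; it assumes pkg/compression exports the Zstd algorithm, and the actual blob rewriting is left to the copy pipeline:

package main

import (
	"context"

	"go.podman.io/image/v5/pkg/compression"
	"go.podman.io/image/v5/types"
)

// recompress asks for every layer to be recompressed with zstd.
func recompress(ctx context.Context, img types.Image) (types.Image, error) {
	layers := img.LayerInfos()
	for i := range layers {
		layers[i].CompressionOperation = types.Compress
		layers[i].CompressionAlgorithm = &compression.Zstd
	}
	// Returns a view of img whose manifest reflects the requested edits.
	return img.UpdatedImage(ctx, types.ManifestUpdateOptions{LayerInfos: layers})
}

func main() {}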
// BICTransportScope encapsulates transport-dependent representation of a “scope” where blobs are or are not present.
// BlobInfocache.RecordKnownLocations / BlobInfocache.CandidateLocations record data about blobs keyed by (scope, digest).
// The scope will typically be similar to an ImageReference, or a superset of it within which blobs are reusable.
//
// NOTE: The contents of this structure may be recorded in a persistent file, possibly shared across different
// tools which use different versions of the transport. Allow for reasonable backward/forward compatibility,
// at least by not failing hard when encountering unknown data.
type BICTransportScope struct {
Opaque string
}
// BICLocationReference encapsulates transport-dependent representation of a blob location within a BICTransportScope.
// Each transport can store arbitrary data using BlobInfoCache.RecordKnownLocation, and ImageDestination.TryReusingBlob
// can look it up using BlobInfoCache.CandidateLocations.
//
// NOTE: The contents of this structure may be recorded in a persistent file, possibly shared across different
// tools which use different versions of the transport. Allow for reasonable backward/forward compatibility,
// at least by not failing hard when encountering unknown data.
type BICLocationReference struct {
Opaque string
}
// BICReplacementCandidate is an item returned by BlobInfoCache.CandidateLocations.
type BICReplacementCandidate struct {
Digest digest.Digest
Location BICLocationReference
}
// BlobInfoCache records data useful for reusing blobs, or substituting equivalent ones, to avoid unnecessary blob copies.
//
// It records two kinds of data:
//
// - Sets of corresponding digest vs. uncompressed digest ("DiffID") pairs:
// One of the two digests is known to be uncompressed, and a single uncompressed digest may correspond to more than one compressed digest.
// This allows matching compressed layer blobs to existing local uncompressed layers (to avoid unnecessary download and decompression),
//	or uncompressed layer blobs to existing remote compressed layers (to avoid unnecessary compression and upload).
//
// It is allowed to record an (uncompressed digest, the same uncompressed digest) correspondence, to express that the digest is known
// to be uncompressed (i.e. that a conversion from schema1 does not have to decompress the blob to compute a DiffID value).
//
// This mapping is primarily maintained in generic copy.Image code, but transports may want to contribute more data points if they independently
// compress/decompress blobs for their own purposes.
//
// - Known blob locations, managed by individual transports:
// The transports call RecordKnownLocation when encountering a blob that could possibly be reused (typically in GetBlob/PutBlob/TryReusingBlob),
// recording transport-specific information that allows the transport to reuse the blob in the future;
// then, TryReusingBlob implementations can call CandidateLocations to look up previously recorded blob locations that could be reused.
//
// Each transport defines its own “scopes” within which blob reuse is possible (e.g., in the docker/distribution case, blobs
// can be directly reused within a registry, or mounted across registries within a registry server).
//
// None of the methods return an error indication: errors when neither reading from, nor writing to, the cache, should be fatal;
// users of the cache should just fall back to copying the blobs the usual way.
//
// The BlobInfoCache interface is deprecated. Consumers of this library should use one of the implementations provided by
// subpackages of the library's "pkg/blobinfocache" package in preference to implementing the interface on their own.
type BlobInfoCache interface {
// UncompressedDigest returns an uncompressed digest corresponding to anyDigest.
// May return anyDigest if it is known to be uncompressed.
// Returns "" if nothing is known about the digest (it may be compressed or uncompressed).
UncompressedDigest(anyDigest digest.Digest) digest.Digest
	// RecordDigestUncompressedPair records that uncompressed is the uncompressed version of anyDigest.
	// It's allowed for anyDigest == uncompressed.
	// WARNING: Only call this for LOCALLY VERIFIED data; don't record a digest pair just because some remote author claims so (e.g.
// because a manifest/config pair exists); otherwise the cache could be poisoned and allow substituting unexpected blobs.
// (Eventually, the DiffIDs in image config could detect the substitution, but that may be too late, and not all image formats contain that data.)
RecordDigestUncompressedPair(anyDigest digest.Digest, uncompressed digest.Digest)
// RecordKnownLocation records that a blob with the specified digest exists within the specified (transport, scope) scope,
// and can be reused given the opaque location data.
RecordKnownLocation(transport ImageTransport, scope BICTransportScope, digest digest.Digest, location BICLocationReference)
// CandidateLocations returns a prioritized, limited, number of blobs and their locations that could possibly be reused
	// within the specified (transport, scope) scope (if they still exist, which is not guaranteed).
//
// If !canSubstitute, the returned candidates will match the submitted digest exactly; if canSubstitute,
// data from previous RecordDigestUncompressedPair calls is used to also look up variants of the blob which have the same
// uncompressed digest.
CandidateLocations(transport ImageTransport, scope BICTransportScope, digest digest.Digest, canSubstitute bool) []BICReplacementCandidate
}
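As the deprecation note says, consumers normally obtain a ready-made implementation from pkg/blobinfocache instead of writing their own. A minimal sketch; a nil SystemContext selects the default cache location:

package main

import (
	"go.podman.io/image/v5/pkg/blobinfocache"
	"go.podman.io/image/v5/types"
)

func main() {
	var sys *types.SystemContext // nil selects the defaults
	cache := blobinfocache.DefaultCache(sys)
	_ = cache // pass to GetBlob/PutBlob/TryReusingBlob
}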
// ImageSource is a service, possibly remote (= slow), to download components of a single image or a named image set (manifest list).
// This is primarily useful for copying images around; for examining their properties, Image (below)
// is usually more useful.
// Each ImageSource should eventually be closed by calling Close().
//
// WARNING: Various methods which return an object identified by digest generally do not
// validate that the returned data actually matches that digest; this is the caller's responsibility.
// See the individual methods' documentation for potentially more details.
type ImageSource interface {
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
Reference() ImageReference
// Close removes resources associated with an initialized ImageSource, if any.
Close() error
// GetManifest returns the image's manifest along with its MIME type (which may be empty when it can't be determined but the manifest is available).
// It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve (when the primary manifest is a manifest list);
// this never happens if the primary manifest is not a manifest list (e.g. if the source never returns manifest lists).
//
// WARNING: This is a raw access to the data as provided by the source; if the reference contains a digest, or instanceDigest is set,
// callers must enforce the digest match themselves, typically by using image.UnparsedInstance to access the manifest instead
// of calling this directly. (Compare the generic warning applicable to all of the [ImageSource] interface.)
GetManifest(ctx context.Context, instanceDigest *digest.Digest) ([]byte, string, error)
	// GetBlob returns a stream for the specified blob, and the blob's size (or -1 if unknown).
// The Digest field in BlobInfo is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
// May update BlobInfoCache, preferably after it knows for certain that a blob truly exists at a specific location.
//
// WARNING: This is a raw access to the data as provided by the source; callers must validate the contents
	// against the blob's digest themselves. (Compare the generic warning applicable to all of the [ImageSource] interface.)
GetBlob(context.Context, BlobInfo, BlobInfoCache) (io.ReadCloser, int64, error)
// HasThreadSafeGetBlob indicates whether GetBlob can be executed concurrently.
HasThreadSafeGetBlob() bool
// GetSignatures returns the image's signatures. It may use a remote (= slow) service.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve signatures for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error)
// LayerInfosForCopy returns either nil (meaning the values in the manifest are fine), or updated values for the layer
// blobsums that are listed in the image's manifest. If values are returned, they should be used when using GetBlob()
// to read the image's layers.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to retrieve BlobInfos for
// (when the primary manifest is a manifest list); this never happens if the primary manifest is not a manifest list
// (e.g. if the source never returns manifest lists).
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfosForCopy(ctx context.Context, instanceDigest *digest.Digest) ([]BlobInfo, error)
}
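The warnings above point to image.UnparsedInstance as the digest-validating way to read a manifest instead of raw GetManifest. A minimal sketch; the reference string is hypothetical and the nil SystemContext uses defaults:

package main

import (
	"context"
	"fmt"

	"go.podman.io/image/v5/image"
	"go.podman.io/image/v5/transports/alltransports"
)

func main() {
	ctx := context.Background()
	ref, err := alltransports.ParseImageName("docker://docker.io/library/alpine:latest")
	if err != nil {
		panic(err)
	}
	src, err := ref.NewImageSource(ctx, nil)
	if err != nil {
		panic(err)
	}
	defer src.Close()
	// UnparsedInstance enforces the digest check that raw GetManifest leaves to the caller.
	unparsed := image.UnparsedInstance(src, nil)
	manifest, mimeType, err := unparsed.Manifest(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(mimeType, len(manifest))
}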
// ImageDestination is a service, possibly remote (= slow), to store components of a single image.
//
// There is a specific required order for some of the calls:
// TryReusingBlob/PutBlob on the various blobs, if any, MUST be called before PutManifest (manifest references blobs, which may be created or compressed only at push time)
// PutSignatures, if called, MUST be called after PutManifest (signatures reference manifest contents)
// Finally, Commit MUST be called if the caller wants the image, as formed by the components saved above, to persist.
//
// Each ImageDestination should eventually be closed by calling Close().
type ImageDestination interface {
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
// e.g. it should use the public hostname instead of the result of resolving CNAMEs or following redirects.
Reference() ImageReference
// Close removes resources associated with an initialized ImageDestination, if any.
Close() error
	// SupportedManifestMIMETypes tells which manifest MIME types the destination supports.
	// If an empty slice or nil is returned, then any MIME type can be tried for upload.
SupportedManifestMIMETypes() []string
// SupportsSignatures returns an error (to be displayed to the user) if the destination certainly can't store signatures.
// Note: It is still possible for PutSignatures to fail if SupportsSignatures returns nil.
SupportsSignatures(ctx context.Context) error
// DesiredLayerCompression indicates the kind of compression to apply on layers
DesiredLayerCompression() LayerCompression
	// AcceptsForeignLayerURLs returns false iff foreign layers in the manifest should actually be
	// uploaded to the image destination; true otherwise.
AcceptsForeignLayerURLs() bool
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime architecture and OS. False otherwise.
MustMatchRuntimeOS() bool
// IgnoresEmbeddedDockerReference() returns true iff the destination does not care about Image.EmbeddedDockerReferenceConflicts(),
// and would prefer to receive an unmodified manifest instead of one modified for the destination.
// Does not make a difference if Reference().DockerReference() is nil.
IgnoresEmbeddedDockerReference() bool
// PutBlob writes contents of stream and returns data representing the result.
// inputInfo.Digest can be optionally provided if known; if provided, and stream is read to the end without error, the digest MUST match the stream contents.
// inputInfo.Size is the expected length of stream, if known.
// inputInfo.MediaType describes the blob format, if known.
// May update cache.
// WARNING: The contents of stream are being verified on the fly. Until stream.Read() returns io.EOF, the contents of the data SHOULD NOT be available
// to any other readers for download using the supplied digest.
// If stream.Read() at any time, ESPECIALLY at end of input, returns an error, PutBlob MUST 1) fail, and 2) delete any data stored so far.
PutBlob(ctx context.Context, stream io.Reader, inputInfo BlobInfo, cache BlobInfoCache, isConfig bool) (BlobInfo, error)
// HasThreadSafePutBlob indicates whether PutBlob can be executed concurrently.
HasThreadSafePutBlob() bool
// TryReusingBlob checks whether the transport already contains, or can efficiently reuse, a blob, and if so, applies it to the current destination
// (e.g. if the blob is a filesystem layer, this signifies that the changes it describes need to be applied again when composing a filesystem tree).
// info.Digest must not be empty.
	// If canSubstitute, TryReusingBlob can use an equivalent of the desired blob; in that case the returned info may not match the input.
// If the blob has been successfully reused, returns (true, info, nil); info must contain at least a digest and size, and may
// include CompressionOperation and CompressionAlgorithm fields to indicate that a change to the compression type should be
// reflected in the manifest that will be written.
	// If the transport cannot reuse the requested blob, TryReusingBlob returns (false, {}, nil); it returns a non-nil error only on an unexpected failure.
// May use and/or update cache.
TryReusingBlob(ctx context.Context, info BlobInfo, cache BlobInfoCache, canSubstitute bool) (bool, BlobInfo, error)
// PutManifest writes manifest to the destination.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to write the manifest for
// (when the primary manifest is a manifest list); this should always be nil if the primary manifest is not a manifest list.
// It is expected but not enforced that the instanceDigest, when specified, matches the digest of `manifest` as generated
// by `manifest.Digest()`.
// FIXME? This should also receive a MIME type if known, to differentiate between schema versions.
	// If the destination is in principle available, but refuses this manifest type (e.g. it does not recognize the schema)
	// while it may accept a different manifest type, the returned error must be a ManifestTypeRejectedError.
PutManifest(ctx context.Context, manifest []byte, instanceDigest *digest.Digest) error
// PutSignatures writes a set of signatures to the destination.
// If instanceDigest is not nil, it contains a digest of the specific manifest instance to write or overwrite the signatures for
// (when the primary manifest is a manifest list); this should always be nil if the primary manifest is not a manifest list.
// MUST be called after PutManifest (signatures may reference manifest contents).
PutSignatures(ctx context.Context, signatures [][]byte, instanceDigest *digest.Digest) error
// Commit marks the process of storing the image as successful and asks for the image to be persisted.
// unparsedToplevel contains data about the top-level manifest of the source (which may be a single-arch image or a manifest list
// if PutManifest was only called for the single-arch image with instanceDigest == nil), primarily to allow lookups by the
// original manifest list digest, if desired.
// WARNING: This does not have any transactional semantics:
// - Uploaded data MAY be visible to others before Commit() is called
// - Uploaded data MAY be removed or MAY remain around if Close() is called without Commit() (i.e. rollback is allowed but not guaranteed)
Commit(ctx context.Context, unparsedToplevel UnparsedImage) error
}
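A minimal sketch of the required call order (blobs first, then the manifest, then signatures, then Commit); the layerBlob type and push helper are hypothetical:

package main

import (
	"context"
	"io"

	"go.podman.io/image/v5/types"
)

// layerBlob is a hypothetical pairing of a blob stream with its metadata.
type layerBlob struct {
	stream io.Reader
	info   types.BlobInfo
}

// push sketches the call order ImageDestination requires.
func push(ctx context.Context, dest types.ImageDestination, cache types.BlobInfoCache,
	layers []layerBlob, manifest []byte, sigs [][]byte, unparsed types.UnparsedImage) error {
	for _, l := range layers {
		if _, err := dest.PutBlob(ctx, l.stream, l.info, cache, false); err != nil {
			return err
		}
	}
	if err := dest.PutManifest(ctx, manifest, nil); err != nil { // after all blobs
		return err
	}
	if err := dest.PutSignatures(ctx, sigs, nil); err != nil { // after the manifest
		return err
	}
	return dest.Commit(ctx, unparsed) // nothing persists without Commit
}

func main() {}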
// ManifestTypeRejectedError is returned by ImageDestination.PutManifest if the destination is in principle available,
// refuses specifically this manifest type, but may accept a different manifest type.
type ManifestTypeRejectedError struct { // We only use a struct to allow a type assertion, without limiting the contents of the error otherwise.
Err error
}
func (e ManifestTypeRejectedError) Error() string {
return e.Err.Error()
}
// UnparsedImage is an Image-to-be; until it is verified and accepted, it only carries its identity and caches manifest and signature blobs.
// Thus, an UnparsedImage can be created from an ImageSource simply by fetching blobs without interpreting them,
// allowing cryptographic signature verification to happen first, before even fetching the manifest, or parsing anything else.
// This also makes the UnparsedImage→Image conversion an explicitly visible step.
//
// An UnparsedImage is a pair of (ImageSource, instance digest); it can represent either a manifest list or a single image instance.
//
// The UnparsedImage must not be used after the underlying ImageSource is Close()d.
type UnparsedImage interface {
// Reference returns the reference used to set up this source, _as specified by the user_
// (not as the image itself, or its underlying storage, claims). This can be used e.g. to determine which public keys are trusted for this image.
Reference() ImageReference
// Manifest is like ImageSource.GetManifest, but the result is cached; it is OK to call this however often you need.
Manifest(ctx context.Context) ([]byte, string, error)
// Signatures is like ImageSource.GetSignatures, but the result is cached; it is OK to call this however often you need.
Signatures(ctx context.Context) ([][]byte, error)
}
// Image is the primary API for inspecting properties of images.
// An Image is based on a pair of (ImageSource, instance digest); it can represent either a manifest list or a single image instance.
//
// The Image must not be used after the underlying ImageSource is Close()d.
type Image interface {
// Note that Reference may return nil in the return value of UpdatedImage!
UnparsedImage
// ConfigInfo returns a complete BlobInfo for the separate config object, or a BlobInfo{Digest:""} if there isn't a separate object.
// Note that the config object may not exist in the underlying storage in the return value of UpdatedImage! Use ConfigBlob() below.
ConfigInfo() BlobInfo
// ConfigBlob returns the blob described by ConfigInfo, if ConfigInfo().Digest != ""; nil otherwise.
// The result is cached; it is OK to call this however often you need.
ConfigBlob(context.Context) ([]byte, error)
// OCIConfig returns the image configuration as per OCI v1 image-spec. Information about
	// layers in the resulting configuration isn't guaranteed to be returned due to how
// old image manifests work (docker v2s1 especially).
OCIConfig(context.Context) (*v1.Image, error)
// LayerInfos returns a list of BlobInfos of layers referenced by this image, in order (the root layer first, and then successive layered layers).
// The Digest field is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfos() []BlobInfo
// LayerInfosForCopy returns either nil (meaning the values in the manifest are fine), or updated values for the layer blobsums that are listed in the image's manifest.
// The Digest field is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfosForCopy(context.Context) ([]BlobInfo, error)
	// EmbeddedDockerReferenceConflicts reports whether a Docker reference embedded in the manifest, if any, conflicts with destination ref.
// It returns false if the manifest does not embed a Docker reference.
// (This embedding unfortunately happens for Docker schema1, please do not add support for this in any new formats.)
EmbeddedDockerReferenceConflicts(ref reference.Named) bool
// Inspect returns various information for (skopeo inspect) parsed from the manifest and configuration.
Inspect(context.Context) (*ImageInspectInfo, error)
// UpdatedImageNeedsLayerDiffIDs returns true iff UpdatedImage(options) needs InformationOnly.LayerDiffIDs.
	// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
	// (most importantly it forces us to download the full layers even if they are already present at the destination).
UpdatedImageNeedsLayerDiffIDs(options ManifestUpdateOptions) bool
// UpdatedImage returns a types.Image modified according to options.
// Everything in options.InformationOnly should be provided, other fields should be set only if a modification is desired.
// This does not change the state of the original Image object.
// The returned error will be a manifest.ManifestLayerCompressionIncompatibilityError if
// manifests of type options.ManifestMIMEType can not include layers that are compressed
// in accordance with the CompressionOperation and CompressionAlgorithm specified in one
// or more options.LayerInfos items, though retrying with a different
// options.ManifestMIMEType or with different CompressionOperation+CompressionAlgorithm
// values might succeed.
UpdatedImage(ctx context.Context, options ManifestUpdateOptions) (Image, error)
// SupportsEncryption returns an indicator that the image supports encryption
//
// Deprecated: Initially used to determine if a manifest can be copied from a source manifest type since
// the process of updating a manifest between different manifest types was to update then convert.
// This resulted in some fields in the update being lost. This has been fixed by: https://github.com/containers/image/pull/836
SupportsEncryption(ctx context.Context) bool
// Size returns an approximation of the amount of disk space which is consumed by the image in its current
// location. If the size is not known, -1 will be returned.
Size() (int64, error)
}
// ImageCloser is an Image with a Close() method which must be called by the user.
// This is returned by ImageReference.NewImage, which transparently instantiates a types.ImageSource,
// to ensure that the ImageSource is closed.
type ImageCloser interface {
Image
// Close removes resources associated with an initialized ImageCloser.
Close() error
}
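A minimal sketch of the ImageCloser lifecycle around Inspect, assuming the docker transport is linked in via alltransports and using a hypothetical reference:

package main

import (
	"context"
	"fmt"

	"go.podman.io/image/v5/transports/alltransports"
)

func main() {
	ctx := context.Background()
	ref, err := alltransports.ParseImageName("docker://docker.io/library/alpine:latest")
	if err != nil {
		panic(err)
	}
	img, err := ref.NewImage(ctx, nil) // an ImageCloser
	if err != nil {
		panic(err)
	}
	defer img.Close() // closes the ImageSource underneath
	info, err := img.Inspect(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(info.Architecture, info.Os, len(info.Layers))
}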
// ManifestUpdateOptions is a way to pass named optional arguments to Image.UpdatedImage
type ManifestUpdateOptions struct {
LayerInfos []BlobInfo // Complete BlobInfos (size+digest+urls+annotations) which should replace the originals, in order (the root layer first, and then successive layered layers). BlobInfos' MediaType fields are ignored.
EmbeddedDockerReference reference.Named
ManifestMIMEType string
// The values below are NOT requests to modify the image; they provide optional context which may or may not be used.
InformationOnly ManifestUpdateInformation
}
// ManifestUpdateInformation is a component of ManifestUpdateOptions, named here
// only to make writing struct literals possible.
type ManifestUpdateInformation struct {
Destination ImageDestination // and yes, UpdatedImage may write to Destination (see the schema2 → schema1 conversion logic in image/docker_schema2.go)
LayerInfos []BlobInfo // Complete BlobInfos (size+digest) which have been uploaded, in order (the root layer first, and then successive layered layers)
LayerDiffIDs []digest.Digest // Digest values for the _uncompressed_ contents of the blobs which have been uploaded, in the same order.
}
// ImageInspectInfo is a set of metadata describing Docker images, primarily their manifest and configuration.
// The Tag field is a legacy field which is here just for the Docker v2s1 manifest. It won't be supported
// for other manifest types.
type ImageInspectInfo struct {
Tag string
Created *time.Time
DockerVersion string
Labels map[string]string
Architecture string
Variant string
Os string
Layers []string
LayersData []ImageInspectLayer
Env []string
Author string
}
// ImageInspectLayer is a set of metadata describing an image layer's details
type ImageInspectLayer struct {
MIMEType string // "" if unknown.
Digest digest.Digest
Size int64 // -1 if unknown.
Annotations map[string]string
}
// DockerAuthConfig contains authorization information for connecting to a registry.
// The values of Username and Password can be empty for accessing the registry anonymously
type DockerAuthConfig struct {
Username string
Password string
	// IdentityToken can be used as a refresh_token in place of username and
// password to obtain the bearer/access token in oauth2 flow. If identity
// token is set, password should not be set.
// Ref: https://docs.docker.com/registry/spec/auth/oauth/
IdentityToken string
}
// OptionalBool is a boolean with an additional undefined value, which is meant
// to be used in the context of user input to distinguish between a
// user-specified value and a default value.
type OptionalBool byte
const (
	// OptionalBoolUndefined indicates that the OptionalBool hasn't been set.
OptionalBoolUndefined OptionalBool = iota
// OptionalBoolTrue represents the boolean true.
OptionalBoolTrue
// OptionalBoolFalse represents the boolean false.
OptionalBoolFalse
)
// NewOptionalBool converts the input bool into either OptionalBoolTrue or
// OptionalBoolFalse. The function is meant to avoid boilerplate code for users.
func NewOptionalBool(b bool) OptionalBool {
o := OptionalBoolFalse
if b {
o = OptionalBoolTrue
}
return o
}
// ShortNameMode defines the mode of short-name resolution.
//
// The use of unqualified-search registries entails an ambiguity as it's
// unclear from which registry a given image, referenced by a short name, may
// be pulled.
//
// The ShortNameMode type defines how short names should resolve.
type ShortNameMode int
const (
ShortNameModeInvalid ShortNameMode = iota
// Use all configured unqualified-search registries without prompting
// the user.
ShortNameModeDisabled
// If stdout and stdin are a TTY, prompt the user to select a configured
// unqualified-search registry. Otherwise, use all configured
// unqualified-search registries.
//
// Note that if only one unqualified-search registry is set, it will be
// used without prompting.
ShortNameModePermissive
// Always prompt the user to select a configured unqualified-search
	// registry. Return an error if stdout or stdin is not a TTY, as
	// prompting isn't possible.
//
// Note that if only one unqualified-search registry is set, it will be
// used without prompting.
ShortNameModeEnforcing
)
// SystemContext allows parameterizing access to implicitly-accessed resources,
// like configuration files in /etc and users' login state in their home directory.
// Various components can share the same field only if their semantics is exactly
// the same; if in doubt, add a new field.
// It is always OK to pass nil instead of a SystemContext.
type SystemContext struct {
// If not "", prefixed to any absolute paths used by default by the library (e.g. in /etc/).
// Not used for any of the more specific path overrides available in this struct.
// Not used for any paths specified by users in config files (even if the location of the config file _was_ affected by it).
	// NOTE: If this is set, environment-variable overrides of paths are ignored (to keep the semantics simple: to create an /etc replacement,
	// just set RootForImplicitAbsolutePaths, and there is no need to worry about the environment).
// NOTE: This does NOT affect paths starting by $HOME.
RootForImplicitAbsolutePaths string
// === Global configuration overrides ===
// If not "", overrides the system's default path for signature.Policy configuration.
SignaturePolicyPath string
// If not "", overrides the system's default path for registries.d (Docker signature storage configuration)
RegistriesDirPath string
// Path to the system-wide registries configuration file
SystemRegistriesConfPath string
// Path to the system-wide registries configuration directory
SystemRegistriesConfDirPath string
// Path to the user-specific short-names configuration file
UserShortNameAliasConfPath string
// If set, short-name resolution in pkg/shortnames must follow the specified mode
ShortNameMode *ShortNameMode
// If set, short names will resolve in pkg/shortnames to docker.io only, and unqualified-search registries and
// short-name aliases in registries.conf are ignored. Note that this field is only intended to help enforce
// resolving to Docker Hub in the Docker-compatible REST API of Podman; it should never be used outside this
// specific context.
PodmanOnlyShortNamesIgnoreRegistriesConfAndForceDockerHub bool
// If not "", overrides the default path for the registry authentication file, but only new format files
AuthFilePath string
// if not "", overrides the default path for the registry authentication file, but with the legacy format;
// the code currently will by default look for legacy format files like .dockercfg in the $HOME dir;
// but in addition to the home dir, openshift may mount .dockercfg files (via secret mount)
// in locations other than the home dir; openshift components should then set this field in those cases;
// this field is ignored if `AuthFilePath` is set (we favor the newer format);
// only reading of this data is supported;
LegacyFormatAuthFilePath string
// If set, a path to a Docker-compatible "config.json" file containing credentials; and no other files are processed.
// This must not be set if AuthFilePath is set.
	// Only credentials and credential helpers in this file are processed, not any other configuration in this file.
DockerCompatAuthFilePath string
// If not "", overrides the use of platform.GOARCH when choosing an image or verifying architecture match.
ArchitectureChoice string
// If not "", overrides the use of platform.GOOS when choosing an image or verifying OS match.
OSChoice string
// If not "", overrides the use of detected ARM platform variant when choosing an image or verifying variant match.
VariantChoice string
// If not "", overrides the system's default directory containing a blob info cache.
BlobInfoCacheDir string
// Additional tags when creating or copying a docker-archive.
DockerArchiveAdditionalTags []reference.NamedTagged
// If not "", overrides the temporary directory to use for storing big files
BigFilesTemporaryDir string
// === OCI.Transport overrides ===
// If not "", a directory containing a CA certificate (ending with ".crt"),
// a client certificate (ending with ".cert") and a client certificate key
// (ending with ".key") used when downloading OCI image layers.
OCICertPath string
// Allow downloading OCI image layers over HTTP, or HTTPS with failed TLS verification. Note that this does not affect other TLS connections.
OCIInsecureSkipTLSVerify bool
// If not "", use a shared directory for storing blobs rather than within OCI layouts
OCISharedBlobDirPath string
	// If true, preserve uncompressed image layers when writing OCI images instead of compressing them
OCIAcceptUncompressedLayers bool
// === docker.Transport overrides ===
// If not "", a directory containing a CA certificate (ending with ".crt"),
// a client certificate (ending with ".cert") and a client certificate key
// (ending with ".key") used when talking to a container registry.
DockerCertPath string
	// If not "", overrides the system's default path for a directory containing host[:port] subdirectories with the same structure as DockerCertPath above.
// Ignored if DockerCertPath is non-empty.
DockerPerHostCertDirPath string
// Allow contacting container registries over HTTP, or HTTPS with failed TLS verification. Note that this does not affect other TLS connections.
DockerInsecureSkipTLSVerify OptionalBool
// if nil, the library tries to parse ~/.docker/config.json to retrieve credentials
// Ignored if DockerBearerRegistryToken is non-empty.
DockerAuthConfig *DockerAuthConfig
// if not "", the library uses this registry token to authenticate to the registry
DockerBearerRegistryToken string
	// if not "", a User-Agent header is added to each request when contacting a registry.
DockerRegistryUserAgent string
// if true, a V1 ping attempt isn't done to give users a better error. Default is false.
// Note that this field is used mainly to integrate containers/image into projectatomic/docker
// in order to not break any existing docker's integration tests.
// Deprecated: The V1 container registry detection is no longer performed, so setting this flag has no effect.
DockerDisableV1Ping bool
// If true, dockerImageDestination.SupportedManifestMIMETypes will omit the Schema1 media types from the supported list
DockerDisableDestSchema1MIMETypes bool
	// If true, the physical pull source of docker transport images is logged at info level
DockerLogMirrorChoice bool
// Directory to use for OSTree temporary files
//
// Deprecated: The OSTree transport has been removed.
OSTreeTmpDirPath string
	// If true, all blobs will have precomputed digests to ensure that layers already existing on the registry are not uploaded.
	// Note that this requires writing blobs to temporary files, and takes more time than the default behavior
	// when the digest for a blob is unknown.
DockerRegistryPushPrecomputeDigests bool
// DockerProxyURL specifies proxy configuration schema (like socks5://username:password@ip:port)
DockerProxyURL *url.URL
// === docker/daemon.Transport overrides ===
// A directory containing a CA certificate (ending with ".crt"),
// a client certificate (ending with ".cert") and a client certificate key
// (ending with ".key") used when talking to a Docker daemon.
DockerDaemonCertPath string
	// The hostname or IP of the Docker daemon. If not set (i.e. ""), client.DefaultDockerHost is assumed.
DockerDaemonHost string
	// Used to skip TLS verification, off by default. To take effect, DockerDaemonCertPath needs to be specified as well.
DockerDaemonInsecureSkipTLSVerify bool
// === dir.Transport overrides ===
// DirForceCompress compresses the image layers if set to true
DirForceCompress bool
// DirForceDecompress decompresses the image layers if set to true
DirForceDecompress bool
// CompressionFormat is the format to use for the compression of the blobs
CompressionFormat *compression.Algorithm
// CompressionLevel specifies what compression level is used
CompressionLevel *int
}
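A minimal sketch of a SystemContext with a few overrides set; the paths and values are hypothetical, and NewOptionalBool fills the tri-state field:

package main

import (
	"go.podman.io/image/v5/types"
)

func main() {
	// nil would be equally valid everywhere a *SystemContext is accepted.
	sys := &types.SystemContext{
		ArchitectureChoice:          "arm64",
		OSChoice:                    "linux",
		AuthFilePath:                "/run/user/1000/containers/auth.json",
		DockerInsecureSkipTLSVerify: types.NewOptionalBool(true),
	}
	_ = sys
}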
// ProgressEvent is the type of events a progress reader can produce
// Warning: new event types may be added any time.
type ProgressEvent uint
const (
// ProgressEventNewArtifact will be fired on progress reader setup
ProgressEventNewArtifact ProgressEvent = iota
// ProgressEventRead indicates that the artifact download is currently in
// progress
ProgressEventRead
// ProgressEventDone is fired when the data transfer has been finished for
// the specific artifact
ProgressEventDone
// ProgressEventSkipped is fired when the artifact has been skipped because
	// it's already available at the destination
ProgressEventSkipped
)
// ProgressProperties is used to pass information from the copy code to a monitor which
// can use the real-time information to produce output or react to changes.
type ProgressProperties struct {
	// The event indicating what happened (see ProgressEvent)
Event ProgressEvent
// The artifact which has been updated in this interval
Artifact BlobInfo
// The currently downloaded size in bytes
// Increases from 0 to the final Artifact size
Offset uint64
// The additional offset which has been downloaded inside the last update
// interval. Will be reset after each ProgressEventRead event.
OffsetUpdate uint64
}
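A minimal sketch of a consumer for these events; it assumes a channel wired up elsewhere (for example, the copy pipeline's options accept one):

package main

import (
	"fmt"

	"go.podman.io/image/v5/types"
)

// report drains a progress channel and prints one line per update.
func report(ch <-chan types.ProgressProperties) {
	for p := range ch {
		switch p.Event {
		case types.ProgressEventDone:
			fmt.Printf("%s: done (%d bytes)\n", p.Artifact.Digest, p.Offset)
		case types.ProgressEventSkipped:
			fmt.Printf("%s: already present\n", p.Artifact.Digest)
		default:
			fmt.Printf("%s: %d/%d bytes\n", p.Artifact.Digest, p.Offset, p.Artifact.Size)
		}
	}
}

func main() {}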

18
vendor/go.podman.io/image/v5/version/version.go generated vendored Normal file
View File

@@ -0,0 +1,18 @@
package version
import "fmt"
const (
	// VersionMajor is for API-incompatible changes
	VersionMajor = 5
	// VersionMinor is for adding functionality in a backwards-compatible manner
	VersionMinor = 37
	// VersionPatch is for backwards-compatible bug fixes
	VersionPatch = 0
	// VersionDev indicates a development branch; it is the empty string for releases
	VersionDev = ""
)
// Version is the specification version that the package types support.
var Version = fmt.Sprintf("%d.%d.%d%s", VersionMajor, VersionMinor, VersionPatch, VersionDev)