BuildRun - invokes the build.
You create a BuildRun to tell Shipwright to start building your application.
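A minimal BuildRun that references an existing Build could look like the following sketch (the build name `my-app-build` is illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  generateName: my-app-buildrun-
spec:
  build:
    # Assumed name of an existing Build in the same namespace
    name: my-app-build
```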
Build
The Build object provides a playbook on how to assemble your specific application. The simplest
build consists of a git source, a build strategy, and an output image:
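Such a minimal Build could look like the following sketch (the strategy name and output image are illustrative; the repository is the Shipwright sample used elsewhere in these docs):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: my-app-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
  strategy:
    name: buildpacks-v3
    kind: ClusterBuildStrategy
  output:
    # Illustrative registry location
    image: registry.example.com/my-org/my-app:latest
```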
Builds can be extended to push to private registries, use a different Dockerfile, and more.
BuildStrategy and ClusterBuildStrategy
BuildStrategy and ClusterBuildStrategy are related APIs to define how a given tool should be
used to assemble an application. They are distinguished by their scope - BuildStrategy objects
are namespace scoped, whereas ClusterBuildStrategy objects are cluster scoped.
The spec of a BuildStrategy or ClusterBuildStrategy consists of a buildSteps list, whose entries look and feel like Kubernetes container
specifications. Below is an example spec for Kaniko, which can build an image from a
Dockerfile within a container:
```yaml
# this is a fragment of a manifest
spec:
  buildSteps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v1.3.0
      workingDir: /workspace/source
      securityContext:
        runAsUser: 0
        capabilities:
          add:
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - SETGID
            - SETUID
            - SETFCAP
      env:
        - name: DOCKER_CONFIG
          value: /tekton/home/.docker
      command:
        - /kaniko/executor
      args:
        - --skip-tls-verify=true
        - --dockerfile=$(build.dockerfile)
        - --context=/workspace/source/$(build.source.contextDir)
        - --destination=$(build.output.image)
        - --oci-layout-path=/workspace/output/image
        - --snapshotMode=redo
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 250m
          memory: 65Mi
```
BuildRun
Each BuildRun object invokes a build on your cluster. You can think of these as Kubernetes
Jobs or Tekton TaskRuns: they represent a workload on your cluster, ultimately resulting in a
running Pod. See BuildRun for more details.
The operator will deploy Shipwright Builds in the provided targetNamespace.
When .spec.targetNamespace is not set, the namespace will default to shipwright-build.
Refer to the ShipwrightBuild documentation for more information about this custom resource.
Installing Shipwright Builds Directly
We also publish a Kubernetes manifest that installs Shipwright directly into the shipwright-build namespace.
Applying this manifest requires cluster administrator permissions:
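For example, using kubectl (replace the `<version>` placeholder with an actual Shipwright release; check the releases page for the current one):

```shell
kubectl apply --filename https://github.com/shipwright-io/build/releases/download/<version>/release.yaml
```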
The Shipwright community maintains a curated set of build strategies for popular build tools.
These can be optionally installed after Shipwright Builds has been deployed:
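For example (again, `<version>` is a placeholder for an actual release):

```shell
kubectl apply --filename https://github.com/shipwright-io/build/releases/download/<version>/sample-strategies.yaml
```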
There are two types of strategies, the ClusterBuildStrategy (clusterbuildstrategies.shipwright.io/v1beta1) and the BuildStrategy (buildstrategies.shipwright.io/v1beta1). Both strategy types define a shared group of steps needed to fulfill the application build.
A ClusterBuildStrategy is available cluster-wide, while a BuildStrategy is available within a namespace.
Available ClusterBuildStrategies
Well-known strategies can be bootstrapped from here. The currently supported ClusterBuildStrategies are:
The buildah ClusterBuildStrategy uses buildah to build and push a container image from a Dockerfile. The Dockerfile should be specified on the Build resource.
The strategy builds the image for the platforms listed in the architectures parameter of the Build object
that references it.
The strategy requires the cluster to have the necessary infrastructure to run the builds: worker nodes
for each architecture listed in the architectures parameter.
The ClusterBuildStrategy runs a main orchestrator pod.
The orchestrator pod will create one auxiliary job for each architecture requested by the Build.
The auxiliary jobs are responsible for building the container image and
for coordinating with the orchestrator pod.
When all the builds are completed, the orchestrator pod will compose a manifest-list image and push it to the target registry.
The service account that runs the strategy must be bound to a role that can create, list, get, and watch batch/v1 Job and core/v1 Pod resources.
The role also needs to allow the create verb for the pods/exec resource.
Finally, when running in OKD or OpenShift clusters, the service account must be able to use the
privileged SecurityContextConstraint.
For each namespace where you want to use the strategy, you also need to apply the RBAC rules that allow the service
account to run the strategy. If the service account is named pipeline (default), you can use:
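Based on the permissions listed above, such RBAC rules could be sketched as follows (the Role and RoleBinding names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: multiarch-native-buildah-runner
rules:
  # Manage and observe the auxiliary jobs and their pods
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "list", "get", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "list", "get", "watch"]
  # Required for the orchestrator to exec into the auxiliary pods
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: multiarch-native-buildah-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: multiarch-native-buildah-runner
subjects:
  - kind: ServiceAccount
    name: pipeline
```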
The build strategy provides the following parameters that you can set in a Build or BuildRun to control its behavior:
| Parameter | Description | Default |
| --- | --- | --- |
| `architectures` | The list of architectures to build the image for | `[ "amd64" ]` |
| `build-args` | The values for the args in the Dockerfile. Values must be in the format `KEY=VALUE`. | empty array |
| `dockerfile` | The path to the Dockerfile to be used for building the image. | `Dockerfile` |
| `from` | Image name used to replace the value in the first `FROM` instruction in the Dockerfile. | empty string |
| `runtime-stage-from` | Image name used to replace the value in the last `FROM` instruction in the Dockerfile. | empty string |
| `build-contexts` | Specify an additional build context using its short name and its location. Additional build contexts can be referenced in the same manner as we access different stages in a `COPY` instruction. Use values in the form `name=value`. See `man buildah-build`. | empty array |
| `registries-block` | A list of registries to block. Images from these registries will not be pulled during the build. | empty array |
| `registries-insecure` | A list of registries that are insecure. Images from these registries will be pulled without verifying the TLS certificate. | empty array |
| `registries-search` | A list of registries to search for short-name images. Images missing the fully-qualified name of a registry will be looked up in these registries. | empty array |
| `request-cpu` | The CPU request to set for the auxiliary jobs. | `250m` |
| `request-memory` | The memory request to set for the auxiliary jobs. | `64Mi` |
| `limit-cpu` | The CPU limit to set for the auxiliary jobs. | no limit |
| `limit-memory` | The memory limit to set for the auxiliary jobs. | `2Gi` |
Volumes
| Volume | Description |
| --- | --- |
| `oci-archive-storage` | Volume to contain the temporary single-arch image manifests in OCI format. It can be set to a persistent volume, e.g., for large images. The default is an `emptyDir` volume, which means that the cached data is discarded at the end of a BuildRun and will make use of ephemeral storage (according to the cluster infrastructure setup). |
Example build
```yaml
---
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: multiarch-native-buildah-ex
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  strategy:
    name: multiarch-native-buildah
    kind: ClusterBuildStrategy
  paramValues:
    - name: architectures
      values:
        # This will require a cluster with both arm64 and amd64 nodes
        - value: "amd64"
        - value: "arm64"
    - name: build-contexts
      values:
        - value: "ghcr.io/shipwright-io/shipwright-samples/golang:1.18=docker://ghcr.io/shipwright-io/shipwright-samples/golang:1.18"
    # The buildah `--from` replaces the first FROM statement
    - name: from
      value: "" # Using the build-contexts for this example
    # The runtime-stage-from implements the logic to replace the last stage FROM image of a Dockerfile
    - name: runtime-stage-from
      value: docker://gcr.io/distroless/static:nonroot
    - name: dockerfile
      value: Dockerfile
  output:
    image: image-registry.openshift-image-registry.svc:5000/build-examples/taxi-app
```
Buildpacks v3
The buildpacks-v3 BuildStrategy/ClusterBuildStrategy uses a Cloud Native Builder (CNB) container image, and is able to implement lifecycle commands.
Installing Buildpacks v3 Strategy
You can install the BuildStrategy in your namespace or install the ClusterBuildStrategy at cluster scope so that it can be shared across namespaces.
To install the cluster scope strategy, you can choose between the Paketo and Heroku buildpacks family:
The kaniko ClusterBuildStrategy uses Kaniko's executor to build a container image from a Dockerfile and a context directory.
BuildKit is composed of the buildctl client and the buildkitd daemon. The buildkit ClusterBuildStrategy runs in daemonless mode, where both the client and an ephemeral daemon run in a single container. In addition, it runs without privileges (rootless).
Cache Exporters
By default, the buildkit ClusterBuildStrategy will use caching to optimize the build times. When pushing an image to a registry, it will use the inline export cache, which appends cache information to the image that is built. Please refer to export-cache docs for more information. Caching can be disabled by setting the cache parameter to "disabled". See Defining ParamValues for more information.
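For example, a Build or BuildRun can disable caching by setting the parameter (a minimal fragment):

```yaml
spec:
  paramValues:
    - name: cache
      value: disabled
```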
The sample build strategy contains a platforms array parameter that you can set to leverage BuildKit's support for building multi-platform images. If you do not set this value, the image is built for the platform that is supported by the FROM image. If that image supports multiple platforms, then the image will be built for the platform of your Kubernetes node.
Known Limitations
The buildkit ClusterBuildStrategy currently locks the following parameters:
To allow running rootless, it requires both AppArmor and seccomp to be disabled using the unconfined profile.
Usage in Clusters with Pod Security Standards
The BuildKit strategy contains fields with regard to security settings. It therefore depends on the respective cluster setup and administrative configuration. These settings are:
Defining the unconfined profile for both AppArmor and seccomp as required by the underlying rootlesskit.
The allowPrivilegeEscalation setting is set to true to be able to use binaries that have the setuid bit set in order to run with "root" level privileges. In the case of BuildKit, this is required by rootlesskit in order to set the user namespace mapping file /proc/<pid>/uid_map.
Use of non-root user with UID 1000/GID 1000 as the runAsUser.
These settings have no effect if Pod Security Standards are not used.
Please note: At this point in time, there is no way to run rootlesskit to start the BuildKit daemon without the allowPrivilegeEscalation flag set to true. Clusters with the Restricted security standard in place will not be able to use this build strategy.
| Parameter | Description | Default |
| --- | --- | --- |
| `ko-version` | Version of ko, must be either `latest` for the newest release, or a ko release name | `latest` |
| `package-directory` | The directory inside the context directory containing the main package. | `.` |
| `target-platform` | Target platform to be built. For example: `linux/arm64`. Multiple platforms can be provided separated by comma, for example: `linux/arm64,linux/amd64`. The value `all` will build all platforms supported by the base image. The value `current` will build the platform on which the build runs. | `current` |
Volumes
| Volume | Description |
| --- | --- |
| `gocache` | Volume to contain the `GOCACHE`. Can be set to a persistent volume to optimize compilation performance for rebuilds. The default is an `emptyDir` volume, which means that the cached data is discarded at the end of a BuildRun. |
Source to Image
This BuildStrategy combines source-to-image and kaniko in order to generate a Dockerfile and prepare the application to be built with a builder image.
s2i requires a specially crafted builder image, which can be specified via the builderImage parameter on the Build resource. The strategy uses:
- s2i to generate a Dockerfile and prepare the source code for the image build;
- kaniko to create and push the container image to what is defined as output.image.
Strategy parameters
Strategy parameters allow you to parameterize a strategy definition so that users can control the parameter values via the Build or BuildRun resources.
Strategy authors defining parameters need to understand the following:
Definition: A list of parameters should be defined under spec.parameters. Each list item should consist of a name, a description, a type (either "array" or "string") and optionally a default value (for type=string), or defaults values (for type=array). If no default(s) are provided, then the user must define a value in the Build or BuildRun.
Usage: In order to use a parameter in the strategy steps, use the following syntax for type=string: $(params.your-parameter-name). String parameters can be used in all places in the buildSteps. Some example scenarios are:
image: to use a custom tag, for example golang:$(params.go-version) as it is done in the ko sample build strategy
args: to pass data into your builder command
env: to force a user to provide a value for an environment variable.
Arrays are referenced using $(params.your-array-parameter-name[*]), and can only be used as the value for args or command because those are defined as arrays by Kubernetes. For every item in the array, an arg will be set. For example, if you specify this in your build strategy step:
```yaml
spec:
  parameters:
    - name: tool-args
      description: Parameters for the tool
      type: array
  steps:
    - name: a-step
      command:
        - some-tool
      args:
        - $(params.tool-args[*])
```
If the build user sets the value of tool-args to ["--some-arg", "some-value"], then the Pod will contain these args:
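Given the step definition above, each array item becomes its own argument, so the resulting container spec contains:

```yaml
command:
  - some-tool
args:
  - --some-arg
  - some-value
```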
Parameterize: Any Build or BuildRun referencing your strategy, can set a value for your-parameter-name parameter if needed.
Note: Users can provide parameter values as simple strings or as references to keys in ConfigMaps and Secrets. If they use a ConfigMap or Secret, then the value can only be used if the parameter is used in the command, args, or env section of the buildSteps. For example, the above-mentioned scenario to set a step’s image to golang:$(params.go-version) does not allow the usage of ConfigMaps or Secrets.
```yaml
---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: buildkit
  ...
spec:
  parameters:
    - name: build-args
      description: "The values for the ARGs in the Dockerfile. Values must be in the format KEY=VALUE."
      type: array
      defaults: []
    - name: cache
      description: "Configure BuildKit's cache usage. Allowed values are 'disabled' and 'registry'. The default is 'registry'."
      type: string
      default: registry
    - name: insecure-registry
      type: string
      description: "enables the push to an insecure registry"
      default: "false"
    - name: secrets
      description: "The secrets to pass to the build. Values must be in the format ID=FILE_CONTENT."
      type: array
      defaults: []
    - name: dockerfile
      description: The path to the Dockerfile to be used for building the image.
      type: string
      default: "Dockerfile"
  steps:
    ...
    - name: build-and-push
      image: moby/buildkit:v0.17.0-rootless
      imagePullPolicy: Always
      workingDir: $(params.shp-source-root)
      ...
      command:
        - /bin/ash
      args:
        - -c
        - |
          set -euo pipefail

          # Prepare the file arguments
          DOCKERFILE_PATH='$(params.shp-source-context)/$(params.dockerfile)'
          DOCKERFILE_DIR="$(dirname "${DOCKERFILE_PATH}")"
          DOCKERFILE_NAME="$(basename "${DOCKERFILE_PATH}")"

          # We only have ash here and therefore no bash arrays to help add dynamic arguments (the build-args) to the build command.
          echo "#!/bin/ash" > /tmp/run.sh
          echo "set -euo pipefail" >> /tmp/run.sh
          echo "buildctl-daemonless.sh \\" >> /tmp/run.sh
          echo "build \\" >> /tmp/run.sh
          echo "--progress=plain \\" >> /tmp/run.sh
          echo "--frontend=dockerfile.v0 \\" >> /tmp/run.sh
          echo "--opt=filename=\"${DOCKERFILE_NAME}\" \\" >> /tmp/run.sh
          echo "--local=context='$(params.shp-source-context)' \\" >> /tmp/run.sh
          echo "--local=dockerfile=\"${DOCKERFILE_DIR}\" \\" >> /tmp/run.sh
          echo "--output=type=image,name='$(params.shp-output-image)',push=true,registry.insecure=$(params.insecure-registry) \\" >> /tmp/run.sh
          if [ "$(params.cache)" == "registry" ]; then
            echo "--export-cache=type=inline \\" >> /tmp/run.sh
            echo "--import-cache=type=registry,ref='$(params.shp-output-image)' \\" >> /tmp/run.sh
          elif [ "$(params.cache)" == "disabled" ]; then
            echo "--no-cache \\" >> /tmp/run.sh
          else
            echo -e "An invalid value for the parameter 'cache' has been provided: '$(params.cache)'. Allowed values are 'disabled' and 'registry'."
            echo -n "InvalidParameterValue" > '$(results.shp-error-reason.path)'
            echo -n "An invalid value for the parameter 'cache' has been provided: '$(params.cache)'. Allowed values are 'disabled' and 'registry'." > '$(results.shp-error-message.path)'
            exit 1
          fi

          stage=""
          for a in "$@"
          do
            if [ "${a}" == "--build-args" ]; then
              stage=build-args
            elif [ "${a}" == "--secrets" ]; then
              stage=secrets
            elif [ "${stage}" == "build-args" ]; then
              echo "--opt=\"build-arg:${a}\" \\" >> /tmp/run.sh
            elif [ "${stage}" == "secrets" ]; then
              # Split ID=FILE_CONTENT into variables id and data
              # using head because the data could be multiline
              id="$(echo "${a}" | head -1 | sed 's/=.*//')"

              # This is hacky, we remove the suffix ${id}= from all lines of the data.
              # If the data would be multiple lines and a line would start with ${id}=
              # then we would remove it. We could force users to give us the secret
              # base64 encoded. But ultimately, the best solution might be if the user
              # mounts the secret and just gives us the path here.
              data="$(echo "${a}" | sed "s/^${id}=//")"

              # Write the secret data into a temporary file, once we have volume support
              # in the build strategy, we should use a memory based emptyDir for this.
              echo -n "${data}" > "/tmp/secret_${id}"

              # Add the secret argument
              echo "--secret id=${id},src="/tmp/secret_${id}" \\" >> /tmp/run.sh
            fi
          done

          echo "--metadata-file /tmp/image-metadata.json" >> /tmp/run.sh

          chmod +x /tmp/run.sh
          /tmp/run.sh

          # Store the image digest
          sed -E 's/.*containerimage.digest":"([^"]*).*/\1/' < /tmp/image-metadata.json > '$(results.shp-image-digest.path)'
        # That's the separator between the shell script and its args
        - --
        - --build-args
        - $(params.build-args[*])
        - --secrets
        - $(params.secrets[*])
```
See more information on how to use these parameters in a Build or BuildRun in the related documentation.
System parameters
In contrast to the strategy's spec.parameters, system parameters and their values are defined at runtime. You can use them when defining the steps of a build strategy to access system information as well as information provided by the user in their Build or BuildRun. The following parameters are available:
| Parameter | Description |
| --- | --- |
| `$(params.shp-source-root)` | The absolute path to the directory that contains the user's sources. |
| `$(params.shp-source-context)` | The absolute path to the context directory of the user's sources. If the user specified no value for `spec.source.contextDir` in their Build, then this value will equal the value for `$(params.shp-source-root)`. Note that this directory is not guaranteed to exist at the time the container for your step is started; you therefore cannot use this parameter as a step's working directory. |
| `$(params.shp-output-directory)` | The absolute path to a directory that the build strategy should store the image in. You can store a single tarball containing a single image, or an OCI image layout. |
| `$(params.shp-output-image)` | The URL of the image that the user wants to push, as specified in the Build's `spec.output.image` or as an override from the BuildRun's `spec.output.image`. |
| `$(params.shp-output-insecure)` | A flag that indicates that the output image's registry location is insecure because it uses a certificate not signed by a certificate authority, or uses HTTP. |
Output directory vs. output image
As a build strategy author, you decide whether your build strategy or Shipwright pushes the build image to the container registry:
If you DO NOT use $(params.shp-output-directory), then Shipwright assumes that your build strategy PUSHES the image. We call this a strategy-managed push.
If you DO use $(params.shp-output-directory), then Shipwright assumes that your build strategy does NOT PUSH the image. We call this a shipwright-managed push.
When you use the $(params.shp-output-directory) parameter, then Shipwright will also set the image-related system results.
If you are uncertain about how to implement your build strategy, then follow this guidance:
If your build strategy tool cannot locally store an image but always pushes it, then you must do the push operation. An example is the Buildpacks strategy. You SHOULD respect the $(params.shp-output-insecure) parameter.
If your build strategy tool can locally store an image, then the choice depends on how you expect your build users to make use of your strategy, and the nature of your strategy.
Some build strategies do not produce all layers of an image, but use a common base image and put one or more layers on top with the application. An example is ko. Such base image layers are often already present in the destination registry (like in rebuilds). If the strategy can perform the push operation, then it can optimize the process and can omit the download of the base image when it is not required to push it. In the case of a shipwright-managed push, the complete image must be locally stored in $(params.shp-output-directory), which implies that a base image must always be downloaded.
Some build strategy tools do not make it easy to determine the digest or size of the image, which can make it complex for you to set the strategy results. In the case of a shipwright-managed push, Shipwright has the responsibility to set them.
Build users can configure the build to amend additional annotations, or labels to the final image. In the case of a shipwright-managed push, these can be set directly and the image will only be pushed once. In a strategy-managed push scenario, your build strategy will push the first version of the image without those annotations and labels. Shipwright will then mutate the image and push it again with the updated annotations and labels. Such a duplicate push can cause unexpected behavior with registries that trigger other actions when an image gets pushed, or that do not allow overwriting a tag.
The Shipwright maintainers plan to provide more capabilities in the future that need the image locally, such as vulnerability scanning, or software bill of material (SBOM) creation. These capabilities may be only fully supported with shipwright-managed push.
System parameters vs Strategy Parameters Comparison
| Parameter Type | User Configurable | Definition |
| --- | --- | --- |
| System Parameter | No | At run-time, by the BuildRun controller. |
| Strategy Parameter | Yes | At build-time, during the BuildStrategy creation. |
Securely referencing string parameters
In build strategy steps, string parameters are referenced using $(params.PARAM_NAME). This applies to system parameters, and those parameters defined in the build strategy. You can reference those parameters at many locations in the build steps, such as environment variables values, arguments, image, and more. In the Pod, all $(params.PARAM_NAME) tokens will be replaced by simple string replaces. This is safe in most locations but requires your attention when you define an inline script using an argument. For example:
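A sketch of such a vulnerable step (the names sample-step, some-tool, and sample-parameter are illustrative, not part of any shipped strategy):

```yaml
spec:
  steps:
    - name: sample-step
      image: docker.io/library/bash:5
      command:
        - /bin/bash
      args:
        - -c
        # The parameter is substituted directly into the script source
        - |
          some-tool --sample-argument "$(params.sample-parameter)"
```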
This opens the door to script injection. For example, if the user sets the sample-parameter to `argument-value" && malicious-command && echo "`, the resulting Pod argument will look like this:
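Continuing the example, after plain string replacement the inline script becomes:

```
some-tool --sample-argument "argument-value" && malicious-command && echo ""
```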
To securely pass a parameter value into a script-style argument, you can choose between these two approaches:
Using environment variables. This is used in some of our sample strategies, for example ko, or buildpacks. Basically, instead of using the parameter directly inside the script, you pass it via an environment variable. With proper quoting, the shell ensures that no command injection is possible:
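A sketch of this approach (step and variable names are illustrative):

```yaml
spec:
  steps:
    - name: sample-step
      image: docker.io/library/bash:5
      env:
        # The parameter value is injected as an environment variable,
        # not substituted into the script source
        - name: PARAM_SAMPLE_PARAMETER
          value: $(params.sample-parameter)
      command:
        - /bin/bash
      args:
        - -c
        # The quoted shell expansion keeps the value a single literal argument
        - |
          some-tool --sample-argument "${PARAM_SAMPLE_PARAMETER}"
```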
Using arguments. This is used in some of our sample build strategies, for example buildah. Here, you use arguments to your own inline script. Appropriate shell quoting guards against command injection.
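A sketch of this approach (step names are illustrative; the `--` separator between the inline script and its arguments follows the same pattern as the buildah and buildkit sample strategies):

```yaml
spec:
  steps:
    - name: sample-step
      image: docker.io/library/bash:5
      command:
        - /bin/bash
      args:
        - -c
        # $1 is the first positional argument after the script;
        # quoted expansion prevents command injection
        - |
          SAMPLE_PARAMETER="$1"
          some-tool --sample-argument "${SAMPLE_PARAMETER}"
        - --
        - $(params.sample-parameter)
```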
If you are using a strategy-managed push (see output directory vs. output image), you can optionally store the size and digest of the image your build strategy created in a set of files.
| Result file | Description |
| --- | --- |
| `$(results.shp-image-digest.path)` | File to store the digest of the image. |
| `$(results.shp-image-size.path)` | File to store the compressed size of the image. |
You can look at sample build strategies, such as Buildpacks, to see how they fill some or all of the results files.
This information will be available in the .status.output section of the BuildRun.
Additionally, you can store error details for debugging purposes when a BuildRun fails using your strategy.
| Result file | Description |
| --- | --- |
| `$(results.shp-error-reason.path)` | File to store the error reason. |
| `$(results.shp-error-message.path)` | File to store the error message. |
Reason is intended to be a one-word CamelCase classification of the error source, with the first letter capitalized.
Error details are only propagated if the build container terminates with a non-zero exit code.
This information will be available in the .status.failureDetails section of the BuildRun.
```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
# [...]
status:
  # [...]
  failureDetails:
    location:
      container: step-source-default
      pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr
    message: The source repository does not exist, or you have insufficient permission to access it.
    reason: GitRemotePrivate
```
Security Contexts
In a build strategy, it is recommended that you define a securityContext with a runAsUser and runAsGroup:
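For example (UID/GID 1000 is a common non-root choice; pick values that match your strategy's container images):

```yaml
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
```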
This runAs configuration will be used for all shipwright-managed steps such as the step that retrieves the source code, and for the steps you define in the build strategy. This configuration ensures that all steps share the same runAs configuration which eliminates file permission problems.
Without a securityContext for the build strategy, shipwright-managed steps will run with the runAsUser and runAsGroup defined in the configuration's container templates, which is potentially a different user than the one used in your build strategy. This can result in issues when, for example, source code is downloaded as user A (as defined by the Git container template), but your strategy accesses it as user B.
In build strategy steps you can define a step-specific securityContext that matches Kubernetes’ security context where you can configure other security aspects such as capabilities or privileged containers.
Steps Resource Definition
All strategy steps can include a definition of resources (limits and requests) for CPU, memory, and disk. For strategies with more than one step, each step (container) could require more resources than others. Strategy admins are free to define the values that they consider the best fit for each step. Also, identical strategies with the same steps that differ only in their name and step resources can be installed on the cluster to allow users to create builds with smaller or larger resource requirements.
Strategies with different resources
If strategy admins need multiple flavors of the same strategy, where one flavor has more resources than the other, then multiple strategies of the same type should be defined on the cluster. In the following example, we use Kaniko as the type:
```yaml
---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: kaniko-small
spec:
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v1.23.2
      workingDir: $(params.shp-source-root)
      securityContext:
        runAsUser: 0
        capabilities:
          add:
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - SETGID
            - SETUID
            - SETFCAP
            - KILL
      env:
        - name: DOCKER_CONFIG
          value: /tekton/home/.docker
        - name: AWS_ACCESS_KEY_ID
          value: NOT_SET
        - name: AWS_SECRET_KEY
          value: NOT_SET
      command:
        - /kaniko/executor
      args:
        - --skip-tls-verify=true
        - --dockerfile=$(params.dockerfile)
        - --context=$(params.shp-source-context)
        - --destination=$(params.shp-output-image)
        - --snapshot-mode=redo
        - --push-retry=3
      resources:
        limits:
          cpu: 250m
          memory: 65Mi
        requests:
          cpu: 250m
          memory: 65Mi
  parameters:
    - name: dockerfile
      description: The path to the Dockerfile to be used for building the image.
      type: string
      default: "Dockerfile"
---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: kaniko-medium
spec:
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v1.23.2
      workingDir: $(params.shp-source-root)
      securityContext:
        runAsUser: 0
        capabilities:
          add:
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - SETGID
            - SETUID
            - SETFCAP
            - KILL
      env:
        - name: DOCKER_CONFIG
          value: /tekton/home/.docker
        - name: AWS_ACCESS_KEY_ID
          value: NOT_SET
        - name: AWS_SECRET_KEY
          value: NOT_SET
      command:
        - /kaniko/executor
      args:
        - --skip-tls-verify=true
        - --dockerfile=$(params.dockerfile)
        - --context=$(params.shp-source-context)
        - --destination=$(params.shp-output-image)
        - --snapshot-mode=redo
        - --push-retry=3
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
  parameters:
    - name: dockerfile
      description: The path to the Dockerfile to be used for building the image.
      type: string
      default: "Dockerfile"
```
The above provides more control and flexibility for strategy admins. End users only need to reference the proper strategy. For example:
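A sketch of a Build referencing the larger flavor (source and output values are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: kaniko-medium-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  strategy:
    # Pick the flavor that matches the build's resource needs
    name: kaniko-medium
    kind: ClusterBuildStrategy
  output:
    image: registry.example.com/my-org/my-app:latest
```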
The Build controller relies on the Tekton pipeline controller to schedule the pods that execute the above strategy steps. In a nutshell, the Build controller creates a Tekton TaskRun at runtime, and the TaskRun generates a new pod in the particular namespace. In order to build an image, the pod executes all the strategy steps one by one.
Tekton manages each step's resource requests in a very particular way; see the docs, which mention the following:
The CPU, memory, and ephemeral storage resource requests will be set to zero, or, if specified, the minimums set through LimitRanges in that Namespace, if the container image does not have the largest resource request out of all container images in the Task. This ensures that the Pod that executes the Task only requests enough resources to run a single container image in the Task rather than hoard resources for all container images in the Task at once.
Examples of Tekton resources management
For a more concrete example, let's take a look at the following scenarios:
Scenario 1. Namespace without LimitRange, both steps with the same resource values.
The pod definition is different: Tekton will only use the highest values for one container, and set the rest (the lowest) to zero:
```
$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "250m",
    "ephemeral-storage": "0",
    "memory": "65Mi"
  }
}

$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "0",                 <------------------- See how the request is set to ZERO.
    "ephemeral-storage": "0",   <------------------- See how the request is set to ZERO.
    "memory": "0"               <------------------- See how the request is set to ZERO.
  }
}
```
In this scenario, only one container can have the spec.resources.requests definition. Even when both steps have the same values, only one container will get them; the others will be set to zero.
Scenario 2. Namespace without LimitRange, steps with different resources:
We will use a modified buildah strategy, with the following steps resources:
```yaml
- name: buildah-bud
  image: quay.io/containers/buildah:v1.37.5
  workingDir: $(params.shp-source-root)
  securityContext:
    privileged: true
  command:
    - /usr/bin/buildah
  args:
    - bud
    - --tag=$(params.shp-output-image)
    - --file=$(params.dockerfile)
    - $(build.source.contextDir)
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 250m
      memory: 65Mi
  volumeMounts:
    - name: buildah-images
      mountPath: /var/lib/containers/storage
- name: buildah-push
  image: quay.io/containers/buildah:v1.37.5
  securityContext:
    privileged: true
  command:
    - /usr/bin/buildah
  args:
    - push
    - --tls-verify=false
    - docker://$(params.shp-output-image)
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 250m
      memory: 100Mi # <------ See how we provide more memory to step-buildah-push, compared to the 65Mi of the other step
```
For the TaskRun, as expected we can see the resources on each step.
The pod definition is different: Tekton will only use the highest values for one container, and set the rest (the lowest) to zero:
```
$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "250m",              <------------------- See how the CPU is preserved
    "ephemeral-storage": "0",
    "memory": "0"               <------------------- See how the memory is set to ZERO
  }
}

$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "0",                 <------------------- See how the CPU is set to zero.
    "ephemeral-storage": "0",
    "memory": "100Mi"           <------------------- See how the memory is preserved on this container
  }
}
```
In the above scenario, we can see how the highest resource request values are distributed among the containers. The container step-buildah-push gets the 100Mi memory request because it defined the highest value, while the container step-buildah-bud is assigned 0 for its memory request.
Scenario 3. Namespace with a LimitRange.
When a LimitRange exists in the namespace, the Tekton Pipelines controller takes the same approach as in the two scenarios above. The difference is that the containers that would otherwise get zero instead get the minimum values of the LimitRange.
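For illustration, a minimal sketch of a LimitRange that would supply such container minimums (the namespace, name, and values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range-sample
  namespace: test-build
spec:
  limits:
  - type: Container
    min:
      cpu: 100m     # containers that would otherwise get a 0 request receive this minimum
      memory: 128Mi
```

With this in place, the zeroed requests from the earlier scenarios would instead be set to 100m CPU and 128Mi memory.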
Annotations
Annotations can be defined for a BuildStrategy/ClusterBuildStrategy as for any other Kubernetes object. Annotations are propagated to the TaskRun, and from there Tekton propagates them to the Pod. Example use cases include:
The Kubernetes Network Traffic Shaping feature looks for the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to limit the network bandwidth the Pod is allowed to use.
The AppArmor profile of a container is defined using the container.apparmor.security.beta.kubernetes.io/<container_name> annotation.
The following annotations are not propagated:
kubectl.kubernetes.io/last-applied-configuration
clusterbuildstrategy.shipwright.io/*
buildstrategy.shipwright.io/*
build.shipwright.io/*
buildrun.shipwright.io/*
A Kubernetes administrator can further restrict the usage of annotations by using policy engines like Open Policy Agent.
Volumes and VolumeMounts
Build Strategies can declare volumes. These volumes can be referred to by the build steps using volumeMount.
Volumes in Build Strategy follow the declaration of Pod Volumes, so
all the usual volumeSource types are supported.
Volumes can be overridden by Builds and BuildRuns, so a Build Strategy volume supports an overridable flag, a boolean that is false by default. If a volume is not overridable, a Build or BuildRun that tries to override it will fail.
Build steps can declare a volumeMount, which allows them to access volumes defined by BuildStrategy, Build or BuildRun.
Here is an example of a BuildStrategy object that defines volumes and volumeMounts:
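A minimal sketch of such a strategy, assuming a Buildah-style step (the strategy name, volume name, and step are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildStrategy
metadata:
  name: sample-strategy
spec:
  volumes:
  - name: varlibcontainers
    overridable: true          # Builds and BuildRuns may replace this volume
    emptyDir: {}
  steps:
  - name: build
    image: quay.io/containers/buildah:v1.37.5
    command:
    - /usr/bin/buildah
    volumeMounts:
    - name: varlibcontainers   # refers to the volume declared above
      mountPath: /var/lib/containers
```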
Validates if the specified paramValues exist on the referenced strategy parameters. It also validates if the paramValues names collide with the Shipwright reserved names.
Validates if the container registry output secret exists.
Validates if the referenced spec.source.git.url endpoint exists.
Build Validations
Note: reported validations in build status are deprecated, and will be removed in a future release.
To prevent users from triggering BuildRuns (execution of a Build) that will eventually fail because of wrong or missing dependencies or configuration settings, the Build controller will validate them in advance. If all validations are successful, users can expect a Succeeded status.reason. However, if any validations fail, users can rely on the status.reason and status.message fields to understand the root cause.
| Status.Reason | Description |
| --- | --- |
| BuildStrategyNotFound | The referenced namespace-scoped strategy doesn't exist. |
| ClusterBuildStrategyNotFound | The referenced cluster-scoped strategy doesn't exist. |
| SetOwnerReferenceFailed | Setting owner references between a Build and a BuildRun failed. This status is triggered when you set spec.retention.atBuildDeletion to true in a Build. |
| SpecSourceSecretRefNotFound | The secret used to authenticate to Git doesn't exist. |
| SpecOutputSecretRefNotFound | The secret used to authenticate to the container registry doesn't exist. |
| SpecBuilderSecretRefNotFound | The secret used to authenticate to the container registry doesn't exist. |
| MultipleSecretRefNotFound | More than one secret is missing. At the moment, only three paths on a Build can specify a secret. |
| RestrictedParametersInUse | One or more defined paramValues collide with Shipwright reserved parameters. See Defining Params for more information. |
| UndefinedParameter | One or more defined paramValues are not defined in the referenced strategy. Please ensure that the strategy defines them under its spec.parameters list. |
| RemoteRepositoryUnreachable | The defined spec.source.git.url was not found. This validation only takes place for HTTP/HTTPS protocols. |
| BuildNameInvalid | The defined Build name (metadata.name) is invalid. The Build name should be a valid label value. |
| SpecEnvNameCanNotBeBlank | The name of a user-provided environment variable is blank. |
| SpecEnvValueCanNotBeBlank | The value of a user-provided environment variable is blank. |
| SpecEnvOnlyOneOfValueOrValueFromMustBeSpecified | Both value and valueFrom were specified, which are mutually exclusive. |
| RuntimePathsCanNotBeEmpty | The spec.runtime feature is used but the paths were not specified. |
| WrongParameterValueType | A single value was provided for an array parameter, or vice versa. |
| InconsistentParameterValues | Parameter values have more than one of configMapValue, secretValue, or value set. |
| EmptyArrayItemParameterValues | Array parameters contain an item where none of configMapValue, secretValue, or value is set. |
| IncompleteConfigMapValueParameterValues | A configMapValue is specified where the name or the key is empty. |
| IncompleteSecretValueParameterValues | A secretValue is specified where the name or the key is empty. |
| VolumeDoesNotExist | A volume referenced by the Build does not exist, therefore the Build cannot run. |
| VolumeNotOverridable | A volume defined by the Build is not set as overridable in the strategy. |
| UndefinedVolume | A volume defined by the Build is not found in the strategy. |
| TriggerNameCanNotBeBlank | A trigger condition does not have a name. |
| TriggerInvalidType | The trigger type is invalid. |
| TriggerInvalidGitHubWebHook | The GitHub trigger is invalid. |
| TriggerInvalidImage | The Image trigger is invalid. |
| TriggerInvalidPipeline | The Pipeline trigger is invalid. |
| OutputTimestampNotSupported | An unsupported output timestamp setting was used. |
| OutputTimestampNotValid | The output timestamp value is not valid. |
Configuring a Build
The Build definition supports the following fields:
Required:
apiVersion - Specifies the API version, for example shipwright.io/v1beta1.
kind - Specifies the Kind type, for example Build.
metadata - Metadata that identifies the custom resource instance, especially the name of the Build and the namespace you place it in. Note: You should use your own namespace, and not put your builds into the shipwright-build namespace where Shipwright's system components run.
spec.source - Refers to the location of the source code, for example a Git repository or OCI artifact image.
spec.strategy - Refers to the BuildStrategy to be used, see the examples
spec.output - Refers to the location where the generated image will be pushed.
spec.output.pushSecret - References an existing secret to get access to the container registry.
Optional:
spec.paramValues - Refers to a name-value(s) list to specify values for parameters defined in the BuildStrategy.
spec.timeout - Defines a custom timeout. The value needs to be parsable by ParseDuration, for example, 5m. The default is ten minutes. You can overwrite the value in the BuildRun.
spec.output.annotations - Refers to a list of key/value pairs that can be used to annotate the output image.
spec.output.labels - Refers to a list of key/value pairs that can be used to label the output image.
spec.output.timestamp - Instruct the build to change the output image creation timestamp to the specified value. When omitted, the respective build strategy tool defines the output image timestamp.
Use string Zero to set the image timestamp to UNIX epoch timestamp zero.
Use string SourceTimestamp to set the image timestamp to the source timestamp, i.e. the timestamp of the Git commit that was used.
Use string BuildTimestamp to set the image timestamp to the timestamp of the build run.
Use any valid UNIX epoch seconds number as a string to set this as the image timestamp.
spec.output.vulnerabilityScan to enable a security vulnerability scan for your generated image. Further options in vulnerability scanning are defined here
spec.env - Specifies additional environment variables that should be passed to the build container. The available variables depend on the tool that is being used by the chosen build strategy.
spec.retention.atBuildDeletion - Defines if all related BuildRuns need to be deleted when deleting the Build. The default is false.
spec.retention.ttlAfterFailed - Specifies the duration for which a failed buildrun can exist.
spec.retention.ttlAfterSucceeded - Specifies the duration for which a successful buildrun can exist.
spec.retention.failedLimit - Specifies the number of failed BuildRuns that can exist.
spec.retention.succeededLimit - Specifies the number of successful BuildRuns that can exist.
spec.nodeSelector - Specifies a selector which must match a node’s labels for the build pod to be scheduled on that node.
Defining the Source
A Build resource can specify a source type, such as a Git repository or an OCI artifact, together with other parameters like:
source.type - Specify the type of the data-source. Currently, the supported types are “Git”, “OCIArtifact”, and “Local”.
source.git.url - Specify the source location using a Git repository.
source.git.cloneSecret - For private repositories or registries, the name references a secret in the namespace that contains the SSH private key or Docker access credentials, respectively.
source.git.revision - A specific revision to select from the source repository, this can be a commit, tag or branch name. If not defined, it will fall back to the Git repository default branch.
source.contextDir - For repositories where the source code is not located at the root folder, you can specify this path here.
By default, the Build controller does not validate that the Git repository exists. If the validation is desired, users can explicitly define the build.shipwright.io/verify.repository annotation with true. For example:
Example of a Build with the build.shipwright.io/verify.repository annotation to enable the spec.source.git.url validation.
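A sketch of such a Build (the repository URL, context directory, and strategy reference are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
  annotations:
    build.shipwright.io/verify.repository: "true"   # enables the spec.source.git.url validation
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
```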
Note: The Build controller only validates two scenarios. The first one is when the endpoint uses an http/https protocol. The second one is when an ssh protocol such as git@ has been defined but a referenced secret, such as source.git.cloneSecret, has not been provided.
Example of a Build with a source with credentials defined by the user.
A Build resource can specify paramValues for parameters that are defined in the referenced BuildStrategy. You specify these parameter values to control how the steps of the build strategy behave. You can overwrite values in the BuildRun resource. See the related documentation for more information.
The build strategy author can define a parameter as either a simple string or an array. Depending on that, you must specify the value accordingly. The build strategy parameter can be specified with a default value. You must specify a value in the Build or BuildRun for parameters without a default.
You can either specify values directly or reference keys from ConfigMaps and Secrets. Note: the usage of ConfigMaps and Secrets is limited by the usage of the parameter in the build strategy steps. You can only use them if the parameter is used in the command, arguments, or environment variable values.
When using paramValues, users should avoid:
Defining a spec.paramValues name that doesn’t match one of the spec.parameters defined in the BuildStrategy.
Defining a spec.paramValues name that collides with the Shipwright reserved parameters. These are BUILDER_IMAGE, DOCKERFILE, CONTEXT_DIR, and any name starting with shp-.
In general, paramValues are tightly bound to Strategy parameters. Please make sure you understand the contents of your strategy of choice before defining paramValues in the Build.
```yaml
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: buildkit
  ...
spec:
  parameters:
  - name: build-args
    description: "The ARG values in the Dockerfile. Values must be in the format KEY=VALUE."
    type: array
    defaults: []
  - name: cache
    description: "Configure BuildKit's cache usage. Allowed values are 'disabled' and 'registry'. The default is 'registry'."
    type: string
    default: registry
  ...
  steps:
  ...
```
The cache parameter is a simple string. You can provide it like this in your Build:
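A minimal sketch (the Build name is illustrative, and the source and output sections are omitted for brevity):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
spec:
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  paramValues:
  - name: cache        # string parameter defined by the strategy
    value: disabled
```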
The build-args parameter is defined as an array. In the BuildKit strategy, you use build-args to set the ARG values in the Dockerfile, specified as key-value pairs separated by an equals sign, for example, NODE_VERSION=16. Your Build then looks like this (the value for cache is retained to outline how multiple paramValues can be set):
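A sketch of such a spec fragment. The ConfigMap name project-configuration, the Secret name npm-registry-access, and the ${CONFIGMAP_VALUE}/${SECRET_VALUE} placeholder syntax in format are assumptions for illustration:

```yaml
spec:
  paramValues:
  - name: cache
    value: registry
  - name: build-args
    values:
    - configMapValue:
        name: project-configuration        # hypothetical ConfigMap
        key: node-version
        format: NODE_VERSION=${CONFIGMAP_VALUE}
    - value: DEBUG_LEVEL=2                 # hard-coded value
    - secretValue:
        name: npm-registry-access          # hypothetical Secret
        key: npm-auth-token
        format: NPM_AUTH_TOKEN=${SECRET_VALUE}
```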
Here, we pass three items in the build-args array:
The first item references a ConfigMap. Because the ConfigMap just contains the value (for example "16") as the data of the node-version key, the format setting is used to prepend NODE_VERSION= to make it a complete key-value pair.
The second item is just a hard-coded value.
The third item references a Secret, the same as with ConfigMaps.
Note: The logging output of BuildKit contains expanded ARGs in RUN commands. Also, such information ends up in the final container image if you use such args in the final stage of your Dockerfile. An alternative approach to pass secrets is using secret mounts. The BuildKit sample strategy supports them using the secrets parameter.
Defining the Builder or Dockerfile
In the Build resource, you use the parameters (spec.paramValues) to specify the image that contains the tools to build the final image. For example, the following Build definition specifies a Dockerfile image.
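A sketch of such a paramValues fragment, assuming the strategy defines a dockerfile parameter (the file name is illustrative):

```yaml
spec:
  paramValues:
  - name: dockerfile          # assumes the strategy exposes this parameter
    value: Dockerfile.production
```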
A Build resource can specify the output where it should push the image. For external private registries, it is recommended to specify a secret with the related data to access it. Options are available to specify annotations and labels for the output image. The annotations and labels mentioned here are specific to the container image and do not relate to the Build annotations. Analogously, the timestamp refers to the timestamp of the output image.
Note: When you specify annotations, labels, or timestamp, the output image may get pushed twice, depending on the respective strategy. For example, strategies that push the image to the registry as part of their build step will lead to an additional push of the image in case image processing like labels is configured. If you have automation based on push events in your container registry, be aware of this behavior.
For example, the user specifies a public registry:
Example of a user specifying image annotations and labels:
```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: s2i-nodejs-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-nodejs
    contextDir: source-build/
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
  paramValues:
  - name: builder-image
    value: "docker.io/centos/nodejs-10-centos7"
  output:
    image: us.icr.io/source-to-image-build/nodejs-ex
    pushSecret: icr-knbuild
    annotations:
      "org.opencontainers.image.source": "https://github.com/org/repo"
      "org.opencontainers.image.url": "https://my-company.com/images"
    labels:
      "maintainer": "team@my-company.com"
      "description": "This is my cool image"
```
Example of a user-specified image timestamp, set to SourceTimestamp so that the output image timestamp matches the timestamp of the Git commit used for the build:
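A sketch of the relevant output fragment (the image reference is illustrative):

```yaml
spec:
  output:
    image: registry.example.com/my-org/my-image:latest  # illustrative image reference
    timestamp: SourceTimestamp
```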
vulnerabilityScan provides configuration to run a vulnerability scan for your generated image.
vulnerabilityScan.enabled - Specifies whether to run a vulnerability scan for the image. The supported values are true and false.
vulnerabilityScan.failOnFinding - Indicates whether to fail the build run if the vulnerability scan finds vulnerabilities. The supported values are true and false. This field is optional and false by default.
vulnerabilityScan.ignore.issues - References the security issues to be ignored in the vulnerability scan.
vulnerabilityScan.ignore.severity - Denotes the severity levels of security issues to be ignored. Valid values are:
low: excludes low severity vulnerabilities, displaying only medium, high and critical vulnerabilities
medium: excludes low and medium severity vulnerabilities, displaying only high and critical vulnerabilities
high: excludes low, medium and high severity vulnerabilities, displaying only the critical vulnerabilities
vulnerabilityScan.ignore.unfixed - Indicates whether to ignore vulnerabilities for which no fix exists. The supported values are true and false.
Example of user specified image vulnerability scanning options:
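A sketch of the output fragment using the fields listed above (the image reference and the CVE identifier are illustrative):

```yaml
spec:
  output:
    image: registry.example.com/my-org/my-image:latest  # illustrative image reference
    vulnerabilityScan:
      enabled: true
      failOnFinding: true
      ignore:
        issues:
        - CVE-2022-12345   # hypothetical issue ID to ignore
        severity: low      # ignore low severity findings
        unfixed: true      # ignore findings without an available fix
```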
A Build resource can specify how long a completed BuildRun can exist and how many failed or succeeded BuildRuns should exist. Instead of manually cleaning up old BuildRuns, retention parameters provide an alternate method for cleaning up BuildRuns automatically.
As part of the retention parameters, we have the following fields:
retention.atBuildDeletion - Defines if all related BuildRuns need to be deleted when deleting the Build. The default is false.
retention.succeededLimit - Defines the number of succeeded BuildRuns for a Build that can exist.
retention.failedLimit - Defines the number of failed BuildRuns for a Build that can exist.
retention.ttlAfterFailed - Specifies the duration for which a failed buildrun can exist.
retention.ttlAfterSucceeded - Specifies the duration for which a successful buildrun can exist.
An example of a user using both TTL and Limit retention fields. With such a configuration, a BuildRun will get deleted once the first criterion is met.
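A sketch of such a retention configuration (the Build name and the concrete durations and limits are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: build-retention-sample
spec:
  retention:
    ttlAfterFailed: 30m    # failed BuildRuns are deleted 30 minutes after completion
    ttlAfterSucceeded: 1h  # succeeded BuildRuns are deleted 1 hour after completion
    failedLimit: 10        # at most 10 failed BuildRuns are kept
    succeededLimit: 20     # at most 20 succeeded BuildRuns are kept
```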
Note: When changes are made to retention.failedLimit and retention.succeededLimit values, they come into effect as soon as the build is applied, thereby enforcing the new limits. On the other hand, changing the retention.ttlAfterFailed and retention.ttlAfterSucceeded values will only affect new buildruns. Old buildruns will adhere to the old TTL retention values. In case TTL values are defined in buildrun specifications as well as build specifications, priority will be given to the values defined in the buildrun specifications.
Defining Volumes
Builds can declare volumes. These must override volumes defined by the referenced BuildStrategy. If a volume is not overridable, the BuildRun will eventually fail.
Volumes follow the declaration of Pod Volumes, so
all the usual volumeSource types are supported.
Here is an example of Build object that overrides volumes:
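A sketch of such an override, assuming the referenced strategy declares an overridable volume named varlibcontainers (all names are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: build-name
spec:
  volumes:
  - name: varlibcontainers   # must match an overridable volume in the strategy
    emptyDir: {}
```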
Using triggers, you can submit BuildRun instances when certain events happen. The idea is to trigger Shipwright builds in an event-driven fashion; for that purpose, you can watch certain types of events.
Note: triggers rely on the Shipwright Triggers project to be deployed and configured in the same Kubernetes cluster where you run Shipwright Build. If it is not set up, the triggers defined in a Build are ignored.
The types of events under watch are defined on the .spec.trigger attribute, please consider the following example:
Certain types of events will use attributes defined on .spec.source to complete the information needed in order to dispatch events.
GitHub
The GitHub type is meant to react to events coming from the GitHub WebHook interface. The events are compared against the existing Build resources, so Build objects can be identified based on .spec.source.git.url combined with the attributes on .spec.trigger.when[].github.
To identify a given Build object, the first criterion is the repository URL; then the branch name listed on the GitHub event payload must also match, following these criteria:
First, the branch name is checked against the .spec.trigger.when[].github.branches entries
If the .spec.trigger.when[].github.branches is empty, the branch name is compared against .spec.source.git.revision
If spec.source.git.revision is empty, the default revision name is used (“main”)
The following snippet shows a configuration matching Push and PullRequest events on the main branch, for example:
```yaml
# [...]
spec:
  source:
    git:
      url: https://github.com/shipwright-io/sample-go
  trigger:
    when:
    - name: push and pull-request on the main branch
      type: GitHub
      github:
        events:
        - Push
        - PullRequest
        branches:
        - main
```
Image
In combination with the Image controller, you can watch container images and trigger new builds when those image names change.
For instance, let's imagine the image named ghcr.io/some/base-image is used as input for the Build process, and every time it changes we would like to trigger a new build. Consider the following snippet:
```yaml
# [...]
spec:
  trigger:
    when:
    - name: watching for the base-image changes
      type: Image
      image:
        names:
        - ghcr.io/some/base-image:latest
```
Tekton Pipeline
Shipwright can also be used in combination with Tekton Pipelines. You can configure the Build to watch for Pipeline resources in Kubernetes, reacting when the object reaches the desired status (.objectRef.status). The object is identified either by its name (.objectRef.name) or by a label selector (.objectRef.selector). The example below uses the label selector approach:
```yaml
# [...]
spec:
  trigger:
    when:
    - name: watching over for the Tekton Pipeline
      type: Pipeline
      objectRef:
        status:
        - Succeeded
        selector:
          label: value
```
While the next snippet uses the object name for identification:
```yaml
# [...]
spec:
  trigger:
    when:
    - name: watching over for the Tekton Pipeline
      type: Pipeline
      objectRef:
        status:
        - Succeeded
        name: tekton-pipeline-name
```
BuildRun Deletion
A Build can automatically delete its related BuildRuns. To enable this feature, set spec.retention.atBuildDeletion to true in the Build instance. The default value is false. See an example of how to define this field:
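A sketch of the relevant fragment (the Build name is illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: build-name
spec:
  retention:
    atBuildDeletion: true   # related BuildRuns are deleted together with this Build
```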
The resource BuildRun (buildruns.shipwright.io/v1beta1) is the build process of a Build resource definition executed in Kubernetes.
A BuildRun resource allows the user to define:
The BuildRun name, through which the user can monitor the status of the image construction.
A referenced Build instance to use during the build construction.
A service account for hosting all related secrets to build the image.
A BuildRun is available within a namespace.
BuildRun Controller
The controller watches for:
Updates on a Build resource (CRD instance)
Updates on a TaskRun resource (CRD instance)
When the controller reconciles it:
Looks for any existing owned TaskRuns and updates its parent BuildRun status.
Retrieves the specified service account and sets it up with the specified output secret on the Build resource.
If one does not exist, it generates a new Tekton TaskRun and sets a reference to this resource (as a child of the controller).
On any subsequent updates on the TaskRun, the controller will update the parent BuildRun resource instance.
Configuring a BuildRun
The BuildRun definition supports the following fields:
Required:
apiVersion - Specifies the API version, for example shipwright.io/v1beta1.
kind - Specifies the Kind type, for example BuildRun.
metadata - Metadata that identify the CRD instance, for example the name of the BuildRun.
Optional:
spec.build.name - Specifies an existing Build resource instance to use.
spec.build.spec - Specifies an embedded (transient) Build resource to use.
spec.serviceAccount - Refers to the SA to use when building the image. (defaults to the default SA)
spec.timeout - Defines a custom timeout. The value needs to be parsable by ParseDuration, for example, 5m. The value overwrites the value that is defined in the Build.
spec.paramValues - Refers to a name-value(s) list to specify values for parameters defined in the BuildStrategy. This value overwrites values defined with the same name in the Build.
spec.output.image - Refers to a custom location where the generated image would be pushed. The value will overwrite the output.image value defined in Build. (Note: other properties of the output, for example, the credentials, cannot be specified in the buildRun spec. )
spec.output.pushSecret - Reference an existing secret to get access to the container registry. This secret will be added to the service account along with the ones requested by the Build.
spec.output.timestamp - Overrides the output timestamp configuration of the referenced build to instruct the build to change the output image creation timestamp to the specified value. When omitted, the respective build strategy tool defines the output image timestamp.
spec.output.vulnerabilityScan - Overrides the output vulnerabilityScan configuration of the referenced build to run the vulnerability scan for the generated image.
spec.env - Specifies additional environment variables that should be passed to the build container. Overrides any environment variables that are specified in the Build resource. The available variables depend on the tool used by the chosen build strategy.
spec.nodeSelector - Specifies a selector which must match a node’s labels for the build pod to be scheduled on that node.
Note: The spec.build.name and spec.build.spec are mutually exclusive. Furthermore, the overrides for timeout, paramValues, output, and env can only be combined with spec.build.name, but not with spec.build.spec.
Defining the Build Reference
A BuildRun resource can reference a Build resource, that indicates what image to build. For example:
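A minimal sketch of such a reference (the names are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildpacks-v3-buildrun
spec:
  build:
    name: buildpacks-v3-build   # an existing Build in the same namespace
```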
BuildRuns support the specification of a Local type source. This is useful when working in development mode, without forcing a user to commit/push changes to their version control system. For more information please refer to SHIP 0016 - enabling local source code.
A BuildRun resource can define paramValues for parameters specified in the build strategy. If a value has been provided for a parameter with the same name in the Build already, then the value from the BuildRun will have precedence.
For example, the following BuildRun overrides the value for the sleep-time param, which is defined in the a-build Build resource.
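A sketch of such an override, assuming the referenced strategy defines a sleep-time parameter (the names and value are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: a-buildrun
spec:
  build:
    name: a-build
  paramValues:
  - name: sleep-time   # takes precedence over the value set in the Build
    value: "60"
```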
You can also set the value of spec.serviceAccount to ".generate". This will generate the service account during runtime for you. The name of the generated service account is the same as that of the BuildRun.
Note: When the service account is not defined, the BuildRun uses the pipeline service account if it exists in the namespace, and falls back to the default service account.
Defining Retention Parameters
A BuildRun resource can specify how long a completed BuildRun can exist. Instead of manually cleaning up old BuildRuns, retention parameters provide an alternate method for cleaning up BuildRuns automatically.
As part of the buildrun retention parameters, we have the following fields:
retention.ttlAfterFailed - Specifies the duration for which a failed buildrun can exist.
retention.ttlAfterSucceeded - Specifies the duration for which a successful buildrun can exist.
An example of a user using buildrun TTL parameters.
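A sketch of such a BuildRun (the names and durations are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildrun-retention-ttl
spec:
  build:
    name: build-retention-ttl
  retention:
    ttlAfterFailed: 10m     # delete this BuildRun 10 minutes after it fails
    ttlAfterSucceeded: 10m  # delete this BuildRun 10 minutes after it succeeds
```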
Note: In case TTL values are defined in buildrun specifications as well as build specifications, priority will be given to the values defined in the buildrun specifications.
Defining Volumes
BuildRuns can declare volumes. They must override volumes defined by the referenced BuildStrategy. If a volume is not overridable, the BuildRun will eventually fail.
If both the Build and a BuildRun that references it override the same volume, the one defined in the BuildRun is used.
Volumes follow the declaration of Pod Volumes, so all the usual volumeSource types are supported.
Here is an example of BuildRun object that overrides volumes:
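A sketch of such an override, assuming the strategy declares an overridable volume named varlibcontainers (all names are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildrun-name
spec:
  build:
    name: build-name
  volumes:
  - name: varlibcontainers   # must be marked overridable in the strategy
    emptyDir: {}
```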
We have two controllers that ensure that buildruns can be deleted automatically if required. This is ensured by adding retention parameters in either the build specifications or the buildrun specifications.
Buildrun TTL parameters: These are used to make sure that buildruns exist for a fixed duration of time after completion.
buildrun.spec.retention.ttlAfterFailed: The buildrun is deleted if the mentioned duration of time has passed and the buildrun has failed.
buildrun.spec.retention.ttlAfterSucceeded: The buildrun is deleted if the mentioned duration of time has passed and the buildrun has succeeded.
Build TTL parameters: These are used to make sure that related buildruns exist for a fixed duration of time after completion.
build.spec.retention.ttlAfterFailed: The buildrun is deleted if the mentioned duration of time has passed and the buildrun has failed.
build.spec.retention.ttlAfterSucceeded: The buildrun is deleted if the mentioned duration of time has passed and the buildrun has succeeded.
Build Limit parameters: These are used to limit the number of related buildruns that can exist.
build.spec.retention.succeededLimit - Defines number of succeeded BuildRuns for a Build that can exist.
build.spec.retention.failedLimit - Defines number of failed BuildRuns for a Build that can exist.
Specifying Environment Variables
An example of a BuildRun that specifies environment variables:
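A sketch of such a BuildRun (the variable name and value are hypothetical):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildrun-name
spec:
  build:
    name: build-name
  env:
  - name: EXAMPLE_VAR        # hypothetical variable passed to the build container
    value: example-value
```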
The BuildRun resource is updated as soon as the current image building status changes:
```
$ kubectl get buildrun buildpacks-v3-buildrun

NAME                     SUCCEEDED   REASON    MESSAGE   STARTTIME   COMPLETIONTIME
buildpacks-v3-buildrun   Unknown     Pending   Pending   1s
```
And finally:
```
$ kubectl get buildrun buildpacks-v3-buildrun

NAME                     SUCCEEDED   REASON      MESSAGE                             STARTTIME   COMPLETIONTIME
buildpacks-v3-buildrun   True        Succeeded   All Steps have completed executing   4m28s       16s
```
The above allows users to get an overview of the building mechanism state.
Understanding the state of a BuildRun
A BuildRun resource stores the relevant information regarding the object’s state under status.conditions.
Conditions allow users to quickly understand the resource state without needing to understand resource-specific details.
For the BuildRun, we use a Condition of the type Succeeded, which is a well-known type for resources that run to completion.
The status.conditions hosts different fields, like status, reason and message. Users can expect these fields to be populated with relevant information.
The following table illustrates the different states a BuildRun can have under its status.conditions:
| Status | Reason | CompletionTime is set | Description |
| --- | --- | --- | --- |
| Unknown | Pending | No | The BuildRun is waiting on a Pod in status Pending. |
| Unknown | Running | No | The BuildRun has been validated and started to perform its work. |
| Unknown | BuildRunCanceled | No | The user requested the BuildRun to be canceled. This results in the BuildRun controller requesting the TaskRun be canceled. Cancellation has not been done yet. |
| True | Succeeded | Yes | The BuildRun Pod is done. |
| False | Failed | Yes | The BuildRun failed in one of the steps. |
| False | BuildRunTimeout | Yes | The BuildRun timed out. |
| False | UnknownStrategyKind | Yes | The Build specified strategy Kind is unknown. (options: ClusterBuildStrategy or BuildStrategy) |
| False | ClusterBuildStrategyNotFound | Yes | The referenced cluster strategy was not found in the cluster. |
| False | BuildStrategyNotFound | Yes | The referenced namespaced strategy was not found in the cluster. |
| False | SetOwnerReferenceFailed | Yes | Setting owner references from the BuildRun to the related TaskRun failed. |
| False | TaskRunIsMissing | Yes | The TaskRun related to the BuildRun was not found. |
| False | TaskRunGenerationFailed | Yes | The generation of a TaskRun spec failed. |
| False | MissingParameterValues | Yes | No value has been provided for some parameters that are defined in the build strategy without any default. Values for those parameters must be provided through the Build or the BuildRun. |
| False | RestrictedParametersInUse | Yes | A value for a system parameter was provided. This is not allowed. |
| False | UndefinedParameter | Yes | A value was provided for a parameter that is not defined in the build strategy. |
| False | WrongParameterValueType | Yes | A value was provided for a build strategy parameter using the wrong type. The parameter is defined as array or string in the build strategy; depending on that, you must provide values or a direct value. |
| False | InconsistentParameterValues | Yes | A value for a parameter contained more than one of value, configMapValue, and secretValue. Any value, including array items, must provide only one of them. |
| False | EmptyArrayItemParameterValues | Yes | An item inside the values of an array parameter contained none of value, configMapValue, and secretValue. Exactly one of them must be provided. Null array items are not allowed. |
| False | IncompleteConfigMapValueParameterValues | Yes | A value for a parameter contained a configMapValue where the name or the value was empty. You must specify them to point to an existing ConfigMap key in your namespace. |
| False | IncompleteSecretValueParameterValues | Yes | A value for a parameter contained a secretValue where the name or the value was empty. You must specify them to point to an existing Secret key in your namespace. |
| False | ServiceAccountNotFound | Yes | The referenced service account was not found in the cluster. |
| False | BuildRegistrationFailed | Yes | The related Build in the BuildRun is in a Failed state. |
| False | BuildNotFound | Yes | The related Build in the BuildRun was not found. |
| False | BuildRunCanceled | Yes | The BuildRun and underlying TaskRun were canceled successfully. |
| False | BuildRunNameInvalid | Yes | The defined BuildRun name (metadata.name) is invalid. The BuildRun name should be a valid label value. |
| False | BuildRunNoRefOrSpec | Yes | The BuildRun has neither spec.build.name nor spec.build.spec defined. There is no connection to a Build specification. |
| False | BuildRunAmbiguousBuild | Yes | The defined BuildRun uses both spec.build.name and spec.build.spec. Only one of them is allowed at the same time. |
| False | BuildRunBuildFieldOverrideForbidden | Yes | The defined BuildRun uses an override (e.g. timeout, paramValues, output, or env) in combination with spec.build.spec, which is not allowed. Use spec.build.spec to directly specify the respective value. |

A BuildRun can also end in a Failed condition because its Pod failed when a step went out of memory.
Note: We heavily rely on the Tekton TaskRun Conditions for populating the BuildRun ones, with some exceptions.
Understanding failed BuildRuns
To make it easier to understand why a BuildRun failed, users can infer the pod and container where the failure took place from the status.failureDetails field.
In addition, status.conditions hosts a compact message under its message field that contains the kubectl command to retrieve the logs.
The status.failureDetails field also includes a detailed failure reason and message, if the build strategy provides them.
Example of failed BuildRun:
```yaml
# [...]
status:
  # [...]
  failureDetails:
    location:
      container: step-source-default
      pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr
    message: The source repository does not exist, or you have insufficient permission
      to access it.
    reason: GitRemotePrivate
```
Understanding failed BuildRuns due to VulnerabilitiesFound
A BuildRun can fail if the vulnerability scan finds vulnerabilities in the generated image and failOnFinding is set to true in the vulnerabilityScan settings. For setting vulnerabilityScan, see here.
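As a sketch, such a configuration could look like this in the Build output (the exact field layout should be verified against the vulnerabilityScan documentation linked above):

```yaml
spec:
  output:
    vulnerabilityScan:
      enabled: true
      failOnFinding: true
```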
Example of failed BuildRun due to vulnerabilities present in the image:
```yaml
# [...]
status:
  # [...]
  conditions:
  - type: Succeeded
    lastTransitionTime: "2024-03-12T20:00:38Z"
    status: "False"
    reason: VulnerabilitiesFound
    message: "Vulnerabilities have been found in the output image. For detailed information, check buildrun status or see kubectl --namespace default logs vuln-s6skc-v7wd2-pod --container step-image-processing"
```
Understanding failed git-source step
All git-related operations support error reporting via status.failureDetails. The following table explains the possible error reasons:

| Reason | Description |
| --- | --- |
| GitAuthInvalidUserOrPass | Basic authentication has failed. Check your username or password. Note: GitHub requires a personal access token instead of your regular password. |
| GitAuthInvalidKey | The key is invalid for the specified target. Please make sure that the Git repository exists, you have sufficient permissions, and the key is in the right format. |
| GitRevisionNotFound | The remote revision does not exist. Check the revision specified in your Build. |
| GitRemoteRepositoryNotFound | The source repository does not exist, or you have insufficient permissions to access it. |
| GitRemoteRepositoryPrivate | You are trying to access a non-existing or private repository without having sufficient permissions to access it via HTTPS. |
| GitBasicAuthIncomplete | Basic Auth incomplete: both username and password must be configured. |
| GitSSHAuthUnexpected | Credential/URL inconsistency: SSH credentials were provided, but the URL is not an SSH Git URL. |
| GitSSHAuthExpected | Credential/URL inconsistency: no SSH credentials were provided, but the URL is an SSH Git URL. |
| GitError | The specific error reason is unknown. Check the error message for more information. |
Step Results in BuildRun Status
After a BuildRun completes, its .status field contains the results (.status.taskResults) emitted from the TaskRun steps generated by the BuildRun controller as part of processing the BuildRun. These results contain valuable metadata for users, like the image digest or the commit SHA of the source code used for building.
The results from the source step are surfaced in .status.sources, and the results from the output step are surfaced in the .status.output field of a BuildRun.
Example of a BuildRun with surfaced results for git source (note that the branchName is only included if the Build does not specify any revision):
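A sketch of what such surfaced results can look like (the digest, size, and commit values are placeholders, and the exact shape of the source results depends on the API version in use):

```yaml
# [...]
status:
  # [...]
  output:
    digest: sha256:<image-digest>
    size: 1989004
  sources:
  - name: default
    git:
      commitAuthor: Jane Doe
      commitSha: f25822b85021d02059c9ac8a211ef3804ea8fdde
      branchName: main
```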
Note: The vulnerability scan will only run if it is specified in the build or buildrun spec. See Defining the vulnerabilityScan.
Build Snapshot
For every BuildRun controller reconciliation, the buildSpec in the status of the BuildRun is updated if an existing owned TaskRun is present. During this update, a Build resource snapshot is generated and embedded into the status.buildSpec path of the BuildRun. A buildSpec is just a copy of the original Build spec from which the BuildRun executed a particular image build. The snapshot approach allows developers to see the original Build configuration.
Relationship with Tekton Tasks
The BuildRun resource abstracts the image construction by delegating this work to the Tekton Pipeline TaskRun. Compared to a Tekton Pipeline Task, a TaskRun runs all steps until completion of the Task or until a failure occurs in the Task.
During reconciliation, the BuildRun controller generates a new TaskRun. The controller embeds in the TaskRun definition the required steps to execute. These steps are defined in the strategy referenced by the Build resource, either a ClusterBuildStrategy or a BuildStrategy.
2.4 - Authentication during builds
The following document introduces the different authentication methods that can take place during an image build when using the Build controller.
There are two places where users might need to define authentication when building images. Authentication to a container registry is the most common one, but users might also need to define authentication for pulling source code from Git. Overall, authentication is done by defining secrets in which the required sensitive data is stored.
Build Secrets Annotation
Users need to add the annotation build.shipwright.io/referenced.secret: "true" to a build secret so that the build controller can decide to take a reconcile action when a secret event (create, update, or delete) happens. Below is a secret example with the build annotation:
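A minimal sketch of such a secret (the name and secret type are illustrative; any secret type referenced by a Build works the same way):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-docker
  annotations:
    build.shipwright.io/referenced.secret: "true"
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded Docker config>
```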
This annotation helps filter out secrets that are not referenced on a Build instance. That means if a secret does not have this annotation, the Build controller will not reconcile even when events happen on that secret. Reconciling on secret events allows the Build controller to re-trigger validations on the Build configuration, allowing users to understand if a dependency is missing.
If you are using kubectl to create secrets, you can first create the build secret using the kubectl create secret command and then annotate it using kubectl annotate secrets. Below is an example:
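For example (the secret name, namespace, and credentials are placeholders):

```shell
kubectl -n <namespace> create secret docker-registry example-secret \
  --docker-server=<registry-host> \
  --docker-username=<username> \
  --docker-password=<password>
kubectl -n <namespace> annotate secrets example-secret build.shipwright.io/referenced.secret='true'
```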
There are two ways of authenticating to Git (applies to both GitHub and GitLab): SSH and basic authentication.
SSH authentication
For SSH authentication you must use the Tekton annotations to specify the hostname(s) of the Git repository providers that you use. This is github.com for GitHub, or gitlab.com for GitLab.
As seen in the following example, there are three things to notice:
The Kubernetes secret should be of the type kubernetes.io/ssh-auth
The data.ssh-privatekey can be generated with, for example, base64 < ~/.ssh/id_rsa, where ~/.ssh/id_rsa is the key used to authenticate into Git.
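A sketch of such a secret (the name is illustrative; the tekton.dev/git-0 annotation carries the provider hostname):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-ssh-auth
  annotations:
    build.shipwright.io/referenced.secret: "true"
    tekton.dev/git-0: github.com
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: <base64-encoded private key>
```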
Basic authentication
Basic authentication is very similar to the SSH one, but with the following differences:
The Kubernetes secret should be of the type kubernetes.io/basic-auth
The stringData should host your user and personal access token in clear text.
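A sketch of such a secret (the name is illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-github-basic-auth
  annotations:
    build.shipwright.io/referenced.secret: "true"
type: kubernetes.io/basic-auth
stringData:
  username: <github-user>
  password: <personal-access-token>
```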
Note: GitHub and GitLab no longer accept account passwords when authenticating Git operations.
Instead, you must use token-based authentication for all authenticated Git operations. You can create your own personal access token on GitHub and GitLab.
With the right secret in place (note: ensure the secret is created in the proper Kubernetes namespace), users should reference it in their Build YAML definitions.
Depending on the secret type, there are two ways of doing this:
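As a sketch, a Build can reference a Git secret like this (assuming the v1beta1 API, where the secret is referenced via spec.source.git.cloneSecret; the names and URL are placeholders):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: sample-build
spec:
  source:
    git:
      url: git@github.com:example-org/sample.git
      cloneSecret: secret-ssh-auth
```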
Note: When generating a secret to access Docker Hub, the REGISTRY_HOST value should be https://index.docker.io/v1/, and the username is the Docker ID.
Note: The value of PASSWORD can be your Docker Hub password or an access token. A Docker access token can be created via Account Settings, then Security in the sidebar, and the New Access Token button.
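Following those notes, a Docker Hub push secret could be created like this (the secret name and credentials are placeholders):

```shell
kubectl -n <namespace> create secret docker-registry push-secret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<docker-id> \
  --docker-password=<password-or-access-token>
kubectl -n <namespace> annotate secrets push-secret build.shipwright.io/referenced.secret='true'
```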
Usage of registry secret
With the right secret in place (note: Ensure creation of secret in the proper Kubernetes namespace), users should reference it on their Build YAML definitions.
For container registries, the secret should be placed under the spec.output.pushSecret path.
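For example (the image and secret name are placeholders):

```yaml
spec:
  output:
    image: docker.io/<docker-id>/sample:latest
    pushSecret: push-secret
```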
See more information in the official Tekton documentation for authentication.
2.5 - Configuration
Controller Settings
The controller is installed into Kubernetes with reasonable defaults. However, there are some settings that can be overridden using environment variables in controller.yaml.
The following environment variables are available:
| Environment Variable | Description |
| --- | --- |
| CTX_TIMEOUT | Override the default context timeout used for all Custom Resource Definition reconciliation operations. Default is 5 (seconds). |
| REMOTE_ARTIFACTS_CONTAINER_IMAGE | Specify the container image used for the .spec.sources remote artifacts download. By default it uses quay.io/quay/busybox:latest. |
| TERMINATION_LOG_PATH | Path of the termination log, where the controller application will write the reason for its termination. Default value is /dev/termination-log. |
| GIT_ENABLE_REWRITE_RULE | Enable the Git wrapper to set up a URL insteadOf Git config rewrite rule for the respective source URL hostname. Default is false. |
| GIT_CONTAINER_TEMPLATE | JSON representation of a Container template that is used for steps that clone a Git repository. Default is `{"image": "ghcr.io/shipwright-io/build/git:latest", "command": ["/ko-app/git"], "env": [{"name": "HOME", "value": "/shared-home"}], "securityContext": {"allowPrivilegeEscalation": false, "capabilities": {"drop": ["ALL"]}, "runAsUser": 1000, "runAsGroup": 1000}}` [1]. The following properties are ignored as they are set by the controller: args, name. |
| GIT_CONTAINER_IMAGE | Custom container image for Git clone steps. If GIT_CONTAINER_TEMPLATE also specifies an image, then the value of GIT_CONTAINER_IMAGE takes precedence. |
| BUNDLE_CONTAINER_TEMPLATE | JSON representation of a Container template that is used for steps that pull a bundle image to obtain the packaged source code. Default is `{"image": "ghcr.io/shipwright-io/build/bundle:latest", "command": ["/ko-app/bundle"], "env": [{"name": "HOME", "value": "/shared-home"}], "securityContext": {"allowPrivilegeEscalation": false, "capabilities": {"drop": ["ALL"]}, "runAsUser": 1000, "runAsGroup": 1000}}` [1]. The following properties are ignored as they are set by the controller: args, name. |
| BUNDLE_CONTAINER_IMAGE | Custom container image that pulls a bundle image to obtain the packaged source code. If BUNDLE_CONTAINER_TEMPLATE also specifies an image, then the value of BUNDLE_CONTAINER_IMAGE takes precedence. |
| IMAGE_PROCESSING_CONTAINER_TEMPLATE | JSON representation of a Container template that is used for steps that process the image. Default is `{"image": "ghcr.io/shipwright-io/build/image-processing:latest", "command": ["/ko-app/image-processing"], "env": [{"name": "HOME", "value": "/shared-home"}], "securityContext": {"allowPrivilegeEscalation": false, "capabilities": {"add": ["DAC_OVERRIDE"], "drop": ["ALL"]}, "runAsUser": 0, "runAsGroup": 0}}`. The following properties are ignored as they are set by the controller: args, name. |
| IMAGE_PROCESSING_CONTAINER_IMAGE | Custom container image that is used for steps that process the image. If IMAGE_PROCESSING_CONTAINER_TEMPLATE also specifies an image, then the value of IMAGE_PROCESSING_CONTAINER_IMAGE takes precedence. |
| WAITER_CONTAINER_TEMPLATE | JSON representation of a Container template that waits for local source code to be uploaded to it. Default is `{"image": "ghcr.io/shipwright-io/build/waiter:latest", "command": ["/ko-app/waiter"], "args": ["start"], "env": [{"name": "HOME", "value": "/shared-home"}], "securityContext": {"allowPrivilegeEscalation": false, "capabilities": {"drop": ["ALL"]}, "runAsUser": 1000, "runAsGroup": 1000}}`. The following properties are ignored as they are set by the controller: args, name. |
| WAITER_CONTAINER_IMAGE | Custom container image that waits for local source code to be uploaded to it. If WAITER_CONTAINER_TEMPLATE also specifies an image, then the value of WAITER_CONTAINER_IMAGE takes precedence. |
| BUILD_CONTROLLER_LEADER_ELECTION_NAMESPACE | Set the namespace used to store the shipwright-build-controller lock. By default it is the same namespace as the controller itself. |
| BUILD_CONTROLLER_LEASE_DURATION | Override the LeaseDuration, which is the duration that non-leader candidates will wait to force acquire leadership. |
| BUILD_CONTROLLER_RENEW_DEADLINE | Override the RenewDeadline, which is the duration that the acting leader will retry refreshing leadership before giving up. |
| BUILD_CONTROLLER_RETRY_PERIOD | Override the RetryPeriod, which is the duration the LeaderElector clients should wait between tries of actions. |
| BUILD_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the Build controller. A value of 0 or lower uses the default from the controller-runtime controller Options. Default is 0. |
| BUILDRUN_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the BuildRun controller. A value of 0 or lower uses the default from the controller-runtime controller Options. Default is 0. |
| BUILDSTRATEGY_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the BuildStrategy controller. A value of 0 or lower uses the default from the controller-runtime controller Options. Default is 0. |
| CLUSTERBUILDSTRATEGY_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the ClusterBuildStrategy controller. A value of 0 or lower uses the default from the controller-runtime controller Options. Default is 0. |
| KUBE_API_BURST | Burst to use for the Kubernetes API client. See Config.Burst. A value of 0 or lower uses the default from client-go, which currently is 10. Default is 0. |
| KUBE_API_QPS | QPS to use for the Kubernetes API client. See Config.QPS. A value of 0 or lower uses the default from client-go, which currently is 5. Default is 0. |
| VULNERABILITY_COUNT_LIMIT | Holds the vulnerability count limit if the vulnerability scan is enabled for the output image. If it is defined as 10, only 10 vulnerabilities, sorted by severity, are shown in the BuildRun status output. Default is 50. |
Role-based Access Control
The release deployment YAML file includes two cluster-wide roles for using Shipwright Build objects.
The following roles are installed:
shipwright-build-aggregate-view: this role grants read access (get, list, watch) to most Shipwright Build objects.
This includes BuildStrategy, ClusterBuildStrategy, Build, and BuildRun objects.
This role is aggregated to the Kubernetes “view” role.
shipwright-build-aggregate-edit: this role grants write access (create, update, patch, delete) to Shipwright objects that are namespace-scoped.
This includes BuildStrategy, Build, and BuildRun objects.
Read access is granted to all ClusterBuildStrategy objects.
This role is aggregated to the Kubernetes “edit” and “admin” roles.
Only cluster administrators are granted write access to ClusterBuildStrategy objects.
This can be changed by creating a separate Kubernetes ClusterRole with these permissions and binding the role to appropriate users.
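A sketch of such a ClusterRole (the role name is illustrative; a matching ClusterRoleBinding would grant it to the chosen users):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterbuildstrategy-editor
rules:
- apiGroups: ["shipwright.io"]
  resources: ["clusterbuildstrategies"]
  verbs: ["create", "update", "patch", "delete"]
```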
[1] The runAsUser and runAsGroup are dynamically overwritten depending on the build strategy that is used. See Security Contexts for more information.
2.6 - Build Controller Metrics
The Build component exposes several metrics to help you monitor the health and behavior of your build resources.
The values have to be a comma-separated list of numbers. You need to set the environment variable for the build controller for your customization to become active. When running locally, set the variable right before starting the controller:
```shell
export PROMETHEUS_BR_COMP_DUR_BUCKETS=30,60,90,120,180,240,300,360,420,480
make local
```
When you deploy the build controller in a Kubernetes cluster, you need to extend the spec.containers[0].spec.env section of the sample deployment file, controller.yaml. Add another entry:
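Such an entry could look like this (a sketch; the value mirrors the bucket example above):

```yaml
env:
- name: PROMETHEUS_BR_COMP_DUR_BUCKETS
  value: "30,60,90,120,180,240,300,360,420,480"
```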
As the amount of buckets and labels has a direct impact on the number of Prometheus time series, you can selectively enable labels that you are interested in using the PROMETHEUS_ENABLED_LABELS environment variable. The supported labels are:
- buildstrategy
- namespace
- build
- buildrun
Use a comma-separated value to enable multiple labels. For example:
```shell
export PROMETHEUS_ENABLED_LABELS=namespace
make local
```
or
```shell
export PROMETHEUS_ENABLED_LABELS=buildstrategy,namespace,build
make local
```
When you deploy the build controller in a Kubernetes cluster, you need to extend the spec.containers[0].spec.env section of the sample deployment file, controller.yaml. Add another entry:
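Such an entry could look like this (a sketch; the value mirrors the label example above):

```yaml
env:
- name: PROMETHEUS_ENABLED_LABELS
  value: buildstrategy,namespace,build
```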
The build controller supports a pprof profiling mode, which is omitted from the binary by default. To use the profiling, use the controller image that was built with pprof enabled.
Enable pprof in the build controller
In the Kubernetes cluster, edit the shipwright-build-controller deployment to use the container tag with the debug suffix.
```shell
kubectl --namespace <namespace> set image \
  deployment/shipwright-build-controller \
  shipwright-build-controller="$(kubectl --namespace <namespace> get deployment shipwright-build-controller --output jsonpath='{.spec.template.spec.containers[].image}')-debug"
```
Connect go pprof to build controller
Depending on the respective setup, there could be multiple build controller pods for high availability reasons. In this case, you have to look up the current leader first. The following command can be used to verify the currently active leader:
The pprof endpoint is not exposed in the cluster and can only be used from inside the container. Therefore, set up port-forwarding to make the pprof port available locally.
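For example (the pod name is a placeholder; 8383 is assumed to be the controller's pprof port, matching the localhost:8383 address used below):

```shell
kubectl --namespace <namespace> port-forward <shipwright-build-controller-pod> 8383:8383
```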
Now, you can set up a local webserver to browse through the profiling data.
```shell
go tool pprof -http localhost:8080 http://localhost:8383/debug/pprof/heap
```
Please note: for this to work, you need graphviz installed on your system, for example via brew install graphviz, apt-get install graphviz, yum install graphviz, or similar.
3 -
Contributing Guidelines
Welcome to Shipwright, we are glad you want to contribute to the project!
This document contains general guidelines for submitting contributions.
Each component of Shipwright will have its own specific guidelines.
Contributing prerequisites (CLA/DCO)
The project enforces Developer Certificate of Origin (DCO).
By submitting pull requests, submitters acknowledge that they grant the Apache License v2 to the code and that they are eligible to grant this license for all commits submitted in their pull requests.
operator - an operator to install Shipwright components on Kubernetes via OLM.
Technical documentation is spread across the code repositories, and is consolidated in the website repository.
Content in website is published to shipwright.io
Creating new Issues
We recommend opening an issue for the following scenarios:
Asking for help or questions. (Use the discussion or help_wanted label)
Reporting a bug. (Use the kind/bug label)
Requesting a new feature. (Use the kind/feature label)
Use the following checklist to determine where you should create an issue:
If the issue is related to how a Build or BuildRun behaves, or related to Build strategies, create an issue in build.
If the issue is related to the command line, create an issue in cli.
If the issue is related to how the operator installs Shipwright on a cluster, create an issue in operator.
If the issue is related to the shipwright.io website, create an issue in website.
If you are not sure, create an issue in this repository, and the Shipwright maintainers will route it to the correct location.
If a feature request is sufficiently broad or significant, the community may ask you to submit a SHIP enhancement proposal.
Please refer to the SHIP guidelines to learn how to submit a SHIP proposal.
Writing Pull Requests
Contributions can be submitted by creating a pull request on GitHub.
We recommend you do the following to ensure the maintainers can collaborate on your contribution:
Fork the project into your personal GitHub account
Create a new feature branch for your contribution
Make your changes
If you make code changes, ensure tests are passing
Open a PR with a clear description, completing the pull request template if one is provided
Please reference the appropriate GitHub issue if your pull request provides a fix.
NOTE: All commits must be signed off (Developer Certificate of Origin (DCO)), so make sure you use the -s flag when you commit. See more information on signing here.
Code review process
Once your pull request is submitted, a Shipwright maintainer should be assigned to review your changes.
The code review should cover:
Ensure all related tests (unit, integration and e2e) are passing.
Ensure the code is properly documented, e.g. enough comments where needed.
Ensure the code adds the necessary test cases (unit, integration, or e2e) where needed.
Contributors are expected to respond to feedback from reviewers in a constructive manner.
Reviewers are expected to respond to new submissions in a timely fashion, with clear language if changes are requested.
Once the pull request is approved and marked “lgtm”, it will get merged.
Community Meetings Participation
Shipwright Community Meetings take place weekly on Mondays at 13:00 UTC.
In our Public Calendar you will find all weekly meetings.
Meetings are hosted in Zoom and are Public.
You can register yourself via the Public Calendar as well.
The Zoom Meeting comes with information about the Agenda to discuss.